WorldWideScience

Sample records for Bayesian tensor estimation

  1. Tensor completion for PDEs with uncertain coefficients and Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2017-03-05

In this work, we show connections between the Bayesian update and tensor completion techniques. Usually only a small, sparse vector or tensor of measurements is available, and the typical measurement is a function of the solution. The solution of a stochastic PDE is a tensor, and so is the measurement. The idea is to use completion techniques to compute all "missing" values of the measurement tensor and only then apply the Bayesian update.
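
    A minimal sketch of this "complete, then update" workflow, shown on a matrix rather than a high-dimensional tensor. The rank-2 structure, the observation pattern, and the Gaussian measurement model are illustrative assumptions, not the paper's setup:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical rank-2 "measurement matrix" with roughly 70% of entries missing.
        Y_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
        mask = rng.random(Y_true.shape) < 0.3
        Y = np.where(mask, Y_true, 0.0)

        # Step 1: completion by iterative rank-2 SVD imputation.
        X = Y.copy()
        for _ in range(200):
            u, s, vt = np.linalg.svd(X, full_matrices=False)
            X = (u[:, :2] * s[:2]) @ vt[:2]   # project onto rank-2 matrices
            X[mask] = Y[mask]                 # re-impose the observed entries

        # Step 2: conjugate Gaussian update of a scalar parameter, treating one
        # completed row as repeated noisy observations of it (purely illustrative).
        y, noise_var = X[0], 1.0
        mu0, var0 = 0.0, 100.0                # broad Gaussian prior
        var_post = 1.0 / (1.0 / var0 + y.size / noise_var)
        mu_post = var_post * (mu0 / var0 + y.sum() / noise_var)
        print(mu_post, var_post)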

  2. Bayesian regularization of diffusion tensor images

    DEFF Research Database (Denmark)

    Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif

    2007-01-01

Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along … several directions. The measured diffusion coefficients, and thereby the diffusion tensors, are subject to noise, leading to possibly flawed representations of the three-dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing …
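
    The Bayesian regularization itself is not reproduced here, but the per-voxel tensor fit it starts from can be sketched as a log-linearized least-squares problem. The b-value, gradient directions, true tensor, and noise level are all invented for the example:

        import numpy as np

        rng = np.random.default_rng(1)

        b = 1000.0                                       # s/mm^2
        g = rng.normal(size=(30, 3))
        g /= np.linalg.norm(g, axis=1, keepdims=True)    # unit gradient directions

        D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])       # prolate, fibre-like tensor
        S = np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))
        S *= np.exp(rng.normal(scale=0.02, size=S.shape))   # measurement noise

        # -log(S)/b is linear in the 6 unique tensor components.
        gx, gy, gz = g.T
        A = np.column_stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz])
        d = np.linalg.lstsq(A, -np.log(S) / b, rcond=None)[0]
        D = np.array([[d[0], d[3], d[4]],
                      [d[3], d[1], d[5]],
                      [d[4], d[5], d[2]]])
        evals, evecs = np.linalg.eigh(D)
        print("local fibre orientation:", evecs[:, -1])  # principal eigenvector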

  3. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    Science.gov (United States)

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. Existing CP algorithms require the tensor rank to be specified manually; however, determining the tensor rank remains a challenging problem, especially for the CP rank. In addition, existing approaches do not take into account the uncertainty of the latent factors and of the missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm which scales linearly with data size. Our method is a tuning-parameter-free approach which can effectively infer the underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large fraction of entries is missing. Moreover, results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
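
    A rough point-estimate caricature of the automatic rank determination idea: alternating CP updates with ARD-style per-component precisions that shrink unused components toward zero. The sizes, the fixed noise precision, and the precision update are simplifying assumptions; the actual method is fully Bayesian, places hyperpriors on everything, and handles missing entries, none of which is shown here:

        import numpy as np

        rng = np.random.default_rng(2)

        def khatri_rao(U, V):
            # Column-wise Kronecker product; rows ordered as (u, v), v fastest.
            return np.einsum('ur,vr->uvr', U, V).reshape(-1, U.shape[1])

        # Rank-2 ground truth, factorised with an over-specified rank R = 5.
        I, J, K, R = 15, 12, 10, 5
        T = np.einsum('ir,jr,kr->ijk', *(rng.normal(size=(n, 2)) for n in (I, J, K)))
        A, B, C = (rng.normal(size=(n, R)) for n in (I, J, K))

        tau = 1.0                          # noise precision, fixed for simplicity
        for _ in range(300):
            # ARD step: precisions of weak components grow, shrinking them further.
            gamma = (I + J + K) / ((A**2).sum(0) + (B**2).sum(0) + (C**2).sum(0))
            A = tau * T.reshape(I, J*K) @ khatri_rao(B, C) @ np.linalg.inv(
                tau * (B.T @ B) * (C.T @ C) + np.diag(gamma))
            B = tau * T.transpose(1, 0, 2).reshape(J, I*K) @ khatri_rao(A, C) @ np.linalg.inv(
                tau * (A.T @ A) * (C.T @ C) + np.diag(gamma))
            C = tau * T.transpose(2, 0, 1).reshape(K, I*J) @ khatri_rao(A, B) @ np.linalg.inv(
                tau * (A.T @ A) * (B.T @ B) + np.diag(gamma))

        strength = np.sqrt((A**2).sum(0) * (B**2).sum(0) * (C**2).sum(0))
        print(np.round(strength / strength.max(), 3))  # near-zero entries suggest rank 2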

  4. An improved Bayesian tensor regularization and sampling algorithm to track neuronal fiber pathways in the language circuit.

    Science.gov (United States)

    Mishra, Arabinda; Anderson, Adam W; Wu, Xi; Gore, John C; Ding, Zhaohua

    2010-08-01

The purpose of this work is to design a neuronal fiber tracking algorithm which is better suited to reconstructing fibers associated with functionally important regions in the human brain. Functional activations in the brain normally occur in gray matter regions, so the fibers bordering these regions are weakly myelinated, and conventional tractography methods perform poorly when tracing the fiber links between them. The lower fractional anisotropy in these regions makes it even more difficult to track fibers in the presence of noise. In this work, the authors focus on a stochastic approach to reconstructing these fiber pathways based on a Bayesian regularization framework. To estimate the true fiber direction (propagation vector), the a priori and conditional probability density functions are calculated in advance and are modeled as multivariate normal. The variance of the estimated tensor element vector is associated with the uncertainty due to noise and partial volume averaging (PVA). An adaptive, multiple sampling of the estimated tensor element vector, which is a function of the pre-estimated variance, overcomes the effects of noise and PVA. The algorithm has been rigorously tested on a variety of synthetic data sets. The quantitative comparison of the results to standard algorithms motivated the authors to implement it for in vivo DTI data analysis. The algorithm has been used to delineate fibers in two major language pathways (Broca's to SMA and Broca's to Wernicke's) across 12 healthy subjects. Though the mean standard deviation was marginally larger than in the conventional (Euler's) approach [P. J. Basser et al., "In vivo fiber tractography using DT-MRI data," Magn. Reson. Med. 44(4), 625-632 (2000)], the number of extracted fibers was significantly higher. The authors also compared the performance of the proposed method to Lu's method [Y. Lu et al., "Improved fiber tractography with Bayesian
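
    A toy illustration of sampling-based streamline propagation: directions are drawn around the local principal eigenvector, with a fixed spread standing in for the noise- and PVA-dependent variance the paper estimates. The direction field and all parameters are invented:

        import numpy as np

        rng = np.random.default_rng(3)

        def principal_dir(p):
            # Stand-in for the principal eigenvector of the local diffusion tensor.
            d = np.array([1.0, 0.3 * np.cos(0.2 * p[0]), 0.0])
            return d / np.linalg.norm(d)

        def track(seed, n_steps=200, step=0.5, sigma=0.05, n_lines=20):
            lines = []
            for _ in range(n_lines):
                p = np.array(seed, dtype=float)
                line = [p.copy()]
                for _ in range(n_steps):
                    d = principal_dir(p) + rng.normal(scale=sigma, size=3)
                    p = p + step * d / np.linalg.norm(d)   # sampled propagation
                    line.append(p.copy())
                lines.append(np.array(line))
            return lines   # the spread of the bundle reflects direction uncertainty

        bundle = track(seed=(0.0, 0.0, 0.0))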

  5. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    2015-01-01

From Crofton's formula for Minkowski tensors we derive stereological estimators of translation-invariant surface tensors of convex bodies in n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design-based setting we suggest three types of estimators … These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model-based setting …

  6. Robust estimation of adaptive tensors of curvature by tensor voting.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been validated in detailed quantitative experiments, performing better on a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  7. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    Science.gov (United States)

    Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John

    2017-04-01

Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or a low signal-to-noise ratio are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) the CMT inversion is fully automated and no user interaction is required, although the details of the process can be visually inspected later in many automatically plotted figures; (ii) the automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion; (iii) a data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies; (iv) a Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function; (v) a space-time grid search, effectively combined with a least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of the data. The software package, programmed in Python, has been designed to be as versatile as possible in
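
    The core of aspect (v), a linear least-squares solve for the six MT components nested inside a grid search, can be sketched as follows. The synthetic "Green's functions", the noise level, and the single grid dimension (a time shift) are placeholders; the real code searches in space and time and weights the data by the noise covariance matrix:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.1])   # six MT components

        def greens(shift):
            # Placeholder elementary waveforms; real ones come from a velocity model.
            t = np.arange(n) - shift
            return np.column_stack([np.sin(0.01 * (k + 1) * t) for k in range(6)])

        d = greens(4) @ m_true + rng.normal(scale=0.05, size=n)  # observed waveform

        best = (np.inf, None, None)
        for shift in range(11):                    # grid search over centroid time
            G = greens(shift)
            m, res, *_ = np.linalg.lstsq(G, d, rcond=None)   # linear LSQ for the MT
            if res.size and res[0] < best[0]:
                best = (res[0], shift, m)
        misfit, best_shift, m_est = best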

  8. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

Background: The maximum likelihood estimator of D', a standard measure of linkage disequilibrium, is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results: This paper proposes a Bayesian estimator of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distance between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion: Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view of the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome-wide association studies.
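
    The posterior of D' is easy to explore by plain Monte Carlo under a flat Dirichlet prior on the four haplotype frequencies (the paper instead uses a distance-dependent prior and MCMC; the counts are invented):

        import numpy as np

        rng = np.random.default_rng(5)

        counts = np.array([40, 5, 3, 2])    # hypothetical AB, Ab, aB, ab counts
        prior = np.ones(4)                  # flat Dirichlet prior
        samples = rng.dirichlet(prior + counts, size=20000)   # conjugate posterior

        def d_prime(p):
            pA, pB = p[0] + p[1], p[0] + p[2]
            D = p[0] - pA * pB
            if D > 0:
                Dmax = min(pA * (1 - pB), (1 - pA) * pB)
            else:
                Dmax = min(pA * pB, (1 - pA) * (1 - pB))
            return D / Dmax

        post = np.array([d_prime(p) for p in samples])
        print("posterior mean D':", post.mean())
        print("95% credible interval:", np.quantile(post, [0.025, 0.975]))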

  9. Tensor estimation for double-pulsed diffusional kurtosis imaging.

    Science.gov (United States)

    Shaw, Calvin B; Hui, Edward S; Helpern, Joseph A; Jensen, Jens H

    2017-07-01

Double-pulsed diffusional kurtosis imaging (DP-DKI) represents the double diffusion encoding (DDE) MRI signal in terms of six-dimensional (6D) diffusion and kurtosis tensors. Here a method for estimating these tensors from experimental data is described. A standard numerical algorithm for tensor estimation from conventional (i.e., single diffusion encoding) diffusional kurtosis imaging (DKI) data is generalized to DP-DKI. This algorithm is based on a weighted least squares (WLS) fit of the signal model to the data, combined with constraints designed to minimize unphysical parameter estimates. The numerical algorithm then takes the form of a quadratic programming problem. The principal change required to adapt the conventional DKI fitting algorithm to DP-DKI is replacing the three-dimensional diffusion and kurtosis tensors with the 6D tensors needed for DP-DKI. In this way, the 6D diffusion and kurtosis tensors for DP-DKI can be conveniently estimated from DDE data by using constrained WLS, providing a practical means for condensing DDE measurements into well-defined mathematical constructs that may be useful for interpreting and applying DDE MRI. Brain data from healthy volunteers are used to demonstrate the DP-DKI tensor estimation algorithm. In particular, representative parametric maps of selected tensor-derived rotational invariants are presented. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    Science.gov (United States)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

  11. Bayesian estimation and tracking: a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noise. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  12. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development … of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation … analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed …
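
    A compact sketch of the classic sparse Bayesian learning iteration for a real-valued linear model (the thesis treats complex signals and hierarchical prior representations; the dictionary, sparsity pattern, and noise level here are invented):

        import numpy as np

        rng = np.random.default_rng(6)

        N, M = 60, 100                       # overcomplete dictionary: M > N
        Phi = rng.normal(size=(N, M)) / np.sqrt(N)
        w_true = np.zeros(M)
        w_true[[7, 23, 71]] = [1.0, -0.8, 0.6]              # sparse channel taps
        y = Phi @ w_true + rng.normal(scale=0.01, size=N)

        alpha = np.ones(M)                   # per-coefficient precisions (ARD prior)
        tau = 1.0 / 0.01**2                  # noise precision, assumed known here
        for _ in range(100):
            Sigma = np.linalg.inv(np.diag(alpha) + tau * Phi.T @ Phi)
            mu = tau * Sigma @ Phi.T @ y
            alpha = 1.0 / (mu**2 + np.diag(Sigma))   # EM update; most taps vanish
        print("recovered taps:", np.flatnonzero(np.abs(mu) > 0.1))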

  13. Uncertainty Quantification in Earthquake Source Characterization with Probabilistic Centroid Moment Tensor Inversion

    Science.gov (United States)

    Dettmer, J.; Benavente, R. F.; Cummins, P. R.

    2017-12-01

This work considers probabilistic, non-linear centroid moment tensor inversion of data from earthquakes at teleseismic distances. The moment tensor is treated as deviatoric, and the centroid location is parametrized with fully unknown latitude, longitude, depth and time delay. The inverse problem is treated as fully non-linear in a Bayesian framework, and the posterior density is estimated with interacting Markov chain Monte Carlo methods which are implemented in parallel and allow for chain interaction. The source mechanism and location, including uncertainties, are fully described by the posterior probability density, and complex trade-offs between various metrics are studied. These include the percentage of double-couple component as well as fault orientation, and the probabilistic results are compared to results from earthquake catalogs. Additional focus is on the analysis of complex events which are commonly not well described by a single point source. These events are studied by jointly inverting for multiple centroid moment tensor solutions. The optimal number of sources is estimated by the Bayesian information criterion to ensure parsimonious solutions. [Supported by NSERC.]

  14. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

… representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error …

  15. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

Vol. 285, No. 1 (2014), pp. 100-111. ISSN 0020-0255. R&D Projects: GA ČR GA13-13502S. Institutional support: RVO:67985556. Keywords: approximate parameter estimation; Bayesian recursive estimation; Kullback-Leibler divergence; forgetting. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 4.038, year: 2014. http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
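
    A stylized instance of Bayesian recursive estimation with forgetting, here for a drifting Bernoulli parameter with a conjugate Beta posterior (the paper treats general parametric models and derives approximations via the Kullback-Leibler divergence; everything below is illustrative):

        import numpy as np

        rng = np.random.default_rng(7)

        a, b, lam = 1.0, 1.0, 0.98     # Beta prior and exponential forgetting factor
        estimates = []
        for t in range(2000):
            theta = 0.2 if t < 1000 else 0.7     # abrupt change to be tracked
            x = float(rng.random() < theta)
            a, b = lam * a + x, lam * b + (1.0 - x)   # forget, then Bayes update
            estimates.append(a / (a + b))
        print(estimates[999], estimates[-1])     # adapts to the new parameter value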

  16. Bayesian estimation of the discrete coefficient of determination.

    Science.gov (United States)

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  17. Joint eigenvector estimation from mutually anisotropic tensors improves susceptibility tensor imaging of the brain, kidney, and heart.

    Science.gov (United States)

    Dibb, Russell; Liu, Chunlei

    2017-06-01

To develop a susceptibility-based MRI technique for probing the microstructure and fiber architecture of magnetically anisotropic tissues, such as central nervous system white matter, renal tubules, and myocardial fibers, in three dimensions using susceptibility tensor imaging (STI) tools. STI can probe tissue microstructure but is limited by reconstruction artifacts caused by noise and by absent phase information outside the tissue. STI accuracy may be improved by estimating a joint eigenvector from mutually anisotropic susceptibility and relaxation tensors. Gradient-recalled echo image data were simulated using a numerical phantom and acquired from the ex vivo mouse brain, kidney, and heart. Susceptibility tensor data were reconstructed using STI, regularized STI, and the proposed algorithm of mutually anisotropic and joint eigenvector STI (MAJESTI). Fiber map and tractography results from each technique were compared with diffusion tensor data. MAJESTI reduced the estimated susceptibility tensor orientation error by 30% in the phantom, 36% in brain white matter, 40% in the inner medulla of the kidney, and 45% in myocardium. This improved the continuity and consistency of susceptibility-based fiber tractography in each tissue. MAJESTI estimation of the susceptibility tensors yields lower orientation errors for susceptibility-based fiber mapping and tractography in the intact brain, kidney, and heart. Magn Reson Med 77:2331-2346, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  18. BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES

    Energy Technology Data Exchange (ETDEWEB)

    Iliadis, C.; Anderson, K. S. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255 (United States); Coc, A. [Centre de Sciences Nucléaires et de Sciences de la Matière (CSNSM), CNRS/IN2P3, Univ. Paris-Sud, Université Paris–Saclay, Bâtiment 104, F-91405 Orsay Campus (France); Timmes, F. X.; Starrfield, S., E-mail: iliadis@unc.edu [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1504 (United States)

    2016-11-01

The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics in particular has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.

  19. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.

  20. Gradients estimation from random points with volumetric tensor in turbulence

    Science.gov (United States)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

We present a method for estimating fully resolved or coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered a low-pass-filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as the velocity vector in incompressible flows, especially when the number of points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points; increasing the number of points beyond 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to that obtained by the finite difference. We also show that the velocity gradient estimated with the present method captures turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
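
    The basic construction, a least-squares gradient obtained by inverting the 3 × 3 volumetric tensor of the point distribution, is easy to sketch (the test field, point spread, and sample count are arbitrary; the eigenvalue-based cutoff analysis and the solenoidal correction are not shown):

        import numpy as np

        rng = np.random.default_rng(8)

        f = lambda x: np.sin(x[..., 0]) + x[..., 1]**2 + 0.5 * x[..., 2]
        x0 = np.array([0.3, -0.2, 0.1])
        dx = rng.normal(scale=0.05, size=(8, 3))    # 8 randomly placed neighbours

        V = dx.T @ dx / len(dx)                     # volumetric tensor (3 x 3)
        rhs = dx.T @ (f(x0 + dx) - f(x0)) / len(dx)
        grad = np.linalg.solve(V, rhs)              # least-squares gradient
        print(grad, "vs exact", [np.cos(0.3), -0.4, 0.5])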

  21. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as it affects estimates of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and I investigate its properties through numerical simulations. The simulations show that the maximum eigenvector of the tensor points to the high-density causative body and that its dip closely follows the dip of a normal fault; likewise, the minimum eigenvector points to the low-density causative body and its dip closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor estimates the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
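
    The geometric fact that the maximum eigenvector points toward a high-density body can be verified directly for a point mass, used here as a stand-in for a fault-like density boundary (all geometry is invented):

        import numpy as np

        def grad_tensor(obs, src, m=1.0, G=6.674e-11):
            # Gravity gradient tensor of a point mass in the x-z plane.
            r = np.asarray(src, float) - np.asarray(obs, float)
            d = np.linalg.norm(r)
            return G * m * (3.0 * np.outer(r, r) - d**2 * np.eye(2)) / d**5

        src = [0.0, -300.0]                    # dense body 300 m below the profile
        for x in (-200.0, 0.0, 200.0):
            w, v = np.linalg.eigh(grad_tensor([x, 0.0], src))
            vmax = v[:, np.argmax(w)]          # maximum eigenvector (sign-ambiguous)
            print(x, np.degrees(np.arctan2(vmax[1], vmax[0])))   # points at the body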

  22. MCMC for parameters estimation by Bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation, in order to accelerate convergence.
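
    A minimal random-walk Metropolis-Hastings sampler for a one-parameter dynamic model; the decay model, noise level, and proposal width are invented, and calibrating that width is exactly the tuning issue mentioned above:

        import numpy as np

        rng = np.random.default_rng(9)

        t = np.linspace(0.0, 5.0, 40)
        y = np.exp(-0.8 * t) + rng.normal(scale=0.05, size=t.size)  # synthetic data

        def log_post(k):
            if k <= 0.0:
                return -np.inf                       # flat prior on k > 0
            return -0.5 * np.sum((y - np.exp(-k * t))**2) / 0.05**2

        k, lp, chain = 1.0, log_post(1.0), []
        for _ in range(20000):
            k_new = k + rng.normal(scale=0.05)       # random-walk proposal
            lp_new = log_post(k_new)
            if np.log(rng.random()) < lp_new - lp:   # Metropolis acceptance rule
                k, lp = k_new, lp_new
            chain.append(k)
        print("posterior mean:", np.mean(chain[5000:]))   # discard burn-in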

  23. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistics for testing hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  24. Estimation of the magnetic field gradient tensor using the Swarm constellation

    DEFF Research Database (Denmark)

    Kotsiaros, Stavros; Finlay, Chris; Olsen, Nils

    2014-01-01

For the first time, part of the magnetic field gradient tensor is estimated in space by the Swarm mission. We investigate the possibility of a more complete estimation of the gradient tensor exploiting the Swarm constellation. The East-West gradients can be approximated by observations from … deviations compared to conventional vector observations at almost all latitudes. Analytical and numerical analysis of the spectral properties of the gradient tensor shows that specific combinations of the East-West and North-South gradients have almost identical signal content to the radial gradient …

  25. Flood quantile estimation at ungauged sites by Bayesian networks

    Science.gov (United States)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of the magnitude of floods, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals of the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which have been widely and successfully applied in many scientific fields, like medicine and informatics, but their application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows the different sources of estimation uncertainty to be taken into account, as they give a probability distribution as the result. A homogeneous region in the Tagus Basin was selected as a case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. As observational data are reduced, a

  26. Comparison of two global digital algorithms for Minkowski tensor estimation

    DEFF Research Database (Denmark)

The geometry of real-world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties … are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions …

  27. Bayesian estimation of isotopic age differences

    International Nuclear Information System (INIS)

    Curl, R.L.

    1988-01-01

Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated, and if large, they can lead to no significant age difference being found by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of the age difference which includes prior knowledge of the age order. Age measurement errors are assumed to be log-normal, and a noninformative but constrained bivariate prior for two true ages in known order is adopted. The true-age ratio is distributed as a truncated log-normal variate; its expected value gives an age-ratio estimate, and its variance provides credible intervals. The Bayesian estimates of the ages are different and in the correct order even if the measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2; the Bayesian estimate of the age difference is then 22.7 ka, with a 75% credible interval of [4.4, 43.7] ka.
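
    The quoted example can be reproduced by simple Monte Carlo: draw the two true ages from log-normal posteriors centred on the measurements, keep only draws in the known order, and summarize the difference (a sampling shortcut; the paper derives the truncated log-normal result analytically):

        import numpy as np

        rng = np.random.default_rng(10)

        age1, age2, cv = 100.0, 100.0, 0.2     # measured ages (ka) and their CV
        sigma = np.sqrt(np.log(1.0 + cv**2))   # log-normal shape parameter

        t1 = age1 * rng.lognormal(0.0, sigma, size=400000)
        t2 = age2 * rng.lognormal(0.0, sigma, size=400000)
        diff = (t2 - t1)[t1 < t2]              # impose the known age order
        print("mean age difference:", diff.mean())
        print("75% credible interval:", np.quantile(diff, [0.125, 0.875]))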

  28. Bayesian parameter estimation in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Siu, Nathan O.; Kelly, Dana L.

    1998-01-01

    Bayesian statistical methods are widely used in probabilistic risk assessment (PRA) because of their ability to provide useful estimates of model parameters when data are sparse and because the subjective probability framework, from which these methods are derived, is a natural framework to address the decision problems motivating PRA. This paper presents a tutorial on Bayesian parameter estimation especially relevant to PRA. It summarizes the philosophy behind these methods, approaches for constructing likelihood functions and prior distributions, some simple but realistic examples, and a variety of cautions and lessons regarding practical applications. References are also provided for more in-depth coverage of various topics
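
    The kind of estimate such a tutorial covers can be illustrated by the standard conjugate gamma-Poisson update for a failure rate under sparse data (all numbers are hypothetical):

        from scipy import stats

        a0, b0 = 0.5, 1.0e5             # gamma prior: shape and rate (diffuse choice)
        failures, hours = 2, 3.0e4      # sparse plant-specific operating experience

        a, b = a0 + failures, b0 + hours           # conjugate posterior update
        print("posterior mean rate:", a / b)
        print("90% credible interval:",
              stats.gamma.ppf([0.05, 0.95], a, scale=1.0 / b))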

  29. Bayesian estimation of dose rate effectiveness

    International Nuclear Information System (INIS)

    Arnish, J.J.; Groer, P.G.

    2000-01-01

A Bayesian statistical method was used to quantify the effectiveness of high-dose-rate ¹³⁷Cs gamma radiation at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice. The Bayesian approach considers both the temporal and dose dependence of radiation carcinogenesis and total mortality. This paper provides the first direct estimation of dose rate effectiveness using Bayesian statistics. This statistical approach provides a quantitative description of the uncertainty of the factor characterising the dose rate in terms of a probability density function. The results show that a fixed dose of ¹³⁷Cs gamma radiation delivered at a high dose rate is more effective at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice than the same dose delivered at a low dose rate. (author)

  30. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven

    2005-01-01

There has recently been increased interest in the use of Markov chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods … of genotyping errors. A similar advantage of the Bayesian method was not observed for missing data. We also re-analyse a recently published set of data from the eggplant and show that the use of the MCMC-based method leads to smaller estimates of genetic distances …

  31. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.

  32. Parametric Bayesian Estimation of Differential Entropy and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Maya Gupta

    2010-04-01

Given iid samples drawn from a distribution with known parametric form, we propose the minimization of expected Bregman divergence to form Bayesian estimates of differential entropy and relative entropy, and derive such estimators for the uniform, Gaussian, Wishart, and inverse Wishart distributions. Additionally, formulas are given for a log gamma Bregman divergence and the differential entropy and relative entropy for the Wishart and inverse Wishart. The results, as always with Bayesian estimates, depend on the accuracy of the prior parameters, but example simulations show that the performance can be substantially improved compared to maximum likelihood or state-of-the-art nonparametric estimators.

  33. Bayesian estimation inherent in a Mexican-hat-type neural network

    Science.gov (United States)

    Takiyama, Ken

    2016-05-01

    Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.

  34. Basics of Bayesian reliability estimation from attribute test data

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waller, R.A.

    1975-10-01

    The basic notions of Bayesian reliability estimation from attribute lifetest data are presented in an introductory and expository manner. Both Bayesian point and interval estimates of the probability of surviving the lifetest, the reliability, are discussed. The necessary formulas are simply stated, and examples are given to illustrate their use. In particular, a binomial model in conjunction with a beta prior model is considered. Particular attention is given to the procedure for selecting an appropriate prior model in practice. Empirical Bayes point and interval estimates of reliability are discussed and examples are given. 7 figures, 2 tables
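
    The binomial-plus-beta-prior machinery described here amounts to one line of conjugate algebra; a sketch with invented prior parameters and test counts:

        from scipy import stats

        a0, b0 = 4.0, 1.0          # beta prior on the reliability (assumed beliefs)
        n, s = 20, 18              # units on test, survivors

        a, b = a0 + s, b0 + (n - s)                # conjugate beta posterior
        print("posterior mean reliability:", a / (a + b))
        print("90% lower credible bound:", stats.beta.ppf(0.10, a, b))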

  35. Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?

    Science.gov (United States)

    Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin

    2014-08-01

Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of disease probability estimates. In this study our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for inaccuracy of disease probability estimates and to explore ways of improving accuracy.
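
    The literature-derived comparison values come from Bayes' rule for a dichotomous test result; a self-contained helper (the vignette numbers are invented):

        def post_test(prior, sens, spec, positive=True):
            """Post-test probability of disease given a test result."""
            if positive:
                num = sens * prior
                return num / (num + (1.0 - spec) * (1.0 - prior))
            num = (1.0 - sens) * prior
            return num / (num + spec * (1.0 - prior))

        # 10% pre-test probability, 90% sensitivity, 80% specificity:
        print(post_test(0.10, 0.90, 0.80))   # a positive result gives ~0.33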

  36. Bayesian inference and interpretation of centroid moment tensors of the 2016 Kumamoto earthquake sequence, Kyushu, Japan

    Science.gov (United States)

    Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František

    2017-09-01

On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow MJMA 7.3 earthquake. The series of foreshocks started with an MJMA 6.5 foreshock 28 h before the mainshock. They originated in the Hinagu fault zone, which intersects the mainshock's Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with MJMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.

  37. Tensor completion for estimating missing values in visual data.

    Science.gov (United States)

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between FaLRTC an
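
    A simplified, SiLRTC-flavoured iteration, singular value soft-thresholding on each unfolding followed by re-imposing the observed entries, conveys the idea; the tensor, observation rate, threshold, and iteration count are invented, and the published algorithms differ in details and guarantees:

        import numpy as np

        rng = np.random.default_rng(11)

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def fold(M, mode, shape):
            rest = [s for i, s in enumerate(shape) if i != mode]
            return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

        def shrink(M, tau):
            # Singular value soft-thresholding: proximal map of the trace norm.
            u, s, vt = np.linalg.svd(M, full_matrices=False)
            return (u * np.maximum(s - tau, 0.0)) @ vt

        shape = (20, 20, 20)     # low-rank test tensor, 60% of entries missing
        T = np.einsum('ir,jr,kr->ijk', *(rng.normal(size=(n, 3)) for n in shape))
        mask = rng.random(shape) < 0.4
        X = np.where(mask, T, 0.0)

        for _ in range(100):     # block-coordinate style iteration
            X = np.mean([fold(shrink(unfold(X, m), 1.0), m, shape)
                         for m in range(3)], axis=0)
            X[mask] = T[mask]    # keep observed entries fixed

        err = np.linalg.norm((X - T)[~mask]) / np.linalg.norm(T[~mask])
        print("relative error on missing entries:", err)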

  38. Bayesian estimation methods in metrology

    International Nuclear Information System (INIS)

    Cox, M.G.; Forbes, A.B.; Harris, P.M.

    2004-01-01

In metrology, the science of measurement, a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons, in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods.

  39. Bayesian Estimation of Wave Spectra – Proper Formulation of ABIC

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

It is possible to estimate on-site wave spectra using measured ship responses applied to Bayesian modelling, based on two pieces of prior information: the wave spectrum must be smooth both direction-wise and frequency-wise. This paper introduces two hyperparameters into the Bayesian modelling and, hence, a pr

  40. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...

  41. [Evaluation of estimation of prevalence ratio using Bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of the PR, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for the duration of the caregivers' education; model 3: adjusting for the distance between village and township and child age in months, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less convergence failure, and it has advantages in application over the conventional log-binomial regression model.

  7. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimation of hydrogeological and geological properties in the siting process for a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric, which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements of additivity and normality imposed by geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way of incorporating professional judgments through the use of Bayesian statistics, which allows prior estimates to be updated to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed to implement the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of applying the model to estimate the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden.

  8. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    Science.gov (United States)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of cancer patients after receiving treatment, using censored data and Bayesian estimation under the Linex loss function for a survival model which is assumed to follow an exponential distribution. Using a gamma distribution as the prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by means of the Linex approximation. Having obtained λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation to find the better method for this observation by identifying the smaller MSE. The results show that the MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than MLE.
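
    For an exponential rate λ with a Gamma(α, β) posterior (shape α, rate β), the Linex-loss Bayes estimate has a closed form via the gamma moment generating function; a minimal sketch, with hypothetical posterior values:

        import numpy as np

        def linex_bayes_estimate(alpha, beta, a):
            """Bayes estimate of an exponential rate lambda under Linex loss
            L(d, lam) = exp(a*(d - lam)) - a*(d - lam) - 1.
            The optimum is d* = -(1/a) * log E[exp(-a*lam)]; for a
            Gamma(alpha, rate=beta) posterior this is (alpha/a) * log(1 + a/beta)."""
            return (alpha / a) * np.log(1.0 + a / beta)

        # hypothetical posterior from censored data:
        # alpha = prior shape + number of observed deaths,
        # beta  = prior rate + total time at risk
        print(linex_bayes_estimate(alpha=12.0, beta=340.0, a=0.5))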

  9. Bayesian estimation of seasonal course of canopy leaf area index from hyperspectral satellite data

    Science.gov (United States)

    Varvia, Petri; Rautiainen, Miina; Seppänen, Aku

    2018-03-01

    In this paper, Bayesian inversion of a physically based forest reflectance model is investigated for estimating boreal forest canopy leaf area index (LAI) from EO-1 Hyperion hyperspectral data. The data consist of multiple forest stands with different species compositions and structures, imaged in three phases of the growing season. The Bayesian estimates of canopy LAI are compared to reference estimates based on a spectral vegetation index. The forest reflectance model also contains other unknown variables in addition to LAI, for example leaf single scattering albedo and understory reflectance; in the Bayesian approach, these variables are estimated simultaneously with LAI, and the feasibility and seasonal variation of their estimates are also examined. Credible intervals for the estimates are calculated and evaluated. The results show that the Bayesian inversion approach performs significantly better than a comparable spectral vegetation index regression.

  10. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    Full Text Available In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits does not affect system performance because information is encoded in a bit stream.

  11. Bayesian Estimation and Inference Using Stochastic Electronics.

    Science.gov (United States)

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits does not affect system performance because information is encoded in a bit stream.
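
    The "Bayesian recursive equation" solved by the tracker is the standard two-step discrete filter: predict with the transition model, then update with the observation likelihood. A minimal software sketch, with all names and shapes hypothetical:

        import numpy as np

        def bayes_filter_step(belief, transition, likelihood):
            """One step of the discrete Bayesian recursion for HMM tracking.
            belief     -- current posterior over target positions (sums to 1)
            transition -- transition[i, j] = p(next position i | current position j)
            likelihood -- p(observation | position), one entry per position"""
            predicted = transition @ belief        # predict: apply the transition model
            posterior = likelihood * predicted     # update: weight by the observation model
            return posterior / posterior.sum()     # normalize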

  12. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940 < n_s < 1.000, where n_s is specified at a pivot scale of 0.015 Mpc^-1. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.

  13. Evidence Estimation for Bayesian Partially Observed MRFs

    NARCIS (Netherlands)

    Chen, Y.; Welling, M.

    2013-01-01

    Bayesian estimation in Markov random fields is very hard due to the intractability of the partition function. The introduction of hidden units makes the situation even worse due to the presence of potentially very many modes in the posterior distribution. For the first time we propose a

  14. Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.

    Directory of Open Access Journals (Sweden)

    Elise Payzan-LeNestour

    Full Text Available Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.

  15. Collective animal behavior from Bayesian estimation and probability matching.

    Directory of Open Access Journals (Sweden)

    Alfonso Pérez-Escudero

    2011-11-01

    Full Text Available Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on obtaining first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probability matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model yields very simple rules of interaction in animal collectives that depend only on two types of reliability parameters: one that each animal assigns to the other animals, and another given by the quality of the non-social information. We test our model by deriving theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.

  16. Bayesian approach to magnetotelluric tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Červ, Václav; Pek, Josef; Menvielle, M.

    2010-01-01

    Roč. 53, č. 2 (2010), s. 21-32 ISSN 1593-5213 R&D Projects: GA AV ČR IAA200120701; GA ČR GA205/04/0746; GA ČR GA205/07/0292 Institutional research plan: CEZ:AV0Z30120515 Keywords : galvanic distortion * telluric distortion * impedance tensor * basic procedure * inversion * noise Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.336, year: 2010

  17. Normal estimation for pointcloud using GPU based sparse tensor voting

    OpenAIRE

    Liu , Ming; Pomerleau , François; Colas , Francis; Siegwart , Roland

    2012-01-01

    International audience; Normal estimation is the basis for most applications using point clouds, such as segmentation. However, it is still a challenging problem regarding computational complexity and observation noise. In this paper, we propose a normal estimation method for point clouds using results from tensor voting. Compared with other approaches, we show that it has a smaller estimation error. Moreover, by varying the voting kernel size, we find it is a flexible approach for structure extraction...

  18. Estimation of the order of an autoregressive time series: a Bayesian approach

    International Nuclear Information System (INIS)

    Robb, L.J.

    1980-01-01

    Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the model can be estimated by the least-squares, maximum-likelihood, or Yule-Walker methods. The basic problem is estimating the order of the model. Here, the problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance, and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior in most cases.
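
    As a rough illustration of placing the model order in a posterior, the sketch below uses least-squares AR fits with a BIC-style evidence approximation (a crude stand-in for the conjugate analysis described above; all names are hypothetical):

        import numpy as np

        def ar_order_posterior(x, max_order):
            """Approximate posterior over AR order under a uniform order prior."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            log_ev = []
            for p in range(1, max_order + 1):
                X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
                y = x[p:]
                beta = np.linalg.lstsq(X, y, rcond=None)[0]
                rss = np.sum((y - X @ beta) ** 2)
                m = len(y)
                # BIC-style log evidence: fit term minus complexity penalty
                log_ev.append(-0.5 * m * np.log(rss / m) - 0.5 * p * np.log(m))
            w = np.exp(np.array(log_ev) - max(log_ev))
            return w / w.sum()  # posterior probability of orders 1..max_order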

  19. Bayesian estimation in homodyne interferometry

    International Nuclear Information System (INIS)

    Olivares, Stefano; Paris, Matteo G A

    2009-01-01

    We address phase-shift estimation by means of a squeezed vacuum probe and homodyne detection. We analyse the Bayesian estimator, which is known to asymptotically saturate the classical Cramer-Rao bound on the variance, and discuss convergence by looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound on precision imposed by the quantum Cramer-Rao theorem. The performance of our two-step methods is investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.

  20. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data.

    Science.gov (United States)

    Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo

    2016-01-01

    On urban arterials, travel time estimation is challenging, especially when various data sources are involved. In particular, fusing loop detector data and probe vehicle data to estimate travel time is troublesome because the data are uncertain, imprecise, and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are first estimated separately from loop detector data and from probe vehicle data, and Bayesian fusion is then applied to combine the two estimates. Next, iterative Bayesian estimation is proposed to improve the Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method, and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods.
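
    A hedged sketch of the core fusion step, treating the two pre-estimates as independent Gaussians so that the fused mean is precision-weighted (values hypothetical):

        def fuse_gaussian(mu_loop, var_loop, mu_probe, var_probe):
            """Bayesian fusion of two independent Gaussian travel-time estimates."""
            w1, w2 = 1.0 / var_loop, 1.0 / var_probe
            mu = (w1 * mu_loop + w2 * mu_probe) / (w1 + w2)
            return mu, 1.0 / (w1 + w2)  # fused mean and (smaller) fused variance

        # e.g. loop detectors say 62 s +/- 10 s, probe vehicles say 55 s +/- 6 s
        print(fuse_gaussian(62.0, 10.0**2, 55.0, 6.0**2))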

  1. Diffusion tensor image registration using hybrid connectivity and tensor features.

    Science.gov (United States)

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-07-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. Copyright © 2013 Wiley Periodicals, Inc.

  2. Bayesian estimation of mixtures with dynamic transitions and known component parameters

    Czech Academy of Sciences Publication Activity Database

    Nagy, I.; Suzdaleva, Evgenia; Kárný, Miroslav

    2011-01-01

    Roč. 47, č. 4 (2011), s. 572-594 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA TA ČR TA01030123; GA ČR GA102/08/0567 Grant - others:Skoda Auto(CZ) ENS/2009/UTIA Institutional research plan: CEZ:AV0Z10750506 Keywords : mixture model * Bayesian estimation * approximation * clustering * classification Subject RIV: BC - Control Systems Theory Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/AS/nagy-bayesian estimation of mixtures with dynamic transitions and known component parameters.pdf

  3. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or with the compound pendulum technique. The deviation angles between the experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of the simulations to changes in the magnitude of the principal moments of inertia within ±10% and to changes in the orientation of the principal axes of inertia within ±10° (of the geometry-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the geometrically estimated inertia tensor, and between 11.7° and 15.2° for the compound pendulum values. Errors of up to 10% in the magnitude of the principal moments of inertia yielded root mean squared deviation angles between 3.2° and 6.6°, and between 5.5° and 7.9° when combined with errors of 10° in the orientation of the principal axes of inertia. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. The orientation of the principal axes of inertia should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Bayesian estimation of Weibull distribution parameters

    International Nuclear Information System (INIS)

    Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

    1994-11-01

    In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method to real data from nuclear power plant operation feedback analysis is presented. (authors). 8 refs., 2 figs., 2 tabs

  5. Default Bayesian Estimation of the Fundamental Frequency

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Joint fundamental frequency and model order estimation is an important problem in several applications. In this paper, a default estimation algorithm based on a minimum of prior information is presented. The algorithm is developed in a Bayesian framework, and it can be applied to both real.... Moreover, several approximations of the posterior distributions on the fundamental frequency and the model order are derived, and one of the state-of-the-art joint fundamental frequency and model order estimators is demonstrated to be a special case of one of these approximations. The performance...

  6. Bayesian phylogenetic estimation of fossil ages.

    Science.gov (United States)

    Drummond, Alexei J; Stadler, Tanja

    2016-07-19

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth-death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the 'morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses.This article is part of the themed issue 'Dating species divergences using

  7. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the underlying assumptions required in obtaining and using prior probability estimates. Two different approaches are suggested for determining the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required by other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.

  8. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
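
    For concreteness, a sketch of the full statistical likelihood described above, in which simulation errors follow an AR(1) process (the symbols are assumptions, not the paper's code):

        import numpy as np

        def ar1_error_loglik(errors, rho, sigma):
            """Log-likelihood of simulation errors e_t = rho * e_{t-1} + eps_t,
            eps_t ~ N(0, sigma^2), conditional on the first error; setting
            rho = 0 recovers the simple (non-autoregressive) error model."""
            e = np.asarray(errors, dtype=float)
            innov = e[1:] - rho * e[:-1]
            n = innov.size
            return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
                    - 0.5 * np.sum(innov**2) / sigma**2)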

  9. Invited commentary: Lost in estimation--searching for alternatives to Markov chains to fit complex Bayesian models.

    Science.gov (United States)

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  10. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    Science.gov (United States)

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

    Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment of whether two tensors are mandatory to describe the data. The method is compared to maximum likelihood estimation (MLE) of the dual tensor model and to FSL's ball-and-stick approach. Results: Monte Carlo experiments demonstrated that JARD's volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave comparable outcomes in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions: The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166

  11. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in the parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. To solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge into the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows both uncertainty and imprecision to be modeled distinguishably, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert

  12. Direct diffusion tensor estimation using a model-based method with spatial and parametric constraints.

    Science.gov (United States)

    Zhu, Yanjie; Peng, Xi; Wu, Yin; Wu, Ed X; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2017-02-01

    To develop a new model-based method with spatial and parametric constraints (MB-SPC) aimed at accelerating diffusion tensor imaging (DTI) by directly estimating the diffusion tensor from highly undersampled k-space data. The MB-SPC method effectively incorporates prior information on the joint sparsity of the different diffusion-weighted images using an L1-L2 norm and on the smoothness of the diffusion tensor using a total variation seminorm. The undersampled k-space datasets were obtained from fully sampled DTI datasets of a simulated phantom and an ex-vivo experimental rat heart, with acceleration factors ranging from 2 to 4. The diffusion tensor was directly reconstructed by solving a minimization problem with a nonlinear conjugate gradient descent algorithm. The reconstruction performance was quantitatively assessed using the normalized root mean square error (nRMSE) of the DTI indices. The MB-SPC method achieves acceptable DTI measures at acceleration factors up to 4. Experimental results demonstrate that the proposed method can estimate the diffusion tensor more accurately than most existing methods operating at higher net acceleration factors, and can significantly reduce artifacts, particularly at higher acceleration factors or lower SNRs. This method can easily be adapted to MR relaxometry parameter mapping and is thus useful in the characterization of biological tissue such as nerves, muscle, and heart tissue. © 2016 American Association of Physicists in Medicine.

  13. Multiscale Bayesian neural networks for soil water content estimation

    Science.gov (United States)

    Jana, Raghavendra B.; Mohanty, Binayak P.; Springer, Everett P.

    2008-08-01

    Artificial neural networks (ANNs) have been used for some time to estimate soil hydraulic parameters from other available or more easily measurable soil properties. However, most such uses of ANNs as pedotransfer functions (PTFs) have been at matched spatial scales (1:1) of inputs and outputs, which assumes that the outputs are only required at the same scale as the input data. Unfortunately, this is rarely true: different hydrologic, hydroclimatic, and contaminant transport models require soil hydraulic parameter data at different spatial scales, depending upon their grid sizes. While conventional (deterministic) ANNs have traditionally been used in these studies, Bayesian training of ANNs is a more recent development. In this paper, we develop a Bayesian framework to derive the soil water retention function, including its uncertainty, at the point or local scale using PTFs trained with coarser-scale Soil Survey Geographic (SSURGO)-based soil data. The approach uses an ANN trained with Bayesian techniques as a PTF tool, with training and validation data collected across spatial extents (scales) in two different regions of the United States: the Las Cruces Trench site in the Rio Grande basin of New Mexico, and the Southern Great Plains 1997 (SGP97) hydrology experimental region in Oklahoma. Each region-specific Bayesian ANN is trained using soil texture and bulk density data from the SSURGO database (scale 1:24,000), and predictions of the soil water content at different pressure heads are made from point-scale (1:1) inputs. The resulting outputs are corrected for bias using both linear and nonlinear correction techniques. The results show good agreement between the soil water content values measured at the point scale and those predicted by the Bayesian ANN-based PTFs for both study sites. Overall, Bayesian ANNs coupled with nonlinear bias correction are found to be very suitable tools for deriving soil

  14. Bayesian estimation of covariance matrices: Application to market risk management at EDF

    International Nuclear Information System (INIS)

    Jandrzejewski-Bouriga, M.

    2012-01-01

    In this thesis, we develop new methods of regularized covariance matrix estimation in a Bayesian setting. The regularization methodology employed is first related to shrinkage. We investigate a new Bayesian model for the covariance matrix, based on a hierarchical inverse-Wishart distribution, and then derive different estimators under standard loss functions. Comparisons between shrunk and empirical estimators are performed in terms of frequentist performance under different losses. This allows us to highlight the critical importance of the definition of the cost function and to show the persistent effect of the shrinkage-type prior on inference. Next, we consider the problem of covariance matrix estimation in Gaussian graphical models. While this problem is well treated in the decomposable case, it is not when non-decomposable graphs are also considered. We therefore describe a Bayesian and operational methodology for estimating the covariance matrix of Gaussian graphical models, decomposable or not. This procedure is based on a new and objective method of graphical-model selection, combined with a constrained and regularized estimation of the covariance matrix of the chosen model. The procedures studied handle missing data effectively. These estimation techniques were applied to calculate the covariance matrices involved in market risk management for portfolios of EDF (Electricite de France), in particular for problems of calculating Value-at-Risk or in Asset Liability Management. (author)
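
    A hedged sketch of the shrinkage idea (a simple linear blend toward a scaled identity, not the thesis's hierarchical inverse-Wishart estimator):

        import numpy as np

        def shrink_covariance(X, delta):
            """Blend the empirical covariance of X (rows = observations) with a
            scaled-identity target; delta in [0, 1] is the shrinkage weight."""
            S = np.cov(X, rowvar=False)
            p = S.shape[0]
            target = np.eye(p) * (np.trace(S) / p)
            return (1.0 - delta) * S + delta * target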

  15. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.

  16. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
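
    Usage of the toolbox is compact; a hedged sketch of the workflow described above (the CSV path and the 'stim' condition column are hypothetical; 'rt' and 'response' are the toolbox's default column names):

        import hddm

        data = hddm.load_csv('experiment.csv')
        # let the drift rate v vary by the (hypothetical) stimulus condition
        model = hddm.HDDM(data, depends_on={'v': 'stim'})
        model.sample(2000, burn=200)  # MCMC sampling of the hierarchical posterior
        model.print_stats()           # posterior summaries for group and subject parameters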

  17. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods; using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model: it estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output, and evaluates between-model variances to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and neurofuzzy (NF) techniques to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its use of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.

  18. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  19. Tensor based structure estimation in multi-channel images

    DEFF Research Database (Denmark)

    Schou, Jesper; Dierking, Wolfgang; Skriver, Henning

    2000-01-01

    In the second part, tensors are used to represent the structure information. This approach has the advantage that tensors can be averaged, either spatially or over several images, and the resulting tensor provides information on the average strength as well as the orientation of the structure...

  20. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    Science.gov (United States)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

    This abstract reviews work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, like a deeper source with a large double-couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding the most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid-search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid-search is that it proves to be a flexible framework where different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves, as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we focus primarily on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and
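
    A hedged skeleton of the exhaustive search (the discretization and misfit measure are placeholders, not the authors' implementation):

        import numpy as np

        def grid_search(candidates, predict, observed):
            """Tabulate the misfit of every candidate source so that alternative
            hypotheses (e.g. a deeper double-couple vs. a shallow isotropic
            source) can be compared against the optimum rather than discarded."""
            misfits = np.array([np.linalg.norm(observed - predict(c)) for c in candidates])
            best = int(np.argmin(misfits))
            return candidates[best], misfits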

  1. Genetic analysis of rare disorders: Bayesian estimation of twin concordance rates

    NARCIS (Netherlands)

    van den Berg, Stéphanie Martine; Hjelmborg, J.

    2012-01-01

    Twin concordance rates provide insight into the possibility of a genetic background for a disease. These concordance rates are usually estimated within a frequentist framework. Here we take a Bayesian approach. For rare diseases, estimation methods based on asymptotic theory cannot be applied due

  2. Introduction to applied Bayesian statistics and estimation for social scientists

    CERN Document Server

    Lynch, Scott M

    2007-01-01

    ""Introduction to Applied Bayesian Statistics and Estimation for Social Scientists"" covers the complete process of Bayesian statistical analysis in great detail from the development of a model through the process of making statistical inference. The key feature of this book is that it covers models that are most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail.The first part of the book provides a detailed

  3. Mean magnetic susceptibility regularized susceptibility tensor imaging (MMSR-STI) for estimating orientations of white matter fibers in human brain.

    Science.gov (United States)

    Li, Xu; van Zijl, Peter C M

    2014-09-01

    An increasing number of studies show that magnetic susceptibility in white matter fibers is anisotropic and may be described by a tensor. However, the limited head rotation possible for in vivo human studies leads to an ill-conditioned inverse problem in susceptibility tensor imaging (STI). Here we suggest the combined use of limiting the susceptibility anisotropy to white matter and imposing morphology constraints on the mean magnetic susceptibility (MMS) for regularizing the STI inverse problem. The proposed MMS regularized STI (MMSR-STI) method was tested using computer simulations and in vivo human data collected at 3T. The fiber orientation estimated from both the STI and MMSR-STI methods was compared to that from diffusion tensor imaging (DTI). Computer simulations show that the MMSR-STI method provides a more accurate estimation of the susceptibility tensor than the conventional STI approach. Similarly, in vivo data show that use of the MMSR-STI method leads to a smaller difference between the fiber orientation estimated from STI and DTI for most selected white matter fibers. The proposed regularization strategy for STI can improve estimation of the susceptibility tensor in white matter. © 2014 Wiley Periodicals, Inc.

  4. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Jisheng Dai

    2015-10-01

    Full Text Available Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for the joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.

  5. CONSTRAINTS ON SCALAR AND TENSOR PERTURBATIONS IN PHENOMENOLOGICAL AND TWO-FIELD INFLATION MODELS: BAYESIAN EVIDENCES FOR PRIMORDIAL ISOCURVATURE AND TENSOR MODES

    Energy Technology Data Exchange (ETDEWEB)

    Vaeliviita, Jussi [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, N-0315 Oslo (Norway); Savelainen, Matti; Talvitie, Marianne; Kurki-Suonio, Hannu; Rusak, Stanislav, E-mail: jussi.valiviita@astro.uio.no [Department of Physics and Helsinki Institute of Physics, University of Helsinki, P.O. Box 64, FIN-00014 University of Helsinki (Finland)

    2012-07-10

    We constrain cosmological models where the primordial perturbations have an adiabatic and a (possibly correlated) cold dark matter (CDM) or baryon isocurvature component. We use both a phenomenological approach, where the power spectra of primordial perturbations are parameterized with amplitudes and spectral indices, and a slow-roll two-field inflation approach, where slow-roll parameters are used as primary parameters determining the spectral indices and the tensor-to-scalar ratio. In the phenomenological case, with CMB data, the upper limit on the CDM isocurvature fraction is α < 6.4% at k = 0.002 Mpc⁻¹ and 15.4% at k = 0.01 Mpc⁻¹. The non-adiabatic contribution to the CMB temperature variance is -0.030 < α_T < 0.049 at the 95% confidence level. Including the supernova (SN) (or large-scale structure, LSS) data, these limits become α < 7.0%, 13.7%, and -0.048 < α_T < 0.042 (or α < 10.2%, 16.0%, and -0.071 < α_T < 0.024). The CMB constraint on the tensor-to-scalar ratio, r < 0.26 at k = 0.01 Mpc⁻¹, is not affected by the non-adiabatic modes. In the slow-roll two-field inflation approach, the spectral indices are constrained close to 1. This leads to tighter limits on the isocurvature fraction; with the CMB data, α < 2.6% at k = 0.01 Mpc⁻¹, but the constraint on α_T is not much affected: -0.058 < α_T < 0.045. Including SN (or LSS) data, these limits become α < 3.2% and -0.056 < α_T < 0.030 (or α < 3.4% and -0.063 < α_T < -0.008). In addition to the generally correlated models, we also study special cases where the adiabatic and isocurvature modes are uncorrelated or fully (anti)correlated. We calculate Bayesian evidences (model probabilities) in 21 different non-adiabatic cases and compare them to the corresponding adiabatic models, and find that in all cases the data support the pure adiabatic model.

  6. Nonlinear Bayesian Algorithms for Gas Plume Detection and Estimation from Hyper-spectral Thermal Image Data

    Energy Technology Data Exchange (ETDEWEB)

    Heasler, Patrick G.; Posse, Christian; Hylden, Jeff L.; Anderson, Kevin K.

    2007-06-13

    This paper presents a nonlinear Bayesian regression algorithm for the purpose of detecting and estimating gas plume content from hyper-spectral data. Remote sensing data, by their very nature, are collected under less controlled conditions than laboratory data. As a result, the physics-based model used to describe the relationship between the observed remote-sensing spectra and the terrestrial (or atmospheric) parameters that we desire to estimate is typically littered with many unknown "nuisance" parameters (parameters that we are not interested in estimating, but that also appear in the model). Bayesian methods are well suited to this context, as they automatically incorporate the uncertainties associated with all nuisance parameters into the error estimates of the parameters of interest. The nonlinear Bayesian regression methodology is illustrated on realistic simulated data from a three-layer model for longwave infrared (LWIR) measurements from a passive instrument. This shows that the approach should permit more accurate estimation as well as a more reasonable description of estimate uncertainty.

  7. Bayesian estimation applied to multiple species

    International Nuclear Information System (INIS)

    Kunz, Martin; Bassett, Bruce A.; Hlozek, Renee A.

    2007-01-01

Observed data are often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian estimation applied to multiple species) which significantly improves on the standard maximum likelihood approach in the case where the probability for each data point being ''pure'' is known. We discuss the application of BEAMS to future type-Ia supernovae (SNIa) surveys, such as LSST, which are projected to deliver over a million supernovae light curves without spectra. The multiband light curves for each candidate will provide a probability of being Ia (pure) but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability, ⟨P⟩, of being Ia, BEAMS delivers parameter constraints equal to ⟨P⟩N spectroscopically confirmed SNIa. In addition, BEAMS can be simultaneously used to tease apart different families of data and to recover properties of the underlying distributions of those families (e.g. the type-Ibc and II distributions). Hence BEAMS provides a unified classification and parameter estimation methodology which may be useful in a diverse range of problems such as photometric redshift estimation or, indeed, any parameter estimation problem where contamination is an issue.
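
    The core of the BEAMS likelihood is easy to sketch in one dimension (a hypothetical toy estimating a single mean, not the paper's cosmological analysis; the populations and purity probabilities below are assumed): each point's likelihood is a mixture weighted by its known probability of being "pure", so contaminated points are down-weighted rather than discarded.

```python
# Toy BEAMS-style posterior for the mean of a contaminated sample.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
pure = rng.random(n) < 0.8                      # 80% of points are genuine (assumed)
x = np.where(pure, rng.normal(1.0, 0.2, n),     # "Ia-like" population around 1.0
                   rng.normal(2.0, 0.5, n))     # interloper population around 2.0
p = np.where(pure, 0.9, 0.3)                    # imperfect but informative purity probabilities

mu_grid = np.linspace(0.5, 1.5, 401)
def log_post(mu):
    like = p * norm.pdf(x, mu, 0.2) + (1 - p) * norm.pdf(x, 2.0, 0.5)
    return np.log(like).sum()

post = np.array([log_post(m) for m in mu_grid])
print("posterior mode:", mu_grid[post.argmax()])    # close to 1.0 despite contamination
```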

  8. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.

    Science.gov (United States)

    Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun

    2016-01-01

Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road networks. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.
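
    A minimal sketch of this kind of masked completion (assumed dimensions of a link x time-of-day x day tensor and a plain gradient-descent CP fit, not the authors' algorithm; step size and iteration count are untuned):

```python
# Complete a sparse traffic tensor with a masked rank-R CP factorization.
import numpy as np

rng = np.random.default_rng(2)
I, J, K, R = 30, 48, 14, 3                         # links x time-of-day x days, rank (assumed)
A0, B0, C0 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)         # ground-truth low-rank "traffic state"
mask = rng.random(T.shape) < 0.05                  # ~5% of entries observed by floating cars

A, B, C = (0.1 * rng.random((d, R)) for d in (I, J, K))
lr = 0.05
for _ in range(3000):
    resid = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)   # error on observed entries only
    A -= lr * np.einsum('ijk,jr,kr->ir', resid, B, C)
    B -= lr * np.einsum('ijk,ir,kr->jr', resid, A, C)
    C -= lr * np.einsum('ijk,ir,jr->kr', resid, A, B)

est = np.einsum('ir,jr,kr->ijk', A, B, C)
rel = np.linalg.norm((~mask) * (est - T)) / np.linalg.norm((~mask) * T)
print("relative error on unobserved entries:", rel)
```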

  9. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.

    Directory of Open Access Journals (Sweden)

    Bin Ran

Full Text Available Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road networks. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.

  10. Differences in Gaussian diffusion tensor imaging and non-Gaussian diffusion kurtosis imaging model-based estimates of diffusion tensor invariants in the human brain.

    Science.gov (United States)

    Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N

    2016-05-01

An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value = 0 and b-value = 1000 s/mm² data while the DKI model was fitted to data comprising b-values = 0, 1000, and 3000/2500 s/mm² [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton. Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK
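
    The model difference is visible even in a single-direction fit; the sketch below (assumed typical white-matter values, not the study's data) fits the DKI signal model ln S(b) = ln S0 - bD + (1/6)b²D²K to the b-values quoted above, and shows why a DTI fit restricted to b = 0 and 1000 s/mm² returns a different diffusivity.

```python
# Single-direction DKI vs DTI fit; values and noise-free signal are assumed.
import numpy as np

b = np.array([0.0, 1000.0, 3000.0])            # s/mm^2, as in dataset (a)
D_true, K_true, S0 = 0.9e-3, 1.1, 1.0          # assumed white-matter values
S = S0 * np.exp(-b * D_true + (b * D_true) ** 2 * K_true / 6.0)

# DKI: linear system in (ln S0, D, D^2 K)
X = np.column_stack([np.ones_like(b), -b, b ** 2 / 6.0])
lnS0, D, D2K = np.linalg.lstsq(X, np.log(S), rcond=None)[0]
print(f"DKI: D = {D:.2e} mm^2/s, K = {D2K / D ** 2:.2f}")

# DTI: two-point fit that ignores the kurtosis term, hence a lower diffusivity,
# consistent with DKI-derived diffusivities being higher than DTI-derived ones.
D_dti = (np.log(S[0]) - np.log(S[1])) / (b[1] - b[0])
print(f"DTI: D = {D_dti:.2e} mm^2/s")
```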

  11. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  12. Bayesian estimation of core-melt probability

    International Nuclear Information System (INIS)

    Lewis, H.W.

    1984-01-01

A very simple application of the canonical Bayesian algorithm is made to the problem of estimating the probability of core melt in a commercial power reactor. An approximation to the results of the Rasmussen study on reactor safety is used as the prior distribution, and the observation that there has been no core melt yet is used as the single experiment. The result is a substantial decrease in the mean probability of core melt, by factors of 2 to 4 for reasonable choices of parameters. The purpose is to illustrate the procedure, not to argue for the decrease.
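
    The update itself takes a few lines; in this sketch the lognormal stand-in for the Rasmussen prior and the number of observed reactor-years are assumed, not the paper's figures:

```python
# Weight prior samples of the core-melt rate by the Poisson probability
# of having observed zero core melts in T reactor-years.
import numpy as np

rng = np.random.default_rng(3)
prior = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=200_000)  # events/reactor-year
T = 2000.0                                                         # reactor-years, assumed
weights = np.exp(-prior * T)                                       # P(no events | rate)

print("prior mean rate:    ", prior.mean())
print("posterior mean rate:", np.average(prior, weights=weights))  # noticeably smaller
```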

  13. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

A method is given to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach. Using the information contained in the distributions of both level spacings and neutron widths, the levels missing from a measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation was carried out for s-wave resonances and compared with other work.

  14. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, the effects of prior mean and variance are determined as a function of the amount of test data available.
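
    The mechanics of such a study are easy to reproduce with a conjugate gamma-Poisson stand-in (assumed here in place of the report's exact setup): vary the prior mean and variance and compare the resulting Bayes estimator with the maximum likelihood estimate.

```python
# Effect of prior mean/variance on a conjugate Bayes estimate of a failure rate.
import numpy as np

T, k = 1e6, 3                 # device-hours of test data and observed failures (assumed)
mle = k / T

for prior_mean, prior_var in [(3e-6, 1e-12), (1e-5, 1e-10), (1e-5, 1e-14)]:
    beta = prior_mean / prior_var            # gamma(alpha, beta) prior hyperparameters
    alpha = prior_mean * beta
    bayes = (alpha + k) / (beta + T)         # posterior mean under Poisson data
    print(f"prior mean {prior_mean:.0e}, var {prior_var:.0e}: "
          f"Bayes {bayes:.2e} vs MLE {mle:.2e}")
```

    A wrong prior mean combined with a small prior variance (the third case) dominates the data, which is the kind of "region of criticality" the abstract refers to.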

  15. Bayesian estimation of animal movement from archival and satellite tags.

    Directory of Open Access Journals (Sweden)

    Michael D Sumner

Full Text Available The reliable estimation of animal location, and of its associated error, is fundamental to animal ecology. There are many existing techniques for handling location error, but these are often ad hoc or are used in isolation from each other. In this study we present a Bayesian framework for determining location that uses all the data available, is flexible enough to accommodate all tagging techniques, and provides location estimates with built-in measures of uncertainty. Bayesian methods allow the contributions of multiple data sources to be decomposed into manageable components. We illustrate with two examples for two different location methods: satellite tracking and light-level geo-location. We show that many of the problems with the uncertainty involved are reduced and quantified by our approach. This approach can use any available information, such as existing knowledge of the animal's potential range, light levels or direct location estimates, auxiliary data, and movement models. The approach provides a substantial contribution to handling uncertainty in archival tag and satellite tracking data using readily available tools.

  16. A Bayesian Combined Model for Time-Dependent Turning Movement Proportions Estimation at Intersections

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2014-01-01

Full Text Available Time-dependent turning movement flows are very important input data for intelligent transportation systems but are impossible to detect directly through current traffic surveillance systems. Existing estimation models have proved to be insufficiently accurate and reliable across all intervals. An improved way to address this problem is to develop a combined model framework that can integrate multiple submodels running simultaneously. This paper first presents a back-propagation neural network model to estimate dynamic turning movements, together with a self-adaptive learning rate approach and the gradient descent with momentum method for its solution. Second, this paper develops an efficient Kalman filtering model and designs a revised sequential Kalman filtering algorithm. Based on the Bayesian method, using both historical data and currently estimated results for error calibration, this paper further integrates the above two submodels into a Bayesian combined model framework and proposes a corresponding algorithm. A field survey was implemented at an intersection in Beijing to collect both time series of link counts and actual time-dependent turning movement flows, including historical and present data. The reported estimation results show that the Bayesian combined model is much more accurate and stable than other models.
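
    A stripped-down sketch of the combination step (simulated submodel outputs, not the paper's field data; the calibration window is assumed): each submodel's error variance is estimated on historical intervals, and the two estimates are then fused by their precisions.

```python
# Precision-weighted Bayesian combination of two submodel estimates.
import numpy as np

rng = np.random.default_rng(4)
truth = 0.35 + 0.10 * np.sin(np.linspace(0, 6, 96))   # true turning proportion per interval
nn = truth + rng.normal(0, 0.04, truth.size)          # neural-network submodel output
kf = truth + rng.normal(0, 0.02, truth.size)          # Kalman-filter submodel output

w_nn = 1.0 / np.var(nn[:48] - truth[:48])             # precisions from historical window
w_kf = 1.0 / np.var(kf[:48] - truth[:48])
combined = (w_nn * nn + w_kf * kf) / (w_nn + w_kf)

for name, est in [("NN", nn), ("KF", kf), ("combined", combined)]:
    rmse = np.sqrt(np.mean((est[48:] - truth[48:]) ** 2))
    print(f"{name}: RMSE = {rmse:.4f}")
```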

  17. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    Science.gov (United States)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancies are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then cast in a state space representation, enabling parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.

  18. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for the BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with the U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMDs, with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable, and it can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend toward probabilistic risk assessment. https://doi.org/10.1289/EHP1289.

  19. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    Directory of Open Access Journals (Sweden)

    Kerrie L Mengersen

Full Text Available The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.

  20. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    Science.gov (United States)

    Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J

    2016-01-01

The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
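
    The headline probabilities in such an analysis reduce to tail areas of the posterior; a hedged sketch with stand-in posterior draws (the distribution and the smallest worthwhile change below are assumed, not the study's values):

```python
# Posterior probability that an effect exceeds a smallest worthwhile change.
import numpy as np

rng = np.random.default_rng(5)
delta = rng.normal(3.2, 1.5, 20_000)   # stand-in MCMC draws of the LHTL-minus-IHE effect
swc = 1.0                              # smallest worthwhile change, assumed

print("P(substantially greater):", (delta > swc).mean())
print("P(trivial difference):  ", (np.abs(delta) <= swc).mean())
```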

  1. Reliability estimation of safety-critical software-based systems using Bayesian networks

    International Nuclear Information System (INIS)

    Helminen, A.

    2001-06-01

Due to the nature of software faults and the way they cause system failures, new methods are needed for the safety and reliability evaluation of software-based safety-critical automation systems in nuclear power plants. In the research project 'Programmable automation system safety integrity assessment (PASSI)', belonging to the Finnish Nuclear Safety Research Programme (FINNUS, 1999-2002), various safety assessment methods and tools for software-based systems are developed and evaluated. The project is financed jointly by the Radiation and Nuclear Safety Authority (STUK), the Ministry of Trade and Industry (KTM) and the Technical Research Centre of Finland (VTT). In this report the applicability of Bayesian networks to the reliability estimation of software-based systems is studied. The applicability is evaluated by building Bayesian network models for the systems of interest and performing simulations for these models. In the simulations, hypothetical evidence is used for defining the parameter relations and for determining the ability to compensate for disparate evidence in the models. Based on the experiences from modelling and simulations, we are able to conclude that Bayesian networks provide a good method for the reliability estimation of software-based systems. (orig.)

  2. A Bayesian estimate of the concordance correlation coefficient with skewed data.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best previously studied method when the variation of the data comes mainly from the random subject effect rather than from error. Furthermore, it allows for greater flexibility in application by enabling the incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz; Bangerth, Wolfgang; Linhart, Jean Marie; Polanco, Javier; Wang, Fang; Wang, Kainan; Webster, Jennifer; Zedler, Sarah

    2013-01-01

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework

  4. TensorLy: Tensor Learning in Python

    NARCIS (Netherlands)

    Kossaifi, Jean; Panagakis, Yannis; Pantic, Maja

    2016-01-01

Tensor methods are gaining increasing traction in machine learning. However, there are few to no resources available to perform tensor learning and decomposition in Python. To answer this need we developed TensorLy. TensorLy is a state-of-the-art general-purpose library for tensor learning.

  5. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  6. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  7. Probabilistic inference with noisy-threshold models based on a CP tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří; Tichavský, Petr

    2014-01-01

    Roč. 55, č. 4 (2014), s. 1072-1092 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S; GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Bayesian networks * Probabilistic inference * Candecomp-Parafac tensor decomposition * Symmetric tensor rank Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/vomlel-0427059.pdf

  8. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  9. BURD, Bayesian estimation in data analysis of Probabilistic Safety Assessment

    International Nuclear Information System (INIS)

    Jang, Seung-cheol; Park, Jin-Kyun

    2008-01-01

1 - Description of program or function: BURD (Bayesian Update for Reliability Data) is a simple code that can be used to easily obtain a Bayesian estimate in the data analysis of PSA (Probabilistic Safety Assessment). Following Bayes' theorem, the code facilitates calculation of the posterior distribution given the prior and the likelihood (evidence) distributions. The distinctive features of the program BURD are the following: - The input consists of the prior and likelihood functions, which can be chosen from the built-in statistical distributions. - The available prior distributions are uniform, Jeffreys' noninformative, beta, gamma, and log-normal, which are the most frequently used in performing PSA. - For the likelihood function, the user can choose from four statistical distributions: beta, gamma, binomial, and Poisson. - A simultaneous graphic display of the prior and posterior distributions facilitates an intuitive interpretation of the results. - Export facilities for the graphic display screen and text-type outputs are available. - Three options for treating zero-evidence data are provided. - Automatic setup of an integral calculus section for Bayesian updating. 2 - Methods: The posterior distribution is estimated in accordance with Bayes' theorem, given the prior and the likelihood (evidence) distributions. 3 - Restrictions on the complexity of the problem: The accuracy of the results depends on the calculational error of the statistical function library in MS Excel

  10. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

In environmental and other scientific applications, we must have a certain understanding of the geological lithological composition. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To find out the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data. Therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, applying the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  11. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz

    2013-02-07

All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
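
    A condensed sketch of that workflow (assumed frame times, noise level, and a linear-drag model d(t) = (g/k)(t - (1 - e^(-kt))/k); the paper's exact model and data may differ):

```python
# Grid posterior over gravity g and drag coefficient k from noisy fall data.
import numpy as np

def dist(t, g, k):
    return (g / k) * (t - (1.0 - np.exp(-k * t)) / k)   # distance fallen, linear drag

rng = np.random.default_rng(6)
t = np.linspace(0.1, 1.2, 30)                               # frame times (s), assumed
d_obs = dist(t, 9.81, 0.3) + rng.normal(0.0, 0.01, t.size)  # ~1 cm noise, assumed

g_grid = np.linspace(9.0, 10.5, 151)
k_grid = np.linspace(0.05, 0.8, 151)
logpost = np.array([[-0.5 * np.sum(((d_obs - dist(t, g, k)) / 0.01) ** 2)
                     for k in k_grid] for g in g_grid])     # flat priors assumed

i, j = np.unravel_index(logpost.argmax(), logpost.shape)
print(f"MAP estimate: g = {g_grid[i]:.2f} m/s^2, k = {k_grid[j]:.2f} 1/s")
```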

  12. Physics of ultrasonic wave propagation in bone and heart characterized using Bayesian parameter estimation

    Science.gov (United States)

    Anderson, Christian Carl

This dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and anisotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete

  13. Estimation of effective thermal conductivity tensor from composite microstructure images

    International Nuclear Information System (INIS)

    Thomas, M; Boyard, N; Jarny, Y; Delaunay, D

    2008-01-01

The determination of the effective thermal properties of inhomogeneous materials is a long-standing problem of continuing interest. The impressive number of methods developed to measure or estimate the thermal properties of composite materials clearly demonstrates the importance attached to their knowledge. Homogenization models are a cheap way to determine or predict them. Many different homogenization approaches have been developed, but the latest advances are credited to numerical methods. In this study, a new computational model is developed to estimate the 2D thermal conductivity tensor and the principal thermal directions of a pure carbon/epoxy unidirectional composite. This tool is based on images of the real composite microstructure.

  14. A Bayesian approach to estimate sensible and latent heat over vegetated land surface

    Directory of Open Access Journals (Sweden)

    C. van der Tol

    2009-06-01

    Full Text Available Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.

  15. TensorLy: Tensor Learning in Python

    OpenAIRE

    Kossaifi, Jean; Panagakis, Yannis; Pantic, Maja

    2016-01-01

Tensors are higher-order extensions of matrices. While matrix methods form the cornerstone of machine learning and data analysis, tensor methods have been gaining increasing traction. However, software support for tensor operations is not on the same footing. In order to bridge this gap, we have developed TensorLy, a high-level API for tensor methods and deep tensorized neural networks in Python. TensorLy aims to follow the same standards adopted by the main projects of the Python scie...

  16. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...
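
    A one-parameter toy version of the ensemble idea (the data, cost function, and the T = 2*C_min/N temperature rule below are all assumed for illustration; this is not the molybdenum-potential fit): sample parameters with probability set by the cost above its minimum, and read error bars from the weighted spread of predictions.

```python
# Cost-weighted parameter ensemble for prediction error bars (toy model).
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)        # synthetic "force" data

def cost(theta):
    return np.sum((y - theta * x) ** 2)

thetas = np.linspace(1.5, 2.5, 2001)
costs = np.array([cost(th) for th in thetas])
T0 = 2.0 * costs.min() / x.size                   # temperature set by the minimum cost
w = np.exp(-(costs - costs.min()) / T0)
w /= w.sum()

pred = thetas * 0.5                               # ensemble predictions at x = 0.5
mean = np.sum(w * pred)
err = np.sqrt(np.sum(w * (pred - mean) ** 2))
print(f"prediction at x = 0.5: {mean:.3f} +/- {err:.3f}")
```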

  17. Use of Bayesian Estimates to determine the Volatility Parameter Input in the Black-Scholes and Binomial Option Pricing Models

    Directory of Open Access Journals (Sweden)

    Shu Wing Ho

    2011-12-01

Full Text Available The valuation of options and many other derivative instruments requires an estimation of ex-ante or forward-looking volatility. This paper adopts a Bayesian approach to estimate stock price volatility. We find evidence that, overall, Bayesian volatility estimates more closely approximate the implied volatility of stocks derived from traded call and put options prices compared to historical volatility estimates sourced from IVolatility.com (“IVolatility”). Our evidence suggests use of the Bayesian approach to estimate volatility can provide a more accurate measure of ex-ante stock price volatility and will be useful in the pricing of derivative securities where the implied stock price volatility cannot be observed.
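
    For intuition, volatility updating has a simple conjugate form when returns are taken as zero-mean normal (a sketch with assumed returns and prior; the paper's actual procedure may differ): an inverse-gamma prior on the daily return variance is updated with the sum of squared log-returns.

```python
# Conjugate inverse-gamma posterior for daily return variance -> annualized vol.
import numpy as np

rng = np.random.default_rng(8)
r = rng.normal(0.0, 0.012, 60)            # 60 daily log-returns, true daily vol 1.2% (assumed)

a0, b0 = 3.0, 2.0 * 0.015 ** 2            # prior mean variance = (1.5% daily vol)^2
a_n = a0 + r.size / 2.0
b_n = b0 + 0.5 * np.sum(r ** 2)           # zero-mean returns assumed
post_var_mean = b_n / (a_n - 1.0)         # mean of the inverse-gamma posterior

print("annualized volatility estimate:", np.sqrt(252.0 * post_var_mean))
```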

  18. Estimation of Uncertainties of Full Moment Tensors

    Science.gov (United States)

    2017-10-06

For our moment tensor inversions, we use the ‘cut-and-paste’ (CAP) code of Zhu and Helmberger (1996) and Zhu and Ben-Zion (2013), with some... modifications. For the misfit function we use an L1 norm (Silwal and Tape, 2016), and we incorporate the number of misfitting polarities into the waveform... norm of the eigenvalue triple provides the magnitude of the moment tensor, leaving two free parameters to define the source type. In the same year

  19. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    Science.gov (United States)

    Alvizuri, Celso R.

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalog: (1) 6 isotropic events, (2) 5 tensional crack events, and (3) a swarm of 14 events southeast of the volcanic center that appear to be double couples. The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment tensors is critical for distinguishing among physical models of source processes. A seismic moment tensor is a 3x3 symmetric matrix that provides a compact representation of a seismic source. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms for each moment tensor and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M0 for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M0, we first convert the misfit function to a probability function. The uncertainty, or

  20. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

Full Text Available The Weibull distribution has been observed to be one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Studies have been carried out vigorously in the literature to determine the best method of estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach for parameter estimation, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely, the linear exponential (LINEX) loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared using the mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β for the given values of the extension of Jeffreys' prior.
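
    Given posterior draws, the three loss functions map to closed-form estimators; a sketch with stand-in draws of the shape parameter (the posterior below is simulated for illustration, not derived from the paper's data):

```python
# Bayes estimators of a Weibull shape parameter under three loss functions.
import numpy as np

rng = np.random.default_rng(9)
beta_draws = rng.gamma(shape=50.0, scale=1.8 / 50.0, size=50_000)  # stand-in posterior

est_se = beta_draws.mean()                                   # squared-error loss
c = 1.0
est_linex = -np.log(np.mean(np.exp(-c * beta_draws))) / c    # LINEX loss
q = 1.0
est_ge = np.mean(beta_draws ** (-q)) ** (-1.0 / q)           # general entropy loss

print(est_se, est_linex, est_ge)
```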

  1. Segmental Bayesian estimation of gap-junctional and inhibitory conductance of inferior olive neurons from spike trains with complicated dynamics

    Directory of Open Access Journals (Sweden)

    Huu eHoang

    2015-05-01

Full Text Available The inverse problem of estimating model parameters from brain spike data is an ill-posed problem because of a huge mismatch in system complexity between the model and the brain as well as its non-stationary dynamics, and it needs a stochastic approach that finds the most likely solution among many possible solutions. In the present study, we developed a segmental Bayesian method to estimate the two parameters of interest, the gap-junctional conductance (gc) and the inhibitory conductance (gi), from inferior olive spike data. Feature vectors were estimated for the spike data in a segment-wise fashion to compensate for the non-stationary firing dynamics. Hierarchical Bayesian estimation was conducted to estimate gc and gi for every spike segment using a forward model constructed in the principal component analysis (PCA) space of the feature vectors, and to merge the segmental estimates into single estimates for every neuron. The segmental Bayesian estimation gave smaller fitting errors than the conventional Bayesian inference, which finds the estimates once across the entire spike data, or the minimum error method, which directly finds the closest match in the PCA space. The segmental Bayesian inference has the potential to overcome the problem of non-stationary dynamics and to resolve the ill-posedness of the inverse problem caused by the mismatch between the model and the brain, and it is a useful tool for evaluating parameters of interest for neuroscience from experimental spike train data.

  2. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to the noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  3. Estimating Steatosis Prevalence in Overweight and Obese Children: Comparison of Bayesian Small Area and Direct Methods

    Directory of Open Access Journals (Sweden)

    Hamid Reza Khalkhali

    2016-09-01

Full Text Available Background: Often, there is no access to a sufficient sample size to estimate the prevalence using the direct estimator method in all areas. The aim of this study was to compare the small area Bayesian method with the direct method in estimating the prevalence of steatosis in obese and overweight children. Materials and Methods: This cross-sectional study was conducted on 150 overweight and obese children aged 2 to 15 years referred to the children's digestive clinic of Urmia University of Medical Sciences, Iran, in 2013. After body mass index (BMI) calculation, overweight and obese children were assessed in terms of primary obesity screening tests. Then children with steatosis confirmed by abdominal ultrasonography were referred to the laboratory for further tests. Steatosis prevalence was estimated by the direct and Bayesian methods, and their efficiency was evaluated using the jackknife mean-square error method. The study data were analyzed using the OpenBUGS 3.1.2 and R 2.15.2 software. Results: The findings indicated that the estimates of steatosis prevalence in children using the Bayesian and direct methods were between 0.3098 and 0.493, and 0.355 and 0.560, respectively, in Health Districts; 0.3098 and 0.502, and 0.355 and 0.550 in Education Districts; 0.321 and 0.582, and 0.357 and 0.615 in age groups; and 0.313 and 0.429, and 0.383 and 0.536 in sex groups. In general, according to the results, the mean-square error of the Bayesian estimation was smaller than that of the direct estimation (P

  4. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    KAUST Repository

    Dashti, M.

    2013-09-01

We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics. © 2013 IOP Publishing Ltd.
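
    In the notation above (sketched here as commonly written in this literature; the precise conditions are stated in the paper), the posterior is defined through its density with respect to the Gaussian prior, and the MAP estimator minimizes the Onsager-Machlup functional over the Cameron-Martin space E of μ0:

```latex
\[
  \frac{d\mu^{y}}{d\mu_{0}}(u) \;\propto\; \exp\bigl(-\Phi(u;y)\bigr),
  \qquad
  I(u) \;=\; \Phi(u;y) + \tfrac{1}{2}\,\lVert u \rVert_{E}^{2},
  \qquad
  u_{\mathrm{MAP}} \;=\; \operatorname*{arg\,min}_{u \in E} I(u).
\]
```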

  5. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    International Nuclear Information System (INIS)

    Dashti, M; Law, K J H; Stuart, A M; Voss, J

    2013-01-01

We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager–Machlup functional defined on the Cameron–Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier–Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics. (paper)

  6. Bayesian estimation and use of high-throughput remote sensing indices for quantitative genetic analyses of leaf growth.

    Science.gov (United States)

    Baker, Robert L; Leong, Wen Fung; An, Nan; Brock, Marcus T; Rubin, Matthew J; Welch, Stephen; Weinig, Cynthia

    2018-02-01

We develop Bayesian function-valued trait models that mathematically isolate genetic mechanisms underlying leaf growth trajectories by factoring out genotype-specific differences in photosynthesis. Remote sensing data can be used instead of leaf-level physiological measurements. Characterizing the genetic basis of traits that vary during ontogeny and affect plant performance is a major goal in evolutionary biology and agronomy. Describing genetic programs that specifically regulate morphological traits can be complicated by genotypic differences in physiological traits. We describe the growth trajectories of leaves using novel Bayesian function-valued trait (FVT) modeling approaches in Brassica rapa recombinant inbred lines raised in heterogeneous field settings. While frequentist approaches estimate parameter values by treating each experimental replicate discretely, Bayesian models can utilize information in the global dataset, potentially leading to more robust trait estimation. We illustrate this principle by estimating growth asymptotes in the face of missing data and comparing heritabilities of growth trajectory parameters estimated by Bayesian and frequentist approaches. Using pseudo-Bayes factors, we compare the performance of an initial Bayesian logistic growth model and a model that incorporates carbon assimilation (Amax) as a cofactor, thus statistically accounting for genotypic differences in carbon resources. We further evaluate two remotely sensed spectroradiometric indices, photochemical reflectance (pri2) and the MERIS Terrestrial Chlorophyll Index (mtci), as covariates in lieu of Amax, because these two indices were genetically correlated with Amax across years and treatments yet allow much higher throughput compared to direct leaf-level gas-exchange measurements. For leaf lengths in uncrowded settings, including Amax improves model fit over the initial model. The mtci and pri2 indices also outperform direct Amax measurements. Of particular

  7. Release the BEESTS: Bayesian Estimation of Ex-Gaussian STop-Signal Reaction Time Distributions

    Directory of Open Access Journals (Sweden)

    Dora eMatzke

    2013-12-01

Full Text Available The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task where the primary task is occasionally interrupted by a stop-signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) have developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach, BEESTS, that can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates the more rigorous analysis of stop-signal data.
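
    The ex-Gaussian assumption is easy to make concrete: an SSRT is modeled as a Gaussian component plus an independent exponential component, so its mean is mu + tau and its variance sigma² + tau². A quick sketch with illustrative parameters (assumed, in milliseconds):

```python
# Sampling an ex-Gaussian SSRT distribution and checking its moments.
import numpy as np

rng = np.random.default_rng(10)
mu, sigma, tau = 200.0, 30.0, 60.0          # illustrative ex-Gaussian parameters (ms)
ssrt = rng.normal(mu, sigma, 100_000) + rng.exponential(tau, 100_000)

print(ssrt.mean(), mu + tau)                            # ~260 ms
print(ssrt.std(), np.sqrt(sigma ** 2 + tau ** 2))       # ~67 ms
```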

  8. Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation

    Science.gov (United States)

    Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads

    2016-03-01

Brain atrophy measured from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited due to the large degree of variance and subsequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy that we combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.

  9. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.

    2016-11-25

The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant, the Ensemble Kalman Filter (EnKF), is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
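
    A minimal sketch of the Monte Carlo (EnKF) variant mentioned above, for a toy scalar parameter observed through a nonlinear forward model (the forward map, observation, and noise level are all assumed):

```python
# Ensemble Kalman update of a parameter prior, using the linear Kalman gain.
import numpy as np

rng = np.random.default_rng(11)
N = 5000
theta = rng.normal(1.0, 0.5, N)           # prior ensemble for the parameter
h = lambda th: th ** 2                    # forward model (assumed)
y_obs, sig = 1.44, 0.1                    # observation and noise level (assumed)

Y = h(theta) + rng.normal(0.0, sig, N)    # perturbed predicted observations
Cov = np.cov(theta, Y)
K = Cov[0, 1] / Cov[1, 1]                 # Gauss-Markov-Kalman gain
theta_post = theta + K * (y_obs - Y)      # linear update of each ensemble member

print("posterior mean/std:", theta_post.mean(), theta_post.std())
```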

  10. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar

    2016-01-01

The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant, the Ensemble Kalman Filter (EnKF), is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

  11. INLA goes extreme: Bayesian tail regression for the estimation of high spatio-temporal quantiles

    KAUST Repository

    Opitz, Thomas; Huser, Raphaë l; Bakka, Haakon; Rue, Haavard

    2018-01-01

    approach is based on a Bayesian generalized additive modeling framework that is designed to estimate complex trends in marginal extremes over space and time. First, we estimate a high non-stationary threshold using a gamma distribution for precipitation

  12. Assessment of the accuracy of a Bayesian estimation algorithm for perfusion CT by using a digital phantom

    International Nuclear Information System (INIS)

    Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio; Boutelier, Timothe; Pautot, Fabrice; Christensen, Soren

    2013-01-01

    A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)

  13. Assessment of the accuracy of a Bayesian estimation algorithm for perfusion CT by using a digital phantom

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio [Iwate Medical University, Division of Ultrahigh Field MRI, Institute for Biomedical Sciences, Yahaba (Japan); Boutelier, Timothe; Pautot, Fabrice [Olea Medical, Department of Research and Innovation, La Ciotat (France); Christensen, Soren [University of Melbourne, Department of Neurology and Radiology, Royal Melbourne Hospital, Victoria (Australia)

    2013-10-15

    A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)
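
    The SVD benchmarks above build on truncated-SVD deconvolution: the tissue curve is modeled as the arterial input function convolved with a residue function, and small singular values of the convolution matrix are discarded to stabilize the inversion. A minimal sketch on synthetic curves (hypothetical truncation threshold; the delay-insensitive block-circulant refinement is omitted):

        import numpy as np
        from scipy.linalg import toeplitz, svd

        dt = 1.0
        t = np.arange(0, 60, dt)
        aif = (t / 5.0) ** 3 * np.exp(-t / 5.0)        # synthetic arterial input function
        resid = np.exp(-t / 4.0)                       # true CBF-scaled residue function
        conc = dt * np.convolve(aif, resid)[:t.size]   # tissue concentration curve

        # Convolution as a lower-triangular Toeplitz matrix: conc = dt * A @ resid.
        A = dt * toeplitz(aif, np.zeros_like(aif))
        U, s, Vt = svd(A)
        s_inv = np.where(s > 0.2 * s[0], 1.0 / s, 0.0)  # truncate small singular values
        resid_hat = Vt.T @ (s_inv * (U.T @ conc))

        print("estimated CBF ~ peak of recovered residue:", resid_hat.max())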

  14. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. The Bayesian method involves two distributions, the prior and the posterior; the posterior distribution is influenced by the selection of the prior distribution. Jeffreys' prior distribution is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior distribution is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior distribution. The estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
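
    A minimal sketch of such a Gibbs sampler for Y = XB + E under Jeffreys' prior, alternating between an inverse Wishart draw for Σ and a matrix normal draw for B (simulated data; chain length and burn-in are arbitrary choices):

        import numpy as np
        from scipy.stats import invwishart

        rng = np.random.default_rng(0)
        n, k, p = 200, 3, 2                       # observations, predictors, responses
        X = rng.normal(size=(n, k))
        B_true = rng.normal(size=(k, p))
        Y = X @ B_true + rng.normal(scale=0.5, size=(n, p))

        XtX_inv = np.linalg.inv(X.T @ X)
        B_hat = XtX_inv @ X.T @ Y                 # least-squares estimate
        L_row = np.linalg.cholesky(XtX_inv)

        B, draws = B_hat.copy(), []
        for it in range(2000):
            # Sigma | B, Y ~ inverse Wishart with scale (Y - XB)'(Y - XB).
            E = Y - X @ B
            Sigma = invwishart.rvs(df=n, scale=E.T @ E, random_state=rng)
            # B | Sigma, Y ~ matrix normal around B_hat, covariance Sigma (x) (X'X)^{-1}.
            L_col = np.linalg.cholesky(Sigma)
            B = B_hat + L_row @ rng.normal(size=(k, p)) @ L_col.T
            if it >= 500:                         # discard burn-in
                draws.append(B)

        print("posterior mean of B:\n", np.mean(draws, axis=0))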

  15. Frequency offset estimation in OFDM systems using Bayesian filtering

    Science.gov (United States)

    Yu, Yihua

    2011-10-01

    Orthogonal frequency division multiplexing (OFDM) is sensitive to carrier frequency offset (CFO), which causes inter-carrier interference (ICI). In this paper, we present two schemes for CFO estimation, based on rejection sampling (RS) and on a form of particle filtering (PF) called the kernel smoothing technique, respectively. The first scheme is offline estimation, where the observations contained in the OFDM training symbol are treated in batch mode. The second scheme is online estimation, where the observations in the OFDM training symbol are treated sequentially. Simulations are provided to illustrate the performance of the schemes. Performance comparisons of the two schemes with each other and with other Bayesian methods are provided. Simulation results show that the two schemes are effective when estimating the CFO and can effectively combat the effect of ICI in OFDM systems.
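
    The rejection-sampling step at the heart of the first scheme can be sketched generically as follows (a toy one-dimensional posterior stands in for the CFO posterior; the OFDM-specific likelihood is omitted):

        import numpy as np

        rng = np.random.default_rng(0)

        def unnorm_posterior(x):
            # Toy stand-in for the CFO posterior: a narrow bump at offset 0.1.
            return np.exp(-0.5 * ((x - 0.1) / 0.02) ** 2)

        lo, hi = -0.5, 0.5   # admissible (normalized) offset range, uniform proposal
        M = 1.0              # envelope constant bounding unnorm_posterior on [lo, hi]

        samples = []
        while len(samples) < 1000:
            x = rng.uniform(lo, hi)
            if rng.uniform() < unnorm_posterior(x) / M:   # accept with prob f(x)/M
                samples.append(x)

        print("posterior mean CFO estimate:", np.mean(samples))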

  16. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    Science.gov (United States)

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal values within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference) may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to highlight the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference can be reduced to a frequentist approach under particular hypotheses, the former can be considered the more general methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.

  17. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig

    2015-09-18

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as underwater acoustic channel impulse responses, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.

  18. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.; Ahmed, Syed

    2015-01-01

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as underwater acoustic channel impulse responses, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.

  19. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    Science.gov (United States)

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
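
    For reference, the classical point estimate that this Bayesian approach builds on is the trapezoidal AUC up to the last sample plus a log-linear extrapolation of the tail. A minimal sketch with hypothetical concentration-time data:

        import numpy as np

        t = np.array([0.25, 0.5, 1, 2, 4, 8, 12])            # sampling times (h)
        c = np.array([12.0, 10.5, 8.2, 5.1, 2.0, 0.4, 0.1])  # mean concentrations (mg/L)

        # Linear trapezoidal AUC(0-tlast).
        auc_last = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)

        # Terminal slope lambda_z from a log-linear fit of the last three points.
        lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
        auc_inf = auc_last + c[-1] / lam_z     # extrapolated tail: C_last / lambda_z

        print(f"AUC(0-tlast) = {auc_last:.2f}, AUC(0-inf) = {auc_inf:.2f} mg*h/L")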

  20. A Bayesian nonparametric estimation of distributions and quantiles

    International Nuclear Information System (INIS)

    Poern, K.

    1988-11-01

    The report describes a Bayesian, nonparametric method for the estimation of a distribution function and its quantiles. The method, presupposing random sampling, is nonparametric, so the user has to specify a prior distribution on a space of distributions (and not on a parameter space). In the current application, where the method is used to estimate the uncertainty of a parametric calculational model, the Dirichlet prior distribution is to a large extent determined by the first batch of Monte Carlo realizations. In this case the results of the estimation technique are very similar to the conventional empirical distribution function. The resulting posterior distribution is also Dirichlet, and thus facilitates the determination of probability (confidence) intervals at any given point in the space of interest. Another advantage is that the posterior distribution of a specified quantile can also be derived and utilized to determine a probability interval for that quantile. The method was devised for use in the PROPER code package for uncertainty and sensitivity analysis. (orig.)
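
    The Dirichlet posterior lends itself to a simple Monte Carlo sketch: with a flat Dirichlet prior, posterior draws of the distribution function amount to Dirichlet-weighted versions of the sample, from which a probability interval for any quantile follows (synthetic data; this is the Bayesian-bootstrap limiting case, not the PROPER implementation):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.sort(rng.lognormal(0.0, 1.0, 200))   # synthetic Monte Carlo realizations

        # Posterior draws of the 95th percentile: weights w ~ Dirichlet(1, ..., 1),
        # then read off the weighted quantile from the cumulative weights.
        q_draws = []
        for _ in range(2000):
            cdf = np.cumsum(rng.dirichlet(np.ones(x.size)))
            i = np.searchsorted(cdf, 0.95)
            q_draws.append(x[min(i, x.size - 1)])

        lo, hi = np.percentile(q_draws, [2.5, 97.5])
        print(f"95% probability interval for the 0.95 quantile: [{lo:.2f}, {hi:.2f}]")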

  1. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  2. Age estimation by assessment of pulp chamber volume: a Bayesian network for the evaluation of dental evidence.

    Science.gov (United States)

    Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma

    2017-11-14

    The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure of the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of the method was investigated in terms of accuracy and bias of the decisional outcomes. The influence of an informed elicitation of the prior belief of chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were consistent with standard requirements for forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. Outcomes of the sensitivity analysis showed that estimation results are improved with a rational elicitation of the prior probabilities of age.

  3. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali

    2013-05-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.

  4. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some difficulties with imprecision were evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared

  5. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)

  6. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    Science.gov (United States)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
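
    The key likelihood ingredient is the probability that a pool of k fish tests positive given prevalence pi, sensitivity Se, and specificity Sp. A grid-based posterior sketch (in place of the Gibbs sampler used in the paper; all data values hypothetical):

        import numpy as np
        from scipy.stats import binom

        k = 10                     # fish per pool
        n_pools, n_pos = 60, 9     # hypothetical observed pools and positive pools
        se, sp = 0.95, 0.98        # assumed assay sensitivity and specificity

        # Probability that a pool tests positive given individual prevalence pi.
        pi = np.linspace(0.0, 0.2, 2001)
        p_pool = se * (1.0 - (1.0 - pi) ** k) + (1.0 - sp) * (1.0 - pi) ** k

        # Uniform prior on pi; grid posterior proportional to the binomial likelihood.
        post = binom.pmf(n_pos, n_pools, p_pool)
        post /= post.sum()

        print("posterior mean prevalence:", round(float((pi * post).sum()), 4))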

  7. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    Science.gov (United States)

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of the Bayesian analysis methods was examined. Overall results showed that…

  8. A robust bayesian estimate of the concordance correlation coefficient.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.

  9. Estimation of insurance premiums for coverage against natural disaster risk: an application of Bayesian Inference

    NARCIS (Netherlands)

    Paudel, Y.; Botzen, W.J.W.; Aerts, J.C.J.H.

    2013-01-01

    This study applies Bayesian Inference to estimate flood risk for 53 dyke ring areas in the Netherlands, and focuses particularly on the data scarcity and extreme behaviour of catastrophe risk. The probability density curves of flood damage are estimated through Monte Carlo simulations. Based on

  10. A Bayesian framework to estimate diversification rates and their variation through time and space

    Directory of Open Access Journals (Sweden)

    Silvestro Daniele

    2011-10-01

    Background: Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results: We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions: Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling.

  11. Bayesian benefits with JASP

    NARCIS (Netherlands)

    Marsman, M.; Wagenmakers, E.-J.

    2017-01-01

    We illustrate the Bayesian approach to data analysis using the newly developed statistical software program JASP. With JASP, researchers are able to take advantage of the benefits that the Bayesian framework has to offer in terms of parameter estimation and hypothesis testing. The Bayesian

  12. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    OpenAIRE

    Rens van de Schoot; Joris J. Broere; Koen H. Perryck; Mariëlle Zondervan-Zwijnenburg; Nancy E. van Loey

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: ...

  13. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio

    In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only few studies have been carried out. Therefore, several methodological aspects still need to be explored and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding the model residuals. Statistical transformations, such as the Box-Cox transformation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to wrong uncertainty estimates. The present paper aims to explore the influence of the Box-Cox transformation on uncertainty analysis for environmental water quality models. To this end, five cases were considered, one of which used the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by the residual transformation, and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the

  14. Counting and confusion: Bayesian rate estimation with multiple populations

    Science.gov (United States)

    Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt

    2015-01-01

    We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.

  15. Direction-of-Arrival Estimation for Coherent Sources via Sparse Bayesian Learning

    Directory of Open Access Journals (Sweden)

    Zhang-Meng Liu

    2014-01-01

    A spatial filtering-based relevance vector machine (RVM) is proposed in this paper to separate coherent sources and estimate their directions-of-arrival (DOA), with the filter parameters and DOA estimates initialized and refined via sparse Bayesian learning. The RVM is used to exploit the spatial sparsity of the incident signals and gain improved adaptability to much more demanding scenarios, such as low signal-to-noise ratio (SNR), limited snapshots, and spatially adjacent sources, and the spatial filters are introduced to enhance global convergence of the original RVM in the case of coherent sources. The proposed method adapts to arbitrary array geometry, and simulation results show that it surpasses the existing methods in DOA estimation performance.

  16. A Bayesian approach to estimating variance components within a multivariate generalizability theory framework.

    Science.gov (United States)

    Jiang, Zhehan; Skorupski, William

    2017-12-12

    In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation (namely, using frequentist approaches) has limits, leading researchers to fail to take full advantage of the information that mG theory can offer regarding the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.

  17. EEG-fMRI Bayesian framework for neural activity estimation: a simulation study

    Science.gov (United States)

    Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo

    2016-12-01

    Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better estimation of the underlying neural activity. In this simulation study we show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity originates in a given brain area and is detected by means of both measurement techniques. We chose a resting-state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information from EEG and fMRI concurrently, we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with simultaneous EEG-fMRI data. The application of such an approach to real data allows a better comprehension of the neural dynamics.

  18. Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.

    Science.gov (United States)

    McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark

    2018-07-01

    To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    The aim of this paper is to construct a forecasting model oriented towards predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  20. Uncertainty Estimation of Shear-wave Velocity Structure from Bayesian Inversion of Microtremor Array Dispersion Data

    Science.gov (United States)

    Dosso, S. E.; Molnar, S.; Cassidy, J.

    2010-12-01

    Bayesian inversion of microtremor array dispersion data is applied, with evaluation of data errors and model parameterization, to produce the most-probable shear-wave velocity (VS) profile together with quantitative uncertainty estimates. Generally, the most important property characterizing earthquake site response is the subsurface VS structure. The microtremor array method determines phase velocity dispersion of Rayleigh surface waves from multi-instrument recordings of urban noise. Inversion of dispersion curves for VS structure is a non-unique and nonlinear problem such that meaningful evaluation of confidence intervals is required. Quantitative uncertainty estimation requires not only a nonlinear inversion approach that samples models proportional to their probability, but also rigorous estimation of the data error statistics and an appropriate model parameterization. A Bayesian formulation represents the solution of the inverse problem in terms of the posterior probability density (PPD) of the geophysical model parameters. Markov-chain Monte Carlo methods are used with an efficient implementation of Metropolis-Hastings sampling to provide an unbiased sample from the PPD to compute parameter uncertainties and inter-relationships. Nonparametric estimation of a data error covariance matrix from residual analysis is applied with rigorous a posteriori statistical tests to validate the covariance estimate and the assumption of a Gaussian error distribution. The most appropriate model parameterization is determined using the Bayesian information criterion (BIC), which provides the simplest model consistent with the resolving power of the data. Parameter uncertainties are found to be under-estimated when data error correlations are neglected and when compressional-wave velocity and/or density (nuisance) parameters are fixed in the inversion. Bayesian inversion of microtremor array data is applied at two sites in British Columbia, the area of highest seismic risk in

  1. Bayesian inference in genetic parameter estimation of visual scores in Nellore beef-cattle

    Science.gov (United States)

    2009-01-01

    The aim of this study was to estimate the components of variance and genetic parameters for the visual scores which constitute the Morphological Evaluation System (MES), such as body structure (S), precocity (P) and musculature (M) in Nellore beef-cattle at the weaning and yearling stages, by using threshold Bayesian models. The information used for this was gleaned from visual scores of 5,407 animals evaluated at the weaning and 2,649 at the yearling stages. The genetic parameters for visual score traits were estimated through two-trait analysis, using the threshold animal model, with Bayesian statistics methodology and MTGSAM (Multiple Trait Gibbs Sampler for Animal Models) threshold software. Heritability estimates for S, P and M were 0.68, 0.65 and 0.62 (at weaning) and 0.44, 0.38 and 0.32 (at the yearling stage), respectively. Heritability estimates for S, P and M were found to be high, and so it is expected that these traits should respond favorably to direct selection. The visual scores evaluated at the weaning and yearling stages might be used in the composition of new selection indexes, as they presented sufficient genetic variability to promote genetic progress in such morphological traits. PMID:21637450

  2. Robust bayesian analysis of an autoregressive model with ...

    African Journals Online (AJOL)

    In this work, robust Bayesian analysis of the Bayesian estimation of an autoregressive model with exponential innovations is performed. Using a Bayesian robustness methodology, we show that, using a suitable generalized quadratic loss, we obtain optimal Bayesian estimators of the parameters corresponding to the ...

  3. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and on the unmixing of hyperspectral images.
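
    The core idea can be sketched as alternating least squares CPD in which, after each update, the columns of the constrained factor are snapped to their best-matching dictionary atoms (a naive illustration under these assumptions, not the authors' exact algorithm):

        import numpy as np

        rng = np.random.default_rng(0)
        I, J, K, R, n_atoms = 20, 15, 10, 3, 40
        D = rng.normal(size=(I, n_atoms))
        D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms

        # Synthetic tensor whose mode-1 factor consists of dictionary atoms.
        A = D[:, rng.choice(n_atoms, R, replace=False)]
        B, C = rng.normal(size=(J, R)), rng.normal(size=(K, R))
        T = np.einsum('ir,jr,kr->ijk', A, B, C)

        def kr(U, V):                                  # column-wise Khatri-Rao product
            return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

        def ls(Y, M):                                  # least-squares solve of Y ~ F @ M.T
            return np.linalg.lstsq(M, Y.T, rcond=None)[0].T

        Ae = rng.normal(size=(I, R)); Be = rng.normal(size=(J, R)); Ce = rng.normal(size=(K, R))
        for _ in range(100):
            Ae = ls(T.reshape(I, -1), kr(Be, Ce))
            # Dictionary step: snap each column of Ae to its best-correlated atom, rescaled.
            idx = np.abs(D.T @ Ae).argmax(axis=0)
            Ae = D[:, idx] * np.einsum('ir,ir->r', D[:, idx], Ae)
            Be = ls(T.transpose(1, 0, 2).reshape(J, -1), kr(Ae, Ce))
            Ce = ls(T.transpose(2, 0, 1).reshape(K, -1), kr(Ae, Be))

        err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', Ae, Be, Ce)) / np.linalg.norm(T)
        print("relative reconstruction error:", err)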

  4. A tensor approach to the estimation of hydraulic conductivities in ...

    African Journals Online (AJOL)

    Based on the field measurements of the physical properties of fractured rocks, the anisotropic properties of hydraulic conductivity (HC) of the fractured rock aquifer can be assessed and presented using a tensor approach called hydraulic conductivity tensor. Three types of HC values, namely point value, axial value and flow ...

  5. Bayesian methods for data analysis

    CERN Document Server

    Carlin, Bradley P.

    2009-01-01

    Contents: Approaches for statistical inference: Introduction; Motivating Vignettes; Defining the Approaches; The Bayes-Frequentist Controversy; Some Basic Bayesian Models. The Bayes approach: Introduction; Prior Distributions; Bayesian Inference; Hierarchical Modeling; Model Assessment; Nonparametric Methods. Bayesian computation: Introduction; Asymptotic Methods; Noniterative Monte Carlo Methods; Markov Chain Monte Carlo Methods. Model criticism and selection: Bayesian Modeling; Bayesian Robustness; Model Assessment; Bayes Factors via Marginal Density Estimation; Bayes Factors

  6. QCD vacuum tensor susceptibility and properties of transversely polarized mesons

    International Nuclear Information System (INIS)

    Bakulev, A.P.; Mikhajlov, S.V.

    1999-01-01

    We re-estimate the tensor susceptibility of the QCD vacuum, χ, and to this end, we re-estimate the leptonic decay constants for transversely polarized ρ-, ρ'- and b1-mesons. The origin of the susceptibility is analyzed using duality between the ρ- and b1-channels in a 2-point correlator of tensor currents, and we disagree with [2] on both the OPE expansion and the value of the QCD vacuum tensor susceptibility. Using our value for the latter, we determine new estimates of the nucleon tensor charges related to the first moment of the transverse structure function h1 of the nucleon.

  7. Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard

    2004-09-01

    We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries, and for segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise-to-signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as an affine camera model or a homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms and show that better performance on nonstatic scenes is achieved. Results on challenging data sets are presented.

  8. Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

    Directory of Open Access Journals (Sweden)

    Qixuan Bi

    2017-06-01

    In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained. The Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods. An analysis of a real dataset is also performed.
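
    With a common shape parameter β, the transformation X^(-β) maps an inverse Weibull variable with CDF exp(-θ·x^(-β)) to an exponential variable with rate θ, so the stress-strength reliability R = P(X > Y) reduces to the exponential case θx/(θx + θy). A quick Monte Carlo check under this parameterization (hypothetical values):

        import numpy as np

        rng = np.random.default_rng(0)
        beta, theta_x, theta_y = 2.0, 1.5, 3.0   # common shape, two scale-type parameters
        n = 200_000

        # If F(x) = exp(-theta * x**-beta), then X**-beta is exponential with rate theta.
        x = rng.exponential(1.0 / theta_x, n) ** (-1.0 / beta)
        y = rng.exponential(1.0 / theta_y, n) ** (-1.0 / beta)

        r_mc = np.mean(x > y)                        # Monte Carlo estimate of P(X > Y)
        r_closed = theta_x / (theta_x + theta_y)     # closed form under a common shape
        print(f"Monte Carlo: {r_mc:.4f}, closed form: {r_closed:.4f}")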

  9. Top-down approach in protein RDC data analysis: de novo estimation of the alignment tensor

    International Nuclear Information System (INIS)

    Chen Kang; Tjandra, Nico

    2007-01-01

    In solution NMR spectroscopy the residual dipolar coupling (RDC) is invaluable in improving both the precision and accuracy of NMR structures during their structural refinement. The RDC also provides the potential to determine protein structure de novo. These procedures are only effective when an accurate estimate of the alignment tensor has already been made. Here we present a top-down approach, starting from the secondary structure elements and finishing at the residue level, for RDC data analysis in order to obtain a better estimate of the alignment tensor. Using only the RDCs from N-H bonds of residues in α-helices and CA-CO bonds in β-strands, we are able to determine the offset and the approximate amplitude of the RDC modulation curve for each secondary structure element, which are subsequently used as targets for global minimization. The alignment order parameters and the orientation of the major principal axis of an individual helix or strand, with respect to the alignment frame, can be determined in each of the eight quadrants of a sphere. The subsequent minimization against the RDCs of all residues within the helix or strand segment can be carried out with fixed alignment order parameters to improve the accuracy of the orientation. For the helical protein Bax, the three components Axx, Ayy and Azz of the alignment order can be determined with this method on average to within 2.3% deviation from the values calculated with the available atomic coordinates. Similarly, for the β-sheet protein Ubiquitin they agree on average to within 8.5%. The larger discrepancy in β-strand parameters comes from both the diversity of the β-sheet structure and the lower precision of CA-CO RDCs. This top-down approach is a robust method for alignment tensor estimation and also holds promise for providing a protein topological fold using limited sets of RDCs.

  10. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
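
    For a scalar parameter, the standard ESS the abstract refers to is N / (1 + 2 Σ ρ_k), where ρ_k are the chain's lag-k autocorrelations. A minimal sketch (with a simple truncation at the first negative autocorrelation; the tree-topology extension developed in the paper is not shown):

        import numpy as np

        def effective_sample_size(chain):
            chain = np.asarray(chain, dtype=float)
            n = chain.size
            centered = chain - chain.mean()
            # Autocovariance for all lags, normalized by the lag-0 value.
            acov = np.correlate(centered, centered, mode='full')[n - 1:] / n
            rho = acov / acov[0]
            # Sum positive-lag autocorrelations until the first negative value.
            s = 0.0
            for k in range(1, n):
                if rho[k] < 0:
                    break
                s += rho[k]
            return n / (1.0 + 2.0 * s)

        rng = np.random.default_rng(0)
        x = np.zeros(10_000)
        for t in range(1, x.size):                 # AR(1) chain with strong autocorrelation
            x[t] = 0.9 * x[t - 1] + rng.normal()
        print("ESS of 10,000 correlated samples:", round(effective_sample_size(x)))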

  11. Bayesian networks of age estimation and classification based on dental evidence: A study on the third molar mineralization.

    Science.gov (United States)

    Sironi, Emanuele; Pinchi, Vilma; Pradella, Francesco; Focardi, Martina; Bozza, Silvia; Taroni, Franco

    2018-04-01

    Not only does the Bayesian approach offer a rational and logical environment for evidence evaluation in a forensic framework, but it also allows scientists to coherently deal with uncertainty related to a collection of multiple items of evidence, due to its flexible nature. Such flexibility might come at the expense of elevated computational complexity, which can be handled by using specific probabilistic graphical tools, namely Bayesian networks. In the current work, such probabilistic tools are used for evaluating dental evidence related to the development of third molars. A set of relevant properties characterizing the graphical models is discussed, and Bayesian networks are implemented to deal with the inferential process lying behind the estimation procedure, as well as to provide age estimates. Such properties include operationality, flexibility, coherence, transparency and sensitivity. A data sample composed of Italian subjects was employed for the analysis; results were in agreement with previous studies in terms of point estimates and age classification. The influence of the prior probability elicitation on the Bayesian estimates and classifications was also analyzed. Findings also supported the opportunity to take multiple teeth into consideration in the evaluative procedure, since this results in increased robustness towards the prior probability elicitation process, as well as in more favorable outcomes from a forensic perspective. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  12. Efficient Bayesian estimates for discrimination among topologically different systems biology models.

    Science.gov (United States)

    Hagen, David R; Tidor, Bruce

    2015-02-01

    A major effort in systems biology is the development of mathematical models that describe complex biological systems at multiple scales and levels of abstraction. Determining the topology (the set of interactions) of a biological system from observations of the system's behavior is an important and difficult problem. Here we present and demonstrate new methodology for efficiently computing the probability distribution over a set of topologies based on consistency with existing measurements. Key features of the new approach include derivation in a Bayesian framework, incorporation of prior probability distributions of topologies and parameters, and use of an analytically integrable linearization based on the Fisher information matrix that is responsible for large gains in efficiency. The new method was demonstrated on a collection of four biological topologies representing a kinase and phosphatase that operate in opposition to each other with either processive or distributive kinetics, giving 8-12 parameters for each topology. The linearization produced an approximate result very rapidly (CPU minutes) that was highly accurate on its own, as compared to a Monte Carlo method guaranteed to converge to the correct answer but at greater cost (CPU weeks). The Monte Carlo method developed and applied here used the linearization method as a starting point and importance sampling to approach the Bayesian answer in acceptable time. Other inexpensive methods to estimate probabilities produced poor approximations for this system, with likelihood estimation showing its well-known bias toward topologies with more parameters and the Akaike and Schwarz Information Criteria showing a strong bias toward topologies with fewer parameters. These results suggest that this linear approximation may be an effective compromise, providing an answer whose accuracy is near the true Bayesian answer, but at a cost near the common heuristics.

  13. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.

  14. BAYES-HEP: Bayesian belief networks for estimation of human error probability

    International Nuclear Information System (INIS)

    Karthick, M.; Senthil Kumar, C.; Paul, Robert T.

    2017-01-01

    Human errors contribute a significant portion of risk in safety-critical applications, and methods for the estimation of human error probability (HEP) have been a topic of research for over a decade. The scarce data available on human errors and the large uncertainty involved in the prediction of human error probabilities make the task difficult. This paper presents a Bayesian belief network (BBN) model for human error probability estimation in safety-critical functions of a nuclear power plant. The developed BBN model helps to estimate HEP with limited human intervention. A step-by-step illustration of the application of the method and its subsequent evaluation is provided with a relevant case study, and the model is expected to provide useful insights into risk assessment studies.
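
    As a concrete illustration, a miniature BBN of this kind can be built with the pgmpy library (the library choice, the two performance-shaping factors, and all probabilities below are our own invention, not taken from the paper):

        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        # Hypothetical structure: training quality and stress both influence error.
        model = BayesianNetwork([("Training", "Error"), ("Stress", "Error")])
        cpd_tr = TabularCPD("Training", 2, [[0.8], [0.2]])  # 0 = adequate, 1 = poor
        cpd_st = TabularCPD("Stress", 2, [[0.7], [0.3]])    # 0 = low, 1 = high
        cpd_er = TabularCPD(
            "Error", 2,
            # P(Error | Training, Stress); columns: (0,0), (0,1), (1,0), (1,1)
            [[0.999, 0.99, 0.98, 0.90],   # no error
             [0.001, 0.01, 0.02, 0.10]],  # error
            evidence=["Training", "Stress"], evidence_card=[2, 2])
        model.add_cpds(cpd_tr, cpd_st, cpd_er)
        assert model.check_model()

        # Posterior human error probability given evidence of high stress
        print(VariableElimination(model).query(["Error"], evidence={"Stress": 1}))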

  15. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2014-01-01

    The hybrid Monte Carlo algorithm (HMCA) is applied to Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd-order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation within the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. It is therefore concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
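
    The 2MNI update itself is simple to sketch. The toy HMC sampler below targets a standard normal rather than the RSV posterior, and splits each step with the minimum-norm coefficient lambda ~ 0.1932; the step size and trajectory length are illustrative.

        import numpy as np

        LAM = 0.1931833  # 2nd-order minimum-norm coefficient

        def grad_S(theta):          # toy action S(theta) = theta^2 / 2
            return theta

        def traj_2mni(theta, p, eps, n_steps):
            # Per step: P(lam*eps) Q(eps/2) P((1-2lam)*eps) Q(eps/2) P(lam*eps)
            for _ in range(n_steps):
                p -= LAM * eps * grad_S(theta)
                theta += 0.5 * eps * p
                p -= (1 - 2 * LAM) * eps * grad_S(theta)
                theta += 0.5 * eps * p
                p -= LAM * eps * grad_S(theta)
            return theta, p

        rng = np.random.default_rng(1)
        theta, samples = 0.0, []
        for _ in range(5000):
            p = rng.normal()
            h0 = 0.5 * p**2 + 0.5 * theta**2
            th_new, p_new = traj_2mni(theta, p, eps=0.5, n_steps=10)
            h1 = 0.5 * p_new**2 + 0.5 * th_new**2
            if rng.random() < np.exp(h0 - h1):   # Metropolis accept/reject
                theta = th_new
            samples.append(theta)
        print(np.mean(samples), np.var(samples))  # should be close to 0 and 1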

  16. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.

  17. Projection-based Bayesian recursive estimation of ARX model with uniform innovations

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Pavelková, Lenka

    2007-01-01

    Roč. 56, 9/10 (2007), s. 646-655 ISSN 0167-6911 R&D Projects: GA AV ČR 1ET100750401; GA MŠk 2C06001; GA MDS 1F43A/003/120 Institutional research plan: CEZ:AV0Z10750506 Keywords : ARX model * Bayesian recursive estimation * Uniform distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.634, year: 2007 http://dx.doi.org/10.1016/j.sysconle.2007.03.005

  18. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  19. Prediction of myelopathic level in cervical spondylotic myelopathy using diffusion tensor imaging.

    Science.gov (United States)

    Wang, Shu-Qiang; Li, Xiang; Cui, Jiao-Long; Li, Han-Xiong; Luk, Keith D K; Hu, Yong

    2015-06-01

    To investigate the use of a newly designed machine learning-based classifier for the automatic identification of myelopathic levels in cervical spondylotic myelopathy (CSM). In all, 58 normal volunteers and 16 subjects with CSM were recruited for diffusion tensor imaging (DTI) acquisition. The eigenvalues were extracted from the DTI images as the selected features. Three classifiers (naive Bayesian, support vector machine, and support tensor machine) and fractional anisotropy (FA) values were employed to identify myelopathic levels. The results were compared with clinical level diagnoses, and accuracy, sensitivity, and specificity were calculated to evaluate the performance of the developed classifiers. The accuracy of the support tensor machine was the highest (93.62%) among the three classifiers. The support tensor machine also showed excellent capacity to identify true positives (sensitivity: 84.62%) and true negatives (specificity: 97.06%). The accuracy of the FA value was the lowest (76%) of all the methods. The classifier-based method using eigenvalues had a better performance in identifying the levels of CSM than diagnosis using FA values, and the support tensor machine was the best of the three classifiers. © 2014 Wiley Periodicals, Inc.
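
    A minimal sketch of the classification step using scikit-learn is given below; there is no off-the-shelf support tensor machine, so only naive Bayes and a support vector machine are shown, and the synthetic "eigenvalue" features are invented stand-ins for the DTI data.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Hypothetical eigenvalue triples for healthy (0) and myelopathic (1) levels
        X = np.vstack([rng.normal([1.7, 0.4, 0.3], 0.1, size=(60, 3)),
                       rng.normal([1.4, 0.6, 0.5], 0.1, size=(20, 3))])
        y = np.r_[np.zeros(60), np.ones(20)]

        for clf in (GaussianNB(), SVC(kernel="rbf")):
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(type(clf).__name__, round(acc, 3))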

  20. Estimating mountain basin-mean precipitation from streamflow using Bayesian inference

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.

    2015-10-01

    Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.

  1. Cross-Cultural Invariance of the Mental Toughness Inventory Among Australian, Chinese, and Malaysian Athletes: A Bayesian Estimation Approach.

    Science.gov (United States)

    Gucciardi, Daniel F; Zhang, Chun-Qing; Ponnusamy, Vellapandian; Si, Gangyan; Stenling, Andreas

    2016-04-01

    The aims of this study were to assess the cross-cultural invariance of athletes' self-reports of mental toughness and to introduce and illustrate the application of approximate measurement invariance using Bayesian estimation for sport and exercise psychology scholars. Athletes from Australia (n = 353, Mage = 19.13, SD = 3.27, men = 161), China (n = 254, Mage = 17.82, SD = 2.28, men = 138), and Malaysia (n = 341, Mage = 19.13, SD = 3.27, men = 200) provided a cross-sectional snapshot of their mental toughness. The cross-cultural invariance of the mental toughness inventory in terms of (a) the factor structure (configural invariance), (b) factor loadings (metric invariance), and (c) item intercepts (scalar invariance) was tested using an approximate measurement framework with Bayesian estimation. Results indicated that approximate metric and scalar invariance was established. From a methodological standpoint, this study demonstrated the usefulness and flexibility of Bayesian estimation for single-sample and multigroup analyses of measurement instruments. Substantively, the current findings suggest that the measurement of mental toughness requires cultural adjustments to better capture the contextually salient (emic) aspects of this concept.

  2. A Bayesian model for binary Markov chains

    Directory of Open Access Journals (Sweden)

    Belkheir Essebbar

    2004-02-01

    This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
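
    Under the simplifying assumption of independent Jeffreys Beta(1/2, 1/2) priors on each row (the paper's prior additionally allows the transition probabilities to be correlated, which is why MCMC is needed there), the posterior is conjugate and can be sketched directly; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        P_true = np.array([[0.8, 0.2], [0.4, 0.6]])   # invented transition matrix
        x = [0]
        for _ in range(500):
            x.append(rng.choice(2, p=P_true[x[-1]]))
        x = np.asarray(x)

        n = np.zeros((2, 2))
        np.add.at(n, (x[:-1], x[1:]), 1)              # transition counts n[i, j]

        # Row-wise conjugate update: P(i -> 1) | data ~ Beta(n_i1 + 1/2, n_i0 + 1/2)
        post_mean = (n + 0.5) / (n.sum(axis=1, keepdims=True) + 1.0)
        print(post_mean)                              # posterior mean transition matrix

        # Posterior draws quantify uncertainty without any MCMC
        draws = rng.beta(n[:, 1] + 0.5, n[:, 0] + 0.5, size=(1000, 2))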

  3. Recursive Bayesian estimation of autoregressive model with uniform noise using approximation by parallelotopes

    Czech Academy of Sciences Publication Activity Database

    Pavelková, Lenka; Jirsa, Ladislav

    2017-01-01

    Roč. 31, č. 8 (2017), s. 1184-1192 ISSN 0890-6327 R&D Projects: GA MŠk 7D12004 Institutional support: RVO:67985556 Keywords: approximate parameter estimation * ARX model * Bayesian estimation * bounded noise * Kullback-Leibler divergence * parallelotope Subject RIV: BC - Control Systems Theory OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 1.708, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/pavelkova-0472081.pdf

  4. Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach

    International Nuclear Information System (INIS)

    Cabal, Gonzalo; Kluson, Jaroslav

    2008-01-01

    Bayesian inference is used to determine the air kerma rate based on a set of in situ environmental gamma spectra measured with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the air kerma rate estimate but also in the gamma spectrum, which is unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of this detection system were calculated using a Monte Carlo code. For the calculations of the spectra as well as the air kerma rate, the WinBUGS program was used. WinBUGS is dedicated software for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of these calculations are shown and compared with other, non-Bayesian approaches such as the Scofield-Gold iterative method and the maximum entropy method.

  5. Structural Estimation of the Output Gap: A Bayesian DSGE Approach for the U.S. Economy

    OpenAIRE

    Yasuo Hirose; Saori Naganuma

    2007-01-01

    We estimate the output gap that is consistent with a fully specified DSGE model. Given the structural parameters estimated using Bayesian methods, we estimate the output gap that is defined as a deviation of output from its flexible-price equilibrium. Our output gap illustrates the U.S. business cycles well, compared with other estimates. We find that the main source of the output gap movements is the demand shocks, but that the productivity shocks contributed to the stable output gap in the ...

  6. Bayesian estimation of regularization and atlas building in diffeomorphic image registration.

    Science.gov (United States)

    Zhang, Miaomiao; Singh, Nikhil; Fletcher, P Thomas

    2013-01-01

    This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.

  7. Estimation of under-reported visceral leishmaniasis (VL) cases in Bihar: a Bayesian approach

    Directory of Open Access Journals (Sweden)

    A Ranjan

    2013-12-01

    Background: Visceral leishmaniasis (VL) is a major health problem in the state of Bihar and adjoining areas in India. In the absence of any active surveillance mechanism for the disease, there seems to be gross under-reporting of VL cases. Objective: The objective of this study was to estimate the extent of under-reporting of VL cases in Bihar using a pooled analysis of published papers. Method: We calculated the pooled common ratio (RRMH) based on three studies and combined it with a prior distribution of the ratio using the inverse-variance weighting method. A Bayesian method was used to estimate the posterior distribution of the "under-reporting factor" (the ratio of unreported to reported cases). Results: The posterior distribution of the ratio of unreported to reported cases yielded a mean of 3.558, with 95% posterior limits of 2.81 and 4.50. Conclusion: The Bayesian approach provides evidence that the total number of VL cases in the state may be more than three times the currently reported figures.

  8. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    Science.gov (United States)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action by first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of results from the Markov chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides better sampling efficiency by reusing all the generated samples.
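
    A stripped-down adaptive importance sampling loop in the spirit of AMIS can be sketched as follows; this toy omits AMIS's deterministic-mixture reweighting and sample recycling, and uses an invented Gaussian posterior over a 2-D source location.

        import numpy as np
        from scipy.stats import multivariate_normal

        def log_post(xy):                 # hypothetical posterior, peaked at the source
            src = np.array([3.0, -1.0])
            return -0.5 * np.sum((xy - src) ** 2 / 0.5**2, axis=-1)

        rng = np.random.default_rng(5)
        mu, cov = np.zeros(2), 25.0 * np.eye(2)       # broad initial proposal
        for _ in range(5):
            x = rng.multivariate_normal(mu, cov, size=2000)
            logw = log_post(x) - multivariate_normal(mu, cov).logpdf(x)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            mu = w @ x                                # adapt proposal mean ...
            diff = x - mu
            cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)  # ... and covariance
        print(mu)   # importance-weighted estimate of the source location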

  9. Estimated value of insurance premium due to Citarum River flood by using Bayesian method

    Science.gov (United States)

    Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.

    2018-03-01

    The Citarum river in South Bandung, West Java, Indonesia, floods almost every year, causing property damage and economic loss. The risk of loss can be mitigated through a flood insurance program. In this paper, we discuss the estimation of insurance premiums for Citarum river floods using a Bayesian method. It is assumed that the flood loss data follow a Pareto distribution with a fat right tail. The estimation of the distribution model parameters is done using the Bayesian method. First, parameter estimation is done under the assumption that the prior comes from the Gamma distribution family, while the observation data follow a Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The analysis yields the following estimated premium values based on the pure premium principle: for a loss of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss of IDR 574.53 million, a premium of IDR 308.95 million. These premium estimates can serve as a reference for setting a reasonable premium that neither burdens the insured nor results in a loss for the insurer.
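
    When the Pareto scale (loss threshold) is treated as known, the Gamma prior on the shape parameter is conjugate, so the premium calculation can be sketched directly; the threshold, hyperparameters, and simulated losses below are invented and do not reproduce the paper's data.

        import numpy as np

        rng = np.random.default_rng(42)
        xm = 50.0                                      # known Pareto scale (million IDR)
        losses = xm * (1.0 - rng.random(200)) ** (-1.0 / 3.0)  # simulated losses, shape = 3

        # Gamma(a0, rate b0) prior on the Pareto shape alpha; conjugate posterior:
        a0, b0 = 2.0, 1.0
        a_n = a0 + losses.size
        b_n = b0 + np.sum(np.log(losses / xm))
        alpha = rng.gamma(a_n, 1.0 / b_n, size=10_000)  # posterior draws of alpha

        # Pure premium principle: premium = E[loss]; Pareto mean = alpha*xm/(alpha-1)
        ok = alpha > 1.0
        print(a_n / b_n, np.mean(alpha[ok] * xm / (alpha[ok] - 1.0)))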

  10. Basics of Bayesian methods.

    Science.gov (United States)

    Ghosh, Sujit K

    2010-01-01

    Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science, including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation, which provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution, which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
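
    The prior-to-posterior updating described above can be illustrated with the textbook Beta-Binomial conjugate pair (numbers invented):

        from scipy import stats

        # Prior belief about a success probability: Beta(2, 2); data: 7 successes in 10 trials
        a0, b0, k, n = 2, 2, 7, 10
        posterior = stats.beta(a0 + k, b0 + n - k)          # conjugate update
        print(posterior.mean(), posterior.interval(0.95))   # point estimate and 95% interval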

  11. Estimation of expected number of accidents and workforce unavailability through Bayesian population variability analysis and Markov-based model

    International Nuclear Information System (INIS)

    Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier

    2016-01-01

    Occupational accidents have several negative consequences for employees, employers, the environment and the people surrounding the locale where the accident takes place. Some types of accidents correspond to low frequency-high consequence (long sick leave) events, and classical statistical approaches are ineffective in these cases because the available datasets are generally sparse and contain censored recordings. In this context, we propose a Bayesian population variability method for estimating the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is used to estimate the uncertainty over the expected number of accidents and the work time lost. The use of Bayesian analysis along with the Markov approach thus aims at investigating future trends regarding occupational accidents in a workplace as well as enabling better management of the labor force and prevention efforts. An application example is presented in order to validate the proposed approach; this case uses available data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • The results show the proposed model is not too sensitive to the prior estimates.

  12. Guideline for Bayesian Net based Software Fault Estimation Method for Reactor Protection System

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Park, Gee Yong; Jang, Seung Cheol

    2011-01-01

    The purpose of this paper is to provide a preliminary guideline for the estimation of software faults in safety-critical software, for example, the software of a reactor protection system. As the fault estimation method is based on a Bayesian net, which intensively uses subjective probability and informal data, it is necessary to define a formal procedure for the method to minimize the variability of the results. The guideline describes the assumptions, limitations and uncertainties, and the products of the fault estimation method. The procedure for conducting the software fault estimation method is then outlined, highlighting the major tasks involved. The contents of the guideline are based on our own experience and a review of research guidelines developed for a PSA.

  13. Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.

    Science.gov (United States)

    Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod

    2017-07-15

    There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA), as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, dynamic brain functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and the transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from the penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks - three core neurocognitive systems with a central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal

  14. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    International Nuclear Information System (INIS)

    Wen Fang-Qing; Zhang Gong; Ben De

    2015-01-01

    This paper addresses the direction of arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, can be applied to scenes with limited data support and low signal-to-noise ratio (SNR). Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm provides more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms.

  15. Nonparametric Bayesian density estimation on manifolds with applications to planar shapes.

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David B

    2010-12-01

    Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback-Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.

  16. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Much of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by the data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone, menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed using the MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and the model's predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  17. Tensor rank is not multiplicative under the tensor product

    NARCIS (Netherlands)

    M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)

    2018-01-01

    The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the

  18. Tensor rank is not multiplicative under the tensor product

    NARCIS (Netherlands)

    M. Christandl (Matthias); A. K. Jensen (Asger Kjærulff); J. Zuiddam (Jeroen)

    2017-01-01

    The tensor rank of a tensor is the smallest number r such that the tensor can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor (not to be confused with the "tensor Kronecker product" used in

  19. Tensor rank is not multiplicative under the tensor product

    OpenAIRE

    Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen

    2017-01-01

    The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an l-tensor. The tensor product of s and t is a (k + l)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection between restrictions and degenerations. A result of our study is that tensor rank is not in general multiplicative under the tensor product. This answers a question of Draisma and Saptharishi. Specif...

  20. Tensor surgery and tensor rank

    NARCIS (Netherlands)

    M. Christandl (Matthias); J. Zuiddam (Jeroen)

    2018-01-01

    We introduce a method for transforming low-order tensors into higher-order tensors and apply it to tensors defined by graphs and hypergraphs. The transformation proceeds according to a surgery-like procedure that splits vertices, creates and absorbs virtual edges and inserts new vertices

  1. Tensor surgery and tensor rank

    NARCIS (Netherlands)

    M. Christandl (Matthias); J. Zuiddam (Jeroen)

    2016-01-01

    We introduce a method for transforming low-order tensors into higher-order tensors and apply it to tensors defined by graphs and hypergraphs. The transformation proceeds according to a surgery-like procedure that splits vertices, creates and absorbs virtual edges and inserts new

  2. Tensor rank is not multiplicative under the tensor product

    DEFF Research Database (Denmark)

    Christandl, Matthias; Jensen, Asger Kjærulff; Zuiddam, Jeroen

    2018-01-01

    The tensor rank of a tensor t is the smallest number r such that t can be decomposed as a sum of r simple tensors. Let s be a k-tensor and let t be an ℓ-tensor. The tensor product of s and t is a (k+ℓ)-tensor. Tensor rank is sub-multiplicative under the tensor product. We revisit the connection b...

  3. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
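
    For reference, the simplest of the three prescriptions, random-walk Metropolis-Hastings, can be sketched on a toy Gaussian likelihood of the kind used in the comparison; the step size and chain length are illustrative.

        import numpy as np

        def log_post(theta):               # toy 2-D Gaussian posterior
            return -0.5 * np.sum(theta**2)

        rng = np.random.default_rng(3)
        theta, chain, step = np.zeros(2), [], 0.8
        for _ in range(20_000):
            prop = theta + step * rng.normal(size=2)
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop               # accept; otherwise keep the current state
            chain.append(theta)
        chain = np.asarray(chain)
        print(chain.mean(axis=0), chain.std(axis=0))   # posterior mean and std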

  4. Can natural selection encode Bayesian priors?

    Science.gov (United States)

    Ramírez, Juan Camilo; Marshall, James A R

    2017-08-07

    The evolutionary success of many organisms depends on their ability to make decisions based on estimates of the state of their environment (e.g., predation risk) from uncertain information. These decision problems have optimal solutions and individuals in nature are expected to evolve the behavioural mechanisms to make decisions as if using the optimal solutions. Bayesian inference is the optimal method to produce estimates from uncertain data, thus natural selection is expected to favour individuals with the behavioural mechanisms to make decisions as if they were computing Bayesian estimates in typically-experienced environments, although this does not necessarily imply that favoured decision-makers do perform Bayesian computations exactly. Each individual should evolve to behave as if updating a prior estimate of the unknown environment variable to a posterior estimate as it collects evidence. The prior estimate represents the decision-maker's default belief regarding the environment variable, i.e., the individual's default 'worldview' of the environment. This default belief has been hypothesised to be shaped by natural selection and represent the environment experienced by the individual's ancestors. We present an evolutionary model to explore how accurately Bayesian prior estimates can be encoded genetically and shaped by natural selection when decision-makers learn from uncertain information. The model simulates the evolution of a population of individuals that are required to estimate the probability of an event. Every individual has a prior estimate of this probability and collects noisy cues from the environment in order to update its prior belief to a Bayesian posterior estimate with the evidence gained. The prior is inherited and passed on to offspring. Fitness increases with the accuracy of the posterior estimates produced. Simulations show that prior estimates become accurate over evolutionary time. In addition to these 'Bayesian' individuals, we also

  5. Application of Bayesian Networks for Estimation of Individual Psychological Characteristics

    KAUST Repository

    Litvinenko, Alexander

    2017-07-19

    In this paper we apply Bayesian networks to develop more accurate final overall estimations of the psychological characteristics of an individual, based on psychological test results. Psychological tests which identify how much of a certain factor an individual possesses are very popular and quite common in the modern world. We call this value for a given factor the final overall estimation. Examples of factors include stress resistance, readiness to take a risk, the ability to concentrate on certain complicated work, and many others. An accurate, qualitative and comprehensive assessment of human potential is one of the most important challenges in any company or collective. The most common way of studying the psychological characteristics of each single person is testing. Psychologists and sociologists are constantly working on improving the quality of their tests. Despite the serious work done by psychologists, the questions in tests often do not produce enough feedback due to the use of relatively poor estimation systems. The overall estimation is usually based on the personal experience and subjective perception of a psychologist or a group of psychologists about the investigated psychological personality factors.

  6. Application of Bayesian Networks for Estimation of Individual Psychological Characteristics

    KAUST Repository

    Litvinenko, Alexander; Litvinenko, Natalya

    2017-01-01

    In this paper we apply Bayesian networks to develop more accurate final overall estimations of the psychological characteristics of an individual, based on psychological test results. Psychological tests which identify how much of a certain factor an individual possesses are very popular and quite common in the modern world. We call this value for a given factor the final overall estimation. Examples of factors include stress resistance, readiness to take a risk, the ability to concentrate on certain complicated work, and many others. An accurate, qualitative and comprehensive assessment of human potential is one of the most important challenges in any company or collective. The most common way of studying the psychological characteristics of each single person is testing. Psychologists and sociologists are constantly working on improving the quality of their tests. Despite the serious work done by psychologists, the questions in tests often do not produce enough feedback due to the use of relatively poor estimation systems. The overall estimation is usually based on the personal experience and subjective perception of a psychologist or a group of psychologists about the investigated psychological personality factors.

  7. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    Science.gov (United States)

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  8. Tensor Factorization for Low-Rank Tensor Completion.

    Science.gov (United States)

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, and it has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, due to their naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over the state of the art, including the TNN and matricization methods.
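
    A conceptual sketch of completion-by-factorization is given below; it uses a CP-style factorization with plain gradient descent on the observed entries rather than the paper's t-SVD-based factorization and alternating minimization, and the rank, learning rate, and observation fraction are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        I, J, K, r = 20, 20, 20, 3
        A, B, C = (rng.normal(size=(n, r)) for n in (I, J, K))
        T = np.einsum("ir,jr,kr->ijk", A, B, C)        # ground-truth low-rank tensor
        mask = rng.random(T.shape) < 0.3               # 30% of entries observed

        U = 0.1 * rng.normal(size=(I, r))
        V = 0.1 * rng.normal(size=(J, r))
        W = 0.1 * rng.normal(size=(K, r))
        lr = 1e-3
        for _ in range(5000):
            E = mask * (np.einsum("ir,jr,kr->ijk", U, V, W) - T)  # observed residual
            U -= lr * np.einsum("ijk,jr,kr->ir", E, V, W)
            V -= lr * np.einsum("ijk,ir,kr->jr", E, U, W)
            W -= lr * np.einsum("ijk,ir,jr->kr", E, U, V)

        That = np.einsum("ir,jr,kr->ijk", U, V, W)
        err = np.linalg.norm(~mask * (That - T)) / np.linalg.norm(~mask * T)
        print(err)    # relative error on the unobserved entries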

  9. Speech Enhancement Using Gaussian Mixture Models, Explicit Bayesian Estimation and Wiener Filtering

    Directory of Open Access Journals (Sweden)

    M. H. Savoji

    2014-09-01

    Gaussian mixture models (GMMs) of the power spectral densities of speech and noise are used with explicit Bayesian estimation in Wiener filtering of noisy speech. No assumption is made about the nature or stationarity of the noise. No voice activity detection (VAD) or any other means is employed to estimate the input SNR. The GMM mean vectors are used to form sets of over-determined systems of equations whose solutions lead to the first estimates of the speech and noise power spectra. The noise source is also identified and the input SNR estimated in this first step. These first estimates are then refined using approximate but explicit MMSE and MAP estimation formulations. The refined estimates are then used in a Wiener filter to reduce noise and enhance the noisy speech. The proposed schemes show good results. Moreover, it is shown that the MAP explicit solution, introduced here for the first time, reduces the computation time to less than one third, with a slightly higher improvement in SNR and PESQ score and also less distortion, in comparison to the MMSE solution.
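
    The final filtering stage reduces to the classic frequency-domain Wiener gain, sketched below with invented per-bin PSD values; the GMM-based estimation of the speech and noise PSDs is not reproduced here.

        import numpy as np

        def wiener_gain(speech_psd, noise_psd):
            # Classic Wiener filter: H(f) = S(f) / (S(f) + N(f))
            return speech_psd / (speech_psd + noise_psd)

        S = np.array([4.0, 2.0, 0.5, 0.1])             # estimated speech PSD per bin
        N = np.array([0.5, 0.5, 0.5, 0.5])             # estimated noise PSD per bin
        noisy = np.array([2.1 + 0.3j, -1.0 + 0.9j, 0.2 - 0.4j, 0.1 + 0.1j])
        print(np.round(wiener_gain(S, N) * noisy, 3))  # enhanced spectrum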

  10. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings, from a course on Bayesian methods in reliability, encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  11. An introduction to Bayesian statistics in health psychology.

    Science.gov (United States)

    Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske

    2017-09-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.

  12. A Review of Tensors and Tensor Signal Processing

    Science.gov (United States)

    Cammoun, L.; Castaño-Moraga, C. A.; Muñoz-Moreno, E.; Sosa-Cabrera, D.; Acar, B.; Rodriguez-Florido, M. A.; Brun, A.; Knutsson, H.; Thiran, J. P.

    Tensors have been broadly used in mathematics and physics, since they are a generalization of scalars and vectors and allow more complex properties to be represented. In this chapter we present an overview of some tensor applications, especially those focused on the image processing field. From a mathematical point of view, a great deal of work has been done on tensor calculus, which obviously is more complex than scalar or vector calculus. Moreover, tensors can represent the metric of a vector space, which is very useful in the field of differential geometry. In physics, tensors have been used to describe several magnitudes, such as the strain or stress of materials. In solid mechanics, tensors are used to define the generalized Hooke's law, where a fourth-order tensor relates the strain and stress tensors. In fluid dynamics, the velocity gradient tensor provides information about the vorticity and the strain of fluids. An electromagnetic tensor is also defined, which simplifies the notation of the Maxwell equations. But tensors are not confined to physics and mathematics. They have been used, for instance, in medical imaging, where we can highlight two applications: the diffusion tensor image, which represents how molecules diffuse inside tissues and is broadly used for brain imaging; and tensorial elastography, which computes the strain and vorticity tensors to analyze tissue properties. Tensors have also been used in computer vision to provide information about the local structure or to define anisotropic image filters.

  13. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

    We have developed a new Bayesian approach to retrieve oceanic rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements from rain gauges located on Japanese islands. To demonstrate the improvement, retrievals are also compared with those from the TRMM Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieve similar horizontal distributions in terms of the locations of the eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rates for typhoon cases. Accurate rain rate estimates from this method can be assimilated into models to improve forecasts and prevent potential damage in Taiwan during typhoon seasons.

  14. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.

  15. Applied tensor stereology

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel

    In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle s...

  16. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    Science.gov (United States)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when

  17. An Improved Estimation Using Polya-Gamma Augmentation for Bayesian Structural Equation Models with Dichotomous Variables

    Science.gov (United States)

    Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.

    2018-01-01

    Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…

  18. Bayesian Estimator for Angle Recovery: Event Classification and Reconstruction in Positron Emission Tomography

    International Nuclear Information System (INIS)

    Foudray, Angela M K; Levin, Craig S

    2007-01-01

    PET at the highest level is an inverse problem: reconstruct the location of the emission (which localizes biological function) from detected photons. Ideally, one would like to directly measure an annihilation photon's incident direction on the detector. In the developed algorithm, Bayesian Estimation for Angle Recovery (BEAR), we utilized the increased information gathered from localizing photon interactions in the detector and developed a Bayesian estimator for a photon's incident direction. Probability distribution functions (PDFs) were filled using an interaction-energy-weighted mean or center of mass (COM) reference space, which had the following computational advantages: (1) a significant reduction in the size of the data in measurement space, making further manipulation and searches faster; (2) the construction of COM space does not depend on measurement location, takes advantage of measurement symmetries, and allows data to be added to the training set without knowledge and recalculation of prior training data; (3) the calculation of the posterior probability map is fully parallelizable and can scale to any number of processors. These PDFs were used to estimate the point spread function (PSF) in incident angle space, (i) for algorithm assessment and (ii) to provide probability selection criteria for classification. The algorithm calculates both the incident θ and φ angles, with ∼16 degrees RMS in both angles, limiting the incoming direction to a narrow cone. Feature size did not improve using the BEAR algorithm as an angle filter, but the contrast ratio improved by 40% on average.

  19. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit... the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error... sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  20. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali; McCabe, Matthew; Evans, Jason P.; Mariethoz, Gregoire; Kavetski, Dmitri

    2013-01-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model

  1. Bayesian Estimation of Source Parameters and Associated Coulomb Failure Stress Changes for the 2005 Fukuoka (Japan) Earthquake

    KAUST Repository

    Dutta, Rishabh; Jonsson, Sigurjon; Wang, Teng; Vasyura-Bathke, Hannes

    2017-01-01

    solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic

  2. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on

  3. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. For data generated from the normal distribution, our ABC method performs well; however, the Wan et al. method is best for estimating the standard deviation under normality. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
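
    A minimal rejection-ABC sketch of this idea follows, assuming a normal generating model and a reported (median, min, max, n); the priors, acceptance rule, and tolerance are assumptions of this sketch, not the authors' implementation, which handles several summary-statistic sets and distributions.

    ```python
    # Hedged ABC sketch: draw (mu, sigma) from broad priors, simulate a study
    # of size n, keep the draws whose summaries best match the reported ones.
    import numpy as np

    rng = np.random.default_rng(1)

    def abc_mean_sd(median, minimum, maximum, n,
                    n_draws=20_000, keep_frac=0.01):
        mu = rng.uniform(minimum, maximum, n_draws)          # assumed priors
        sigma = rng.uniform(1e-6, maximum - minimum, n_draws)
        # One simulated study of size n per prior draw (normal model assumed).
        x = rng.normal(mu[:, None], sigma[:, None], (n_draws, n))
        sim = np.stack([np.median(x, axis=1), x.min(axis=1), x.max(axis=1)],
                       axis=1)
        obs = np.array([median, minimum, maximum])
        # Accept the draws whose simulated summaries are closest to observed.
        dist = np.linalg.norm((sim - obs) / (maximum - minimum), axis=1)
        keep = dist.argsort()[: max(1, int(keep_frac * n_draws))]
        return mu[keep].mean(), sigma[keep].mean()

    print(abc_mean_sd(median=10.0, minimum=2.0, maximum=25.0, n=40))
    ```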

  4. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    Science.gov (United States)

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in the unsupervised case, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A closed-form solution to tensor voting: theory and applications.

    Science.gov (United States)

    Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard

    2012-08-01

    We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sampling consensus typically used in existing robust statistical techniques. We performed a quantitative evaluation of its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
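
    For intuition, the toy below sketches the general second-order voting idea that CFTV makes exact: each point accumulates distance-decayed outer-product votes from its neighbors, and the eigenvalue gap of the accumulated tensor (stick saliency) separates points on a salient curve from outliers. This simplification is ours for illustration; it is not the closed-form solution of the paper.

    ```python
    # Toy second-order voting: accumulate decayed outer products of the unit
    # directions to neighbors, then read off stick saliency l1 - l2.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 2 * np.pi, 60)
    curve = np.c_[np.cos(t), np.sin(t)]                 # structured points
    outliers = rng.uniform(-1.5, 1.5, (15, 2))          # clutter
    pts = np.vstack([curve, outliers])

    sigma = 0.3
    tensors = np.zeros((len(pts), 2, 2))
    for i, p in enumerate(pts):
        d = pts - p                                      # vectors to voters
        w = np.exp(-(d ** 2).sum(axis=1) / sigma ** 2)   # Gaussian decay
        w[i] = 0.0                                       # no self-vote
        n = d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-12)
        tensors[i] = np.einsum('k,ki,kj->ij', w, n, n)   # weighted n n^T

    evals = np.linalg.eigvalsh(tensors)                  # ascending per point
    stick_saliency = evals[:, 1] - evals[:, 0]
    print("mean saliency  curve:", stick_saliency[:60].mean(),
          " outliers:", stick_saliency[60:].mean())
    ```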

  6. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and denoising is thus an indispensable processing step. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
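
    The overall estimate-then-shrink structure can be sketched in a few lines. The toy below substitutes a simple windowed maximum-likelihood local-variance estimate for the paper's MAP/generalized-Gamma machinery and works on a plain 1-D array rather than dual-tree complex wavelet coefficients; all of those substitutions are assumptions of this sketch.

    ```python
    # Sketch: estimate local signal variance, then apply Wiener-like shrinkage.
    import numpy as np

    rng = np.random.default_rng(3)
    sigma_n = 0.5                                  # noise std, assumed known
    clean = rng.laplace(0.0, 1.0, 4096)            # stand-in "coefficients"
    noisy = clean + rng.normal(0.0, sigma_n, clean.size)

    # Local variance from a sliding window of squared coefficients.
    window = 9
    padded = np.pad(noisy, window // 2, mode='reflect')
    local_energy = np.convolve(padded ** 2, np.ones(window) / window, 'valid')
    var_est = np.maximum(local_energy - sigma_n ** 2, 0.0)

    # Wiener-like shrinkage using the estimated local signal variance.
    shrunk = noisy * var_est / (var_est + sigma_n ** 2)
    print("MSE noisy :", np.mean((noisy - clean) ** 2))
    print("MSE shrunk:", np.mean((shrunk - clean) ** 2))
    ```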

  7. Tensor gauge condition and tensor field decomposition

    Science.gov (United States)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

    We discuss various proposals for separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that, as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions, in contrast to the uniqueness of the Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to the awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as the Coulomb gauge can for a vector field.

  8. Estimating mental states of a depressed person with bayesian networks

    NARCIS (Netherlands)

    Klein, Michel C.A.; Modena, Gabriele

    2013-01-01

    In this work in progress paper we present an approach based on Bayesian Networks to model the relationship between mental states and empirical observations in a depressed person. We encode relationships and domain expertise as a Hierarchical Bayesian Network. Mental states are represented as latent

  9. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    Science.gov (United States)

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, through incorporating the concepts of possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also provide insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on trading schemes as well as the system benefit. Compared with conventional optimization methods, BESMA is shown to be advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision

  10. Tensor spherical harmonics and tensor multipoles. II. Minkowski space

    International Nuclear Information System (INIS)

    Daumens, M.; Minnaert, P.

    1976-01-01

    The bases of tensor spherical harmonics and of tensor multipoles discussed in the preceding paper are generalized to the Hilbert space of Minkowski tensor fields. The transformation properties of the tensor multipoles under Lorentz transformations lead to the notion of irreducible tensor multipoles. We show that the usual 4-vector multipoles are themselves irreducible, and we build the irreducible tensor multipoles of second order. We also give their relations with the symmetric tensor multipoles defined by Zerilli for application to gravitational radiation

  11. Estimation of insurance premiums for coverage against natural disaster risk: an application of Bayesian Inference

    Directory of Open Access Journals (Sweden)

    Y. Paudel

    2013-03-01

    Full Text Available This study applies Bayesian Inference to estimate flood risk for 53 dyke ring areas in the Netherlands, and focuses particularly on the data scarcity and extreme behaviour of catastrophe risk. The probability density curves of flood damage are estimated through Monte Carlo simulations. Based on these results, flood insurance premiums are estimated using two different practical methods that each account in different ways for an insurer's risk aversion and the dispersion rate of loss data. This study is of practical relevance because insurers have been considering the introduction of flood insurance in the Netherlands, which is currently not generally available.

  12. Estimation of insurance premiums for coverage against natural disaster risk: an application of Bayesian Inference

    Science.gov (United States)

    Paudel, Y.; Botzen, W. J. W.; Aerts, J. C. J. H.

    2013-03-01

    This study applies Bayesian Inference to estimate flood risk for 53 dyke ring areas in the Netherlands, and focuses particularly on the data scarcity and extreme behaviour of catastrophe risk. The probability density curves of flood damage are estimated through Monte Carlo simulations. Based on these results, flood insurance premiums are estimated using two different practical methods that each account in different ways for an insurer's risk aversion and the dispersion rate of loss data. This study is of practical relevance because insurers have been considering the introduction of flood insurance in the Netherlands, which is currently not generally available.

  13. A Bayesian evidence synthesis approach to estimate disease prevalence in hard-to-reach populations: hepatitis C in New York City

    Directory of Open Access Journals (Sweden)

    Sarah Tan

    2018-06-01

    Full Text Available Existing methods to estimate the prevalence of chronic hepatitis C (HCV in New York City (NYC are limited in scope and fail to assess hard-to-reach subpopulations with highest risk such as injecting drug users (IDUs. To address these limitations, we employ a Bayesian multi-parameter evidence synthesis model to systematically combine multiple sources of data, account for bias in certain data sources, and provide unbiased HCV prevalence estimates with associated uncertainty. Our approach improves on previous estimates by explicitly accounting for injecting drug use and including data from high-risk subpopulations such as the incarcerated, and is more inclusive, utilizing ten NYC data sources. In addition, we derive two new equations to allow age at first injecting drug use data for former and current IDUs to be incorporated into the Bayesian evidence synthesis, a first for this type of model. Our estimated overall HCV prevalence as of 2012 among NYC adults aged 20–59 years is 2.78% (95% CI 2.61–2.94%, which represents between 124,900 and 140,000 chronic HCV cases. These estimates suggest that HCV prevalence in NYC is higher than previously indicated from household surveys (2.2% and the surveillance system (2.37%, and that HCV transmission is increasing among young injecting adults in NYC. An ancillary benefit from our results is an estimate of current IDUs aged 20–59 in NYC: 0.58% or 27,600 individuals. Keywords: Bayesian evidence synthesis, Disease prevalence estimation, Hard-to-reach populations, Injecting drug use, hepatitis C in New York City

  14. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

    KAUST Repository

    Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim

    2016-01-01

    Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian-consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem, from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step on the state, which is shown to enhance the performance of the dual approach with almost no increase in computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF(OSA), and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimates than the joint and dual approaches.

  15. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2016-08-12

    Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian-consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem, from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step on the state, which is shown to enhance the performance of the dual approach with almost no increase in computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF(OSA), and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimates than the joint and dual approaches.
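
    The standard dual scheme these records build on can be sketched on a scalar toy model. The sketch below updates the parameter ensemble first and then the state ensemble at each cycle; the paper's EnKF(OSA) inserts an additional, Bayesian-consistent state update derived from one-step-ahead smoothing, which this toy omits. The model, noise levels, and ensemble sizes are invented for illustration.

    ```python
    # Minimal dual EnKF on x_{k+1} = a*x_k + noise, y_k = x_k + noise,
    # with the parameter a unknown.
    import numpy as np

    rng = np.random.default_rng(4)
    a_true, q, r, T, Ne = 0.9, 0.05, 0.1, 200, 100

    # Synthetic truth and observations.
    x = np.zeros(T)
    for k in range(1, T):
        x[k] = a_true * x[k - 1] + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r), T)

    def enkf_update(ens, obs_ens, y_k):
        """Stochastic EnKF update of `ens` given predicted observations."""
        A = ens - ens.mean(); D = obs_ens - obs_ens.mean()
        K = (A * D).mean() / ((D * D).mean() + r)
        perturbed = y_k + rng.normal(0, np.sqrt(r), ens.size)
        return ens + K * (perturbed - obs_ens)

    a_ens = rng.normal(0.5, 0.2, Ne)           # parameter ensemble
    x_ens = rng.normal(0.0, 1.0, Ne)           # state ensemble
    for k in range(1, T):
        x_fore = a_ens * x_ens + rng.normal(0, np.sqrt(q), Ne)
        a_ens = enkf_update(a_ens, x_fore, y[k])     # parameters first
        x_fore = a_ens * x_ens + rng.normal(0, np.sqrt(q), Ne)
        x_ens = enkf_update(x_ens, x_fore, y[k])     # then the state
    print("estimated a:", a_ens.mean(), " true a:", a_true)
    ```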

  16. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

    Purpose: Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolates tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods: Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results: EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion: GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
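
    The core idea of interpolating shape and orientation separately can be sketched as follows. Here the sorted eigenvalues stand in for the paper's tensor invariants, and orientation is interpolated by quaternion slerp of the eigenvector frames; both substitutions are assumptions of this sketch, not necessarily the paper's construction.

    ```python
    # Sketch: interpolate eigenvalues linearly, eigenvector frames by slerp,
    # then recompose; eigenvalues of the result are exactly the interpolated
    # ones, so tensor shape is preserved (unlike a Euclidean average).
    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def li_interp(T1, T2, t):
        w1, V1 = np.linalg.eigh(T1)          # ascending eigenvalues
        w2, V2 = np.linalg.eigh(T2)
        # Make both eigenvector frames proper rotations (det = +1).
        if np.linalg.det(V1) < 0: V1[:, 0] *= -1
        if np.linalg.det(V2) < 0: V2[:, 0] *= -1
        w = (1 - t) * w1 + t * w2                        # linear in shape
        slerp = Slerp([0.0, 1.0], Rotation.from_matrix(np.stack([V1, V2])))
        V = slerp(t).as_matrix()
        return V @ np.diag(w) @ V.T

    T1 = np.diag([1.0, 0.5, 0.2])                        # anisotropic tensor
    R = Rotation.from_euler('z', 60, degrees=True).as_matrix()
    T2 = R @ T1 @ R.T                                    # same shape, rotated
    Tm = li_interp(T1, T2, 0.5)
    print(np.linalg.eigvalsh(Tm))                        # shape preserved
    ```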

  17. Bayesian Estimation and Selection of Nonlinear Vector Error Correction Models: The Case of the Sugar-Ethanol-Oil Nexus in Brazil

    OpenAIRE

    Kelvin Balcombe; George Rapsomanikis

    2008-01-01

    Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest ...

  18. Fast Bayesian optimal experimental design for seismic source inversion

    KAUST Repository

    Long, Quan

    2015-07-01

    We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.
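
    The Laplace-accelerated expected-information-gain (EIG) workflow described here can be illustrated on a linear toy forward model, where the Laplace approximation is exact and the EIG reduces to an average Gaussian KL divergence. The forward model, priors, and receiver geometry below are invented for illustration only.

    ```python
    # Sketch: EIG for a linear-Gaussian model G(theta) = J @ theta; receiver
    # placement enters through the rows of J.
    import numpy as np

    rng = np.random.default_rng(5)

    def gaussian_kl(m0, S0, m1, S1):
        """KL( N(m0,S0) || N(m1,S1) )."""
        k = m0.size
        S1inv = np.linalg.inv(S1)
        return 0.5 * (np.trace(S1inv @ S0)
                      + (m1 - m0) @ S1inv @ (m1 - m0)
                      - k + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    def expected_information_gain(J, prior_cov, noise_var, n_outer=200):
        k = J.shape[1]
        prior_mean = np.zeros(k)
        # Posterior covariance is data-independent in the linear-Gaussian case.
        H = J.T @ J / noise_var + np.linalg.inv(prior_cov)   # Hessian
        post_cov = np.linalg.inv(H)
        eig = 0.0
        for _ in range(n_outer):                             # outer MC loop
            theta = rng.multivariate_normal(prior_mean, prior_cov)
            y = J @ theta + rng.normal(0, np.sqrt(noise_var), J.shape[0])
            post_mean = post_cov @ (J.T @ y / noise_var)
            eig += gaussian_kl(post_mean, post_cov, prior_mean, prior_cov)
        return eig / n_outer

    # Two candidate designs: 3 receivers vs 6 receivers (random geometry).
    for n_rec in (3, 6):
        J = rng.normal(size=(n_rec, 4))
        print(n_rec, "receivers, EIG ~",
              round(expected_information_gain(J, np.eye(4), 0.1), 3))
    ```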

  19. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan

    2016-01-06

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  20. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan; Motamed, Mohammad; Tempone, Raul

    2016-01-01

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  1. Learning Bayesian networks for discrete data

    KAUST Repository

    Liang, Faming

    2009-02-01

    Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learning Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. Firstly, it possesses a self-adjusting mechanism and thus essentially avoids the local-trap problem suffered by conventional MCMC simulation-based approaches to learning Bayesian networks. Secondly, it falls into the class of dynamic importance sampling algorithms; the network features can be inferred by dynamically weighted averaging of the samples generated in the learning process, and the resulting estimates can have much lower variation than single-model-based estimates. The numerical results indicate that our approach can mix much faster over the space of Bayesian networks than conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.

  2. Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method

    Science.gov (United States)

    Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.

    2018-01-01

    Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured. This is intended to prevent losses in the future. In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. It is assumed that the frequency of claims follows a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the claim frequency and claim amount distributions are estimated using Bayesian methods. Furthermore, the estimated distributions of claim frequency and claim amounts are used to estimate the aggregate risk model as well as its mean and variance. The estimated mean and variance of the aggregate risk are then used to predict the premium to be charged to the insured. Based on the analysis, the frequency of claims follows a Poisson distribution with parameter value λ = 5.827, while the claim amounts follow a Gamma distribution with parameter values p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively, and the predicted pure premium to be charged to the insured amounts to IDR 2,722,290.82. These predictions of aggregate claims and premiums can serve as a reference for the insurance company’s decision making in managing reserves and premiums for motor vehicle insurance.
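
    A conjugate-prior sketch of this Poisson-Gamma premium calculation follows. The priors, data, and the assumption that the Gamma shape of the claim sizes is known are all inventions of this sketch; the paper estimates both severity parameters.

    ```python
    # Sketch: Gamma prior on the Poisson claim rate; conjugate Gamma prior on
    # the severity inverse-scale; pure premium = E[frequency] * E[severity].
    import numpy as np

    rng = np.random.default_rng(6)
    claims_per_period = rng.poisson(5.8, 24)           # 24 observed periods
    claim_sizes = rng.gamma(7.9, 1.4, claims_per_period.sum())

    # Frequency: lambda ~ Gamma(a0, b0) prior, Poisson likelihood.
    a0, b0 = 1.0, 0.1                                   # weak prior (assumed)
    a_post = a0 + claims_per_period.sum()
    b_post = b0 + claims_per_period.size
    lam_mean = a_post / b_post                          # posterior mean rate

    # Severity: X ~ Gamma(shape p, scale theta) with p assumed known;
    # conjugate Gamma prior on beta = 1/theta, so beta | data is Gamma.
    p = 7.9
    c0, d0 = 1.0, 1.0
    c_post = c0 + p * claim_sizes.size
    d_post = d0 + claim_sizes.sum()
    mean_severity = p * d_post / (c_post - 1)           # p * E[theta | data]

    print("posterior mean frequency:", round(lam_mean, 3))
    print("posterior mean severity :", round(mean_severity, 2))
    print("pure premium            :", round(lam_mean * mean_severity, 2))
    ```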

  3. Bayesian estimation of predator diet composition from fatty acids and stable isotopes

    Directory of Open Access Journals (Sweden)

    Philipp Neubauer

    2015-04-01

    Full Text Available Quantitative analysis of stable isotopes (SI and, more recently, fatty acid profiles (FAP are useful and complementary tools for estimating the relative contribution of different prey items in the diet of a predator. The combination of these two approaches, however, has thus far been limited and qualitative. We propose a mixing model for FAP that follows the Bayesian machinery employed in state-of-the-art mixing models for SI. This framework provides both point estimates and probability distributions for individual and population level diet proportions. Where fat content and conversion coefficients are available, they can be used to improve diet estimates. This model can be explicitly integrated with analogous models for SI to increase resolution and clarify predator–prey relationships. We apply our model to simulated data and an experimental dataset that allows us to illustrate modeling strategies and demonstrate model performance. Our methods are provided as an open source software package for the statistical computing environment R.

  4. On improving the efficiency of tensor voting.

    Science.gov (United States)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Pizarro, Luis; Burgeth, Bernhard; Weickert, Joachim

    2011-11-01

    This paper proposes two alternative formulations to reduce the high computational complexity of tensor voting, a robust perceptual grouping technique used to extract salient information from noisy data. The first scheme consists of numerical approximations of the votes, which have been derived from an in-depth analysis of the plate and ball voting processes. The second scheme simplifies the formulation while keeping the same perceptual meaning of the original tensor voting: the stick tensor voting and the stick component of the plate tensor voting must reinforce surfaceness, the plate components of both the plate and ball tensor voting must boost curveness, whereas junctionness must be strengthened by the ball component of the ball tensor voting. Two new parameters have been proposed for the second formulation in order to control the potentially conflicting influence of the stick component of the plate vote and the ball component of the ball vote. Results show that the proposed formulations can be used in applications where efficiency is an issue, since they have a complexity of order O(1). Moreover, the second proposed formulation has been shown to be more appropriate than the original tensor voting for estimating saliencies by appropriately setting the two new parameters.

  5. Bayesian Methods for Radiation Detection and Dosimetry

    CERN Document Server

    Groer, Peter G

    2002-01-01

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high- and low-activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities; graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We used stochastic processes in a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed comp...

  6. Evaluation of Oceanic Transport Statistics By Use of Transient Tracers and Bayesian Methods

    Science.gov (United States)

    Trossman, D. S.; Thompson, L.; Mecking, S.; Bryan, F.; Peacock, S.

    2013-12-01

    Key variables that quantify the time scales over which atmospheric signals penetrate into the oceanic interior, and their uncertainties, are computed using Bayesian methods and transient tracers from both models and observations. First, the mean residence times, subduction rates, and formation rates of Subtropical Mode Water (STMW) and Subpolar Mode Water (SPMW) in the North Atlantic and Subantarctic Mode Water (SAMW) in the Southern Ocean are estimated by combining a model and observations of chlorofluorocarbon-11 (CFC-11) via Bayesian Model Averaging (BMA), a statistical technique that weights model estimates according to how closely they agree with observations. Second, a Bayesian method is presented to find two oceanic transport parameters associated with the age distribution of ocean waters, the transit-time distribution (TTD), by combining an eddying global ocean model's estimate of the TTD with hydrographic observations of CFC-11, temperature, and salinity. Uncertainties associated with objectively mapping irregularly spaced bottle data are quantified by making use of a thin-plate spline and are then propagated via the two Bayesian techniques. It is found that the subduction of STMW, SPMW, and SAMW is mostly an advective process, but up to about one-third of STMW subduction is likely due to non-advective processes. Also, while the formation of STMW is mostly due to subduction, the formation of SPMW is mostly due to other processes. About half of the formation of SAMW is due to subduction and half is due to other processes. A combination of air-sea flux, acting on relatively short time scales, and turbulent mixing, acting on a wide range of time scales, is likely the dominant SPMW erosion mechanism. Air-sea flux is likely responsible for most STMW erosion, and turbulent mixing is likely responsible for most SAMW erosion. Two oceanic transport parameters, the mean age of a water parcel and the half-variance associated with the TTD, estimated using the model's tracers as

  7. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs a Bayesian estimation method in estimating the test cumulative distribution function. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
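
    The underlying CDF-matching normalization can be sketched directly. In the toy below, the `weight` blend of the empirical test CDF with a prior CDF is a crude stand-in for the paper's Bayesian test-CDF estimate; the data and blending rule are assumptions of this sketch.

    ```python
    # Sketch: map test features through their (prior-smoothed) empirical CDF,
    # then through the inverse reference CDF.
    import numpy as np

    def equalize(test, reference, prior=None, weight=0.0):
        ref_sorted = np.sort(reference)
        # Empirical test CDF values, optionally shrunk toward a prior CDF.
        ranks = (np.argsort(np.argsort(test)) + 0.5) / test.size
        if prior is not None:
            prior_cdf = np.searchsorted(np.sort(prior), test) / prior.size
            ranks = (1 - weight) * ranks + weight * prior_cdf
        # Inverse reference CDF via quantile lookup.
        idx = np.clip((ranks * ref_sorted.size).astype(int), 0,
                      ref_sorted.size - 1)
        return ref_sorted[idx]

    rng = np.random.default_rng(7)
    clean_train = rng.normal(0, 1, 10_000)          # reference features
    noisy_test = rng.normal(0.8, 1.7, 50)           # shifted/scaled test set
    normalized = equalize(noisy_test, clean_train,
                          prior=clean_train, weight=0.3)
    print("before:", noisy_test.mean(), noisy_test.std())
    print("after :", normalized.mean(), normalized.std())
    ```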

  8. MATRIX-VECTOR ALGORITHMS OF LOCAL POSTERIORI INFERENCE IN ALGEBRAIC BAYESIAN NETWORKS ON QUANTA PROPOSITIONS

    Directory of Open Access Journals (Sweden)

    A. A. Zolotin

    2015-07-01

    Full Text Available Posterior inference is one of the three kinds of probabilistic-logic inference in the theory of probabilistic graphical models, and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper addresses the description of local posterior inference in algebraic Bayesian networks, a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the tensor product of matrices, the Kronecker degree, and the Hadamard product. Matrix equations for calculating vectors of posterior probabilities within posterior inference in knowledge patterns with quanta propositions are obtained. Similar equations have already been discussed within the theory of algebraic Bayesian networks, but they were built only for the case of posterior inference in knowledge patterns on ideals of conjuncts. During the synthesis and development of the matrix-vector equations on quanta-proposition probability vectors, a number of earlier results concerning normalizing factors in posterior inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence - deterministic, stochastic and inaccurate - combined with scalar and interval estimates of the probability truth of propositional formulas in the knowledge patterns. Linear programming problems are formed; their solutions give the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. This description of posterior inference makes it possible to extend the set of knowledge pattern types usable in local and global posterior inference, as well as to simplify complex software implementation through existing third-party libraries that effectively support the representation and processing of matrices and vectors when

  9. Moment-tensor solutions estimated using optimal filter theory: Global seismicity, 2001

    Science.gov (United States)

    Sipkin, S.A.; Bufe, C.G.; Zirbes, M.D.

    2003-01-01

    This paper is the 12th in a series published yearly containing moment-tensor solutions computed at the US Geological Survey using an algorithm based on the theory of optimal filter design (Sipkin, 1982 and Sipkin, 1986b). An inversion has been attempted for all earthquakes with a magnitude, mb or MS, of 5.5 or greater. Previous listings include solutions for earthquakes that occurred from 1981 to 2000 (Sipkin, 1986b; Sipkin and Needham, 1989, Sipkin and Needham, 1991, Sipkin and Needham, 1992, Sipkin and Needham, 1993, Sipkin and Needham, 1994a and Sipkin and Needham, 1994b; Sipkin and Zirbes, 1996 and Sipkin and Zirbes, 1997; Sipkin et al., 1998, Sipkin et al., 1999, Sipkin et al., 2000a, Sipkin et al., 2000b and Sipkin et al., 2002).The entire USGS moment-tensor catalog can be obtained via anonymous FTP at ftp://ghtftp.cr.usgs.gov. After logging on, change directory to “momten”. This directory contains two compressed ASCII files that contain the finalized solutions, “mt.lis.Z” and “fmech.lis.Z”. “mt.lis.Z” contains the elements of the moment tensors along with detailed event information; “fmech.lis.Z” contains the decompositions into the principal axes and best double-couples. The fast moment-tensor solutions for more recent events that have not yet been finalized and added to the catalog, are gathered by month in the files “jan01.lis.Z”, etc. “fmech.doc.Z” describes the various fields.

  10. A default Bayesian hypothesis test for mediation.

    Science.gov (United States)

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  11. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    Science.gov (United States)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we

  12. Constraints on the tensor-to-scalar ratio for non-power-law models

    International Nuclear Information System (INIS)

    Vázquez, J. Alberto; Bridges, M.; Ma, Yin-Zhe; Hobson, M.P.

    2013-01-01

    Recent cosmological observations hint at a deviation from the simple power-law form of the primordial spectrum of curvature perturbations. In this paper we show that in the presence of a tensor component, a turn-over in the initial spectrum is preferred by current observations, and hence non-power-law models ought to be considered. For instance, for a power-law parameterisation with both a tensor component and a running parameter, current data show a preference for a negative running at more than 2.5σ C.L. As a consequence of this deviation from a power-law, constraints on the tensor-to-scalar ratio r are slightly broader. We also present constraints on the inflationary parameters for a model-independent reconstruction and the Lasenby and Doran (LD) model. In particular, the constraint on the tensor-to-scalar ratio from the LD model is r_LD = 0.11 ± 0.024. In addition to current data, we show expected constraints from Planck-like and CMB-Pol sensitivity experiments by using Markov chain Monte Carlo sampling chains. For all the models, we have included the Bayesian evidence to perform a model selection analysis. The Bayes factor, using current observations, shows a strong preference for the LD model over the standard power-law parameterisation, and provides an insight into the accuracy of differentiating models through future surveys

  13. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that were previously inaccessible. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper Bayes estimators, preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior.
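
    For orientation, in the Gaussian-prior setting of Dashti et al the MAP estimate is characterized variationally as the minimizer of an Onsager-Machlup-type functional. The display below is a standard statement of that form in our own notation: Φ denotes the negative log-likelihood (potential) and E the Cameron-Martin space of the prior N(0, C).

    ```latex
    \[
      u_{\mathrm{MAP}} \in \arg\min_{u \in E} I(u), \qquad
      I(u) = \Phi(u) + \tfrac{1}{2}\,\lVert u \rVert_{E}^{2},
      \qquad \lVert u \rVert_{E} = \lVert C^{-1/2} u \rVert .
    \]
    ```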

  14. Minimum mean square error estimation and approximation of the Bayesian update

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.; Zander, Elmar

    2015-01-01

    Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(w), and a measurement operator Y(u(q); q), where u(q; w) is the uncertain solution. Aim: to identify q(w). The mapping from parameters to observations is usually not invertible; hence this inverse identification problem is generally ill-posed. To identify q(w) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. a polynomial chaos expansion (PCE). New: we derive linear, quadratic, etc., approximations of the full Bayesian update.

  15. Minimum mean square error estimation and approximation of the Bayesian update

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(w), and a measurement operator Y(u(q); q), where u(q; w) is the uncertain solution. Aim: to identify q(w). The mapping from parameters to observations is usually not invertible; hence this inverse identification problem is generally ill-posed. To identify q(w) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. a polynomial chaos expansion (PCE). New: we derive linear, quadratic, etc., approximations of the full Bayesian update.
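
    In the linear special case mentioned in these records, the conditional-expectation update reduces to a Kalman-like correction of the forecast parameter. The display below states that form in our own notation, which is an assumption of this note rather than a quotation from the records.

    ```latex
    % z is the observed data, y(q_f) the predicted measurement, C_{qy} the
    % parameter-measurement covariance, C_{yy} the predicted-measurement
    % covariance, and C_eps the measurement-noise covariance.
    \[
      q_a = q_f + K\bigl(z - y(q_f)\bigr), \qquad
      K = C_{q y}\,\bigl(C_{y y} + C_{\varepsilon}\bigr)^{-1}.
    \]
    ```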

  16. Comparison of Magnetic Susceptibility Tensor and Diffusion Tensor of the Brain.

    Science.gov (United States)

    Li, Wei; Liu, Chunlei

    2013-10-01

    Susceptibility tensor imaging (STI) provides a novel approach for noninvasive assessment of the white matter pathways of the brain. Using mouse brains ex vivo, we compared STI with diffusion tensor imaging (DTI) in terms of tensor values, principal tensor values, anisotropy values, and tensor orientations. Despite the completely different biophysical underpinnings, magnetic susceptibility tensors and diffusion tensors show many similarities in the tensor and principal tensor images; for example, the tensors perpendicular to the fiber direction have the highest gray-white matter contrast, and the largest principal tensor is along the fiber direction. Compared to DTI fractional anisotropy, the susceptibility anisotropy provides much higher sensitivity to the chemical composition of the white matter, especially myelin. The high sensitivity can be further enhanced with the perfusion of ProHance, a gadolinium-based contrast agent. Regarding the tensor orientations, the direction of the largest principal susceptibility tensor agrees with that of the diffusion tensors in major white matter fiber bundles. STI fiber tractography can reconstruct the fiber pathways of the whole corpus callosum and of white matter fiber bundles that are in close contact but in different orientations. There are some differences between susceptibility and diffusion tensor orientations, which are likely due to limitations in the current STI reconstruction. With the development of more accurate reconstruction methods, STI holds promise for probing white matter micro-architecture with more anatomical detail and higher chemical sensitivity.

  17. Bayesian flood forecasting methods: A review

    Science.gov (United States)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation; it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been

  18. Diffusion tensor smoothing through weighted Karcher means

    Science.gov (United States)

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2014-01-01

    Diffusion tensor magnetic resonance imaging (MRI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors (3 × 3 positive definite matrices). Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. On the contrary, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
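
    As a concrete reference point, under the log-Euclidean metric the weighted Karcher (Frechet) mean of SPD tensors has the closed form exp(Σ w_i log T_i); under the affine-invariant metric it must instead be found iteratively. The sketch below computes the log-Euclidean case with toy tensors and weights (in practice the weights would come from a spatial kernel over the neighborhood).

    ```python
    # Weighted log-Euclidean Karcher mean of SPD diffusion tensors.
    import numpy as np
    from scipy.linalg import expm, logm

    def log_euclidean_mean(tensors, weights):
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()               # normalize kernel
        acc = sum(w * logm(T) for w, T in zip(weights, tensors))
        return expm(acc)

    T1 = np.diag([1.0, 0.3, 0.3])                       # two SPD tensors
    T2 = np.diag([0.3, 1.0, 0.3])
    mean = log_euclidean_mean([T1, T2], [0.7, 0.3])     # toy kernel weights
    print(np.round(mean, 4))
    ```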

  19. Killing tensors and conformal Killing tensors from conformal Killing vectors

    International Nuclear Information System (INIS)

    Rani, Raffaele; Edgar, S Brian; Barnes, Alan

    2003-01-01

    Koutras has proposed some methods to construct reducible proper conformal Killing tensors and Killing tensors (which are, in general, irreducible) when a pair of orthogonal conformal Killing vectors exist in a given space. We give the completely general result demonstrating that this severe restriction of orthogonality is unnecessary. In addition, we correct and extend some results concerning Killing tensors constructed from a single conformal Killing vector. A number of examples demonstrate that it is possible to construct a much larger class of reducible proper conformal Killing tensors and Killing tensors than permitted by the Koutras algorithms. In particular, by showing that all conformal Killing tensors are reducible in conformally flat spaces, we have a method of constructing all conformal Killing tensors, and hence all the Killing tensors (which will in general be irreducible) of conformally flat spaces using their conformal Killing vectors

  20. Tensors for physics

    CERN Document Server

    Hess, Siegfried

    2015-01-01

    This book presents the science of tensors in a didactic way. The various types and ranks of tensors and their physical basis are presented. Cartesian tensors are needed for the description of directional phenomena in many branches of physics and for the characterization of the anisotropy of material properties. The first sections of the book provide an introduction to vector and tensor algebra and analysis, with applications to physics, at the undergraduate level. Second-rank tensors, in particular their symmetries, are discussed in detail. Differentiation and integration of fields, including generalizations of the Stokes law and the Gauss theorem, are treated. The physics relevant for the applications in mechanics, quantum mechanics, electrodynamics and hydrodynamics is presented. The second part of the book is devoted to tensors of any rank, at the graduate level. Special topics are irreducible, i.e. symmetric traceless, tensors, isotropic tensors, multipole potential tensors, spin tensors, integration and spin-...

  1. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  2. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for Boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees) ... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  3. Tensor modes on the string theory landscape

    International Nuclear Information System (INIS)

    Westphal, Alexander

    2012-06-01

    We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory.

  4. Tensor modes on the string theory landscape

    Energy Technology Data Exchange (ETDEWEB)

    Westphal, Alexander

    2012-06-15

    We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory.

  5. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Science.gov (United States)

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification.
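
    The closed-form machinery is that of conjugate Bayesian linear regression. A minimal sketch follows; the design matrix, prior scale and noise level are placeholders, not the paper's dMRI bases.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 30, 5
    Phi = rng.normal(size=(n, p))        # stand-in design matrix (basis functions)
    w_true = rng.normal(size=p)
    sigma, tau = 0.1, 1.0                # assumed noise and prior scales
    y = Phi @ w_true + sigma * rng.normal(size=n)

    # Posterior over coefficients w ~ N(mu, Sigma) for y = Phi w + noise
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma**2 + np.eye(p) / tau**2)
    mu = Sigma @ Phi.T @ y / sigma**2

    # Any affine-derived quantity q = a^T w has a closed-form Gaussian posterior
    a = np.ones(p) / p                   # e.g. the mean of the coefficients
    q_mean, q_sd = a @ mu, np.sqrt(a @ Sigma @ a)
    print(f"q = {q_mean:.3f} +/- {q_sd:.3f}")
    ```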

  6. Estimating the true accuracy of diagnostic tests for dengue infection using Bayesian latent class models.

    Directory of Open Access Journals (Sweden)

    Wirichada Pan-ngum

    Accuracy of rapid diagnostic tests for dengue infection has been repeatedly estimated by comparing those tests with reference assays. We hypothesized that those estimates might be inaccurate if the accuracy of the reference assays is not perfect. Here, we investigated this using statistical modeling. Data from a cohort study of 549 patients suspected of dengue infection presenting at Colombo North Teaching Hospital, Ragama, Sri Lanka, that described the application of our reference assay (a combination of Dengue IgM antibody capture ELISA and IgG antibody capture ELISA) and of three rapid diagnostic tests (Panbio NS1 antigen, IgM antibody and IgG antibody rapid immunochromatographic cassette tests) were re-evaluated using Bayesian latent class models (LCMs). The estimated sensitivity and specificity of the reference assay were 62.0% and 99.6%, respectively. The prevalence of dengue infection (24.3%), and the sensitivities and specificities of the Panbio NS1 (45.9% and 97.9%), IgM (54.5% and 95.5%) and IgG (62.1% and 84.5%) tests, as estimated by Bayesian LCMs, were significantly different from those estimated by assuming that the reference assay was perfect. Sensitivity, specificity, PPV and NPV for a combination of NS1, IgM and IgG cassette tests on admission samples were 87.0%, 82.8%, 62.0% and 95.2%, respectively. Our reference assay is an imperfect gold standard. In our setting, the combination of NS1, IgM and IgG rapid diagnostic tests could be used on admission to rule out dengue infection with a high level of accuracy (NPV 95.2%). Further evaluation of rapid diagnostic tests for dengue infection should include the use of appropriate statistical models.
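
    The bias mechanism the latent class analysis corrects for can be reproduced with simple arithmetic. Assuming conditional independence between the index test and the reference, the "apparent" accuracy measured against an imperfect reference differs from the true one; all numbers below are illustrative, not the study's.

    ```python
    # Hypothetical accuracies, for illustration only
    pi = 0.25                   # true prevalence of dengue infection
    se_t, sp_t = 0.80, 0.95     # true sensitivity/specificity of a rapid test
    se_r, sp_r = 0.62, 0.996    # sensitivity/specificity of the imperfect reference

    # Apparent accuracy measured against the reference (conditional independence)
    ap_se = (pi * se_t * se_r + (1 - pi) * (1 - sp_t) * (1 - sp_r)) / \
            (pi * se_r + (1 - pi) * (1 - sp_r))
    ap_sp = (pi * (1 - se_t) * (1 - se_r) + (1 - pi) * sp_t * sp_r) / \
            (pi * (1 - se_r) + (1 - pi) * sp_r)
    print(f"apparent Se = {ap_se:.3f} (true {se_t}), "
          f"apparent Sp = {ap_sp:.3f} (true {sp_t})")
    ```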

  7. Empirical Bayesian estimation in graphical analysis: a voxel-based approach for the determination of the volume of distribution in PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Zanderigo, Francesca [Department of Molecular Imaging and Neuropathology, New York State Psychiatric Institute, New York, NY (United States)], E-mail: francesca.zanderigo@gmail.com; Ogden, R. Todd [Department of Molecular Imaging and Neuropathology, New York State Psychiatric Institute, New York, NY (United States); Department of Psychiatry, College of Physicians and Surgeons, Columbia University, New York, NY (United States); Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, NY (United States); Bertoldo, Alessandra; Cobelli, Claudio [Department of Information Engineering, University of Padova, Padova (Italy); Mann, J. John; Parsey, Ramin V. [Department of Molecular Imaging and Neuropathology, New York State Psychiatric Institute, New York, NY (United States); Department of Psychiatry, College of Physicians and Surgeons, Columbia University, New York, NY (United States)

    2010-05-15

    Introduction: Total volume of distribution (V_T) determined by graphical analysis (GA) of PET data suffers from a noise-dependent bias. Likelihood estimation in GA (LEGA) eliminates this bias at the region of interest (ROI) level, but at voxel noise levels, the variance of the estimators is high, yielding noisy images. We hypothesized that incorporating LEGA V_T estimation in a Bayesian framework would shrink estimators towards prior means, reducing variability and producing meaningful and useful voxel images. Methods: Empirical Bayesian estimation in GA (EBEGA) determines prior distributions using a two-step k-means clustering of voxel activity. Results obtained on eight [11C]-DASB studies are compared with estimators computed by ROI-based LEGA. Results: EBEGA reproduces the results obtained by ROI LEGA while providing low-variability V_T images. Correlation coefficients between average EBEGA V_T and corresponding ROI LEGA V_T range from 0.963 to 0.994. Conclusions: EBEGA is a fully automatic and general approach that can be applied to voxel-level V_T image creation and to any modeling strategy to reduce voxel-level estimation variability without prefiltering of the PET data.

  8. The 1/N Expansion of Tensor Models with Two Symmetric Tensors

    Science.gov (United States)

    Gurau, Razvan

    2018-06-01

    It is well known that tensor models for a tensor with no symmetry admit a 1/N expansion dominated by melonic graphs. This result relies crucially on identifying jackets, which are globally defined ribbon graphs embedded in the tensor graph. In contrast, no result of this kind has so far been established for symmetric tensors, because global jackets do not exist. In this paper we introduce a new approach to the 1/N expansion in tensor models adapted to symmetric tensors. In particular we do not use any global structure like the jackets. We prove that, for any rank D, a tensor model with two symmetric tensors and interactions given by the complete graph K_{D+1} admits a 1/N expansion dominated by melonic graphs.

  9. Estimating the occurrence of foreign material in Advanced Gas-cooled Reactors: A Bayesian Monte Carlo approach

    International Nuclear Information System (INIS)

    Mason, Paolo

    2014-01-01

    Highlights: • The amount of a specific type of foreign material found in UK AGRs has been estimated. • The estimate is based on very few instances of detection in numerous inspections. • A Bayesian Monte Carlo approach was used. • The study supports safety case claims on coolant flow impairment. • The methodology is applicable to any inspection campaign on any plant system. - Abstract: The current occurrence of a particular sort of foreign material in eight UK Advanced Gas-cooled Reactors has been estimated by means of a parametric approach. The study includes both variability, treated in analytic fashion via the combination of standard probability distributions, and the uncertainty in the parameters of the model of choice, whose posterior distribution was inferred in Bayesian fashion by means of a Monte Carlo route consisting of the conditional acceptance of sets of model parameters drawn from a prior distribution based on engineering judgement. The model underlying the present study refers specifically to the re-loading and inspection routines of UK Advanced Gas-cooled Reactors. The approach to inference presented here, however, is of general validity and can be applied to the outcome of any inspection campaign on any plant system, and indeed to any situation in which the outcome of a stochastic process is more easily simulated than described by a probability density or mass function.
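
    A minimal sketch of this conditional-acceptance route (the prior, the binomial observation model and the counts are assumptions for illustration): draw candidate occurrence rates from an engineering-judgement prior, simulate the inspection outcome, and keep only the draws that reproduce the observed count.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_inspections, observed = 1000, 2     # hypothetical inspection campaign

    # Prior on the per-inspection occurrence probability (assumed judgemental prior)
    candidates = rng.gamma(shape=1.0, scale=0.01, size=200_000).clip(max=1.0)

    # Simulate each candidate and conditionally accept those matching the data
    simulated = rng.binomial(n_inspections, candidates)
    posterior = candidates[simulated == observed]

    print(f"accepted {posterior.size} draws; "
          f"posterior mean {posterior.mean():.4f}, "
          f"95% interval {np.percentile(posterior, [2.5, 97.5]).round(4)}")
    ```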

  10. Estimation of Post-Test Probabilities by Residents: Bayesian Reasoning versus Heuristics?

    Science.gov (United States)

    Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P.; Ghali, William; Wright, Bruce; McLaughlin, Kevin

    2014-01-01

    Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of…
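
    The Bayesian reasoning in question is the standard odds form of Bayes' theorem (a textbook identity, not taken from the article):

    ```latex
    \text{post-test odds} \;=\; \text{pre-test odds} \times LR,
    \qquad
    LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}},
    \qquad
    P \;=\; \frac{\text{odds}}{1 + \text{odds}} .
    ```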

  11. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.
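
    As a toy illustration of the modelling idea, once a DAG and its conditional probability tables are fixed (skipping the K2 structure-learning step), system reliability follows by summation over component states; the two-component series system below is hypothetical.

    ```python
    from itertools import product

    # Hypothetical two-component series system: S works iff A and B both work.
    p_work = {"A": 0.95, "B": 0.90}            # assumed component reliabilities
    def p_system_works(a, b):                  # CPT of the system node given parents
        return 1.0 if (a and b) else 0.0

    reliability = sum(
        (p_work["A"] if a else 1 - p_work["A"])
        * (p_work["B"] if b else 1 - p_work["B"])
        * p_system_works(a, b)
        for a, b in product([True, False], repeat=2)
    )
    print(reliability)                          # 0.855 for this series system
    ```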

  12. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.

  13. The effects of noise over the complete space of diffusion tensor shape.

    Science.gov (United States)

    Gahm, Jin Kyu; Kindlmann, Gordon; Ennis, Daniel B

    2014-01-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) is a technique used to quantify the microstructural organization of biological tissues. Multiple images are necessary to reconstruct the tensor data and each acquisition is subject to complex thermal noise. As such, measures of tensor invariants, which characterize components of tensor shape, derived from the tensor data will be biased from their true values. Previous work has examined this bias, but over a narrow range of tensor shape. Herein, we define the mathematics for constructing a tensor from tensor invariants, which permits an intuitive and principled means for building tensors with a complete range of tensor shape and salient microstructural properties. Thereafter, we use this development to evaluate by simulation the effects of noise on characterizing tensor shape over the complete space of tensor shape for three encoding schemes with different SNR and gradient directions. We also define a new framework for determining the distribution of the true values of tensor invariants given their measures, which provides guidance about the confidence the observer should have in the measures. Finally, we present the statistics of tensor invariant estimates over the complete space of tensor shape to demonstrate how the noise sensitivity of tensor invariants varies across the space of tensor shape as well as how the imaging protocol impacts measures of tensor invariants.
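
    For orientation, a small sketch of a standard orthogonal invariant set of the kind analyzed (mean diffusivity, fractional anisotropy and mode; the example tensor is arbitrary):

    ```python
    import numpy as np

    def tensor_invariants(D):
        """Mean diffusivity, fractional anisotropy, and mode of a 3x3 tensor."""
        md = np.trace(D) / 3.0
        A = D - md * np.eye(3)                  # deviatoric (anisotropic) part
        norm_D, norm_A = np.linalg.norm(D), np.linalg.norm(A)
        fa = np.sqrt(1.5) * norm_A / norm_D
        mode = 3.0 * np.sqrt(6.0) * np.linalg.det(A / norm_A)  # -1 planar .. +1 linear
        return md, fa, mode

    D = np.diag([1.7, 0.3, 0.2])                # e.g. a strongly linear tensor
    print(tensor_invariants(D))
    ```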

  14. Bias in tensor based morphometry Stat-ROI measures may result in unrealistic power estimates.

    Science.gov (United States)

    Thompson, Wesley K; Holland, Dominic

    2011-07-01

    A series of reports have recently appeared using tensor-based morphometry with statistically defined regions of interest (Stat-ROIs) to quantify longitudinal atrophy in structural MRIs from the Alzheimer's Disease Neuroimaging Initiative (ADNI). This commentary focuses on one of these reports, Hua et al. (2010), but the issues raised here are relevant to the others as well. Specifically, we point out a temporal pattern of atrophy in subjects with Alzheimer's disease and mild cognitive impairment whereby the majority of atrophy in two years occurs within the first 6 months, resulting in overall elevated estimated rates of change. Using publicly available ADNI data, this temporal pattern is also found in a group of identically processed healthy controls, strongly suggesting that methodological bias is corrupting the measures. The resulting bias seriously impacts the validity of conclusions reached using these measures; for example, sample size estimates reported by Hua et al. (2010) may be underestimated by a factor of five to sixteen.

  15. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with a Gaussian prior and for noise covariance estimation with an inverse-Wishart prior. ... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore

  16. Bayesian Methods for Radiation Detection and Dosimetry

    International Nuclear Information System (INIS)

    Peter G. Groer

    2002-01-01

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high- and low-activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities, and graphs of the densities show the uncertainty in pictorial form. We applied stochastic processes in a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities of compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviation for a simple two-compartment model

  17. The continuous determination of spacetime geometry by the Riemann curvature tensor

    International Nuclear Information System (INIS)

    Rendall, A.D.

    1988-01-01

    It is shown that generically the Riemann tensor of a Lorentz metric on an n-dimensional manifold (n ≥ 4) determines the metric up to a constant factor and hence determines the associated torsion-free connection uniquely. The resulting map from Riemann tensors to connections is continuous in the Whitney C^∞ topology but, at least for some manifolds, constant factors cannot be chosen so as to make the map from Riemann tensors to metrics continuous in that topology. The latter map is, however, continuous in the compact-open C^∞ topology, so that estimates of the metric and its derivatives on a compact set can be obtained from similar estimates on the curvature and its derivatives. (author)

  18. Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R

    Directory of Open Access Journals (Sweden)

    Terrance D. Savitsky

    2016-08-01

    We present growfunctions for R, which offers Bayesian nonparametric estimation models for the analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time- and domain-indexed dependence structure for a collection of time series: (1) a Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) an intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot

  19. Rank of tensors of l-out-of-k functions: an application in probabilistic inference

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2011-01-01

    Roč. 47, č. 3 (2011), s. 317-336 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA201/09/1891; GA ČR GEICC/08/E010 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian network * probabilistic inference * tensor rank Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/MTR/vomlel-0361630.pdf

  20. STATE ESTIMATION IN ALCOHOLIC CONTINUOUS FERMENTATION OF ZYMOMONAS MOBILIS USING RECURSIVE BAYESIAN FILTERING: A SIMULATION APPROACH

    Directory of Open Access Journals (Sweden)

    Olga Lucia Quintero

    2008-05-01

    This work presents a state estimator for a continuous bioprocess. To this aim, nonlinear filtering theory based on the recursive application of Bayes' rule and Monte Carlo techniques is used. Recursive Bayesian filtering with Sampling Importance Resampling (SIR) is employed, including different kinds of resampling. Bioprocesses generally have strongly non-linear and non-Gaussian characteristics, which makes this tool attractive. The estimator's behavior and performance are illustrated with the continuous alcoholic fermentation of Zymomonas mobilis. Few applications of this tool have been reported in the biotechnological area.
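
    A minimal SIR particle filter sketch for a generic scalar state (the random-walk dynamics and Gaussian observation model are placeholders, not the fermentation model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 1000                                   # time steps, particles
    true_x = np.cumsum(rng.normal(0, 0.1, T)) + 1.0   # hidden state (e.g. biomass)
    obs = true_x + rng.normal(0, 0.3, T)              # noisy measurements

    particles = rng.normal(1.0, 0.5, N)
    estimates = []
    for y in obs:
        particles += rng.normal(0, 0.1, N)            # propagate through dynamics
        w = np.exp(-0.5 * ((y - particles) / 0.3)**2) # likelihood weights (Bayes rule)
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)              # multinomial resampling
        particles = particles[idx]
        estimates.append(particles.mean())

    print(np.mean((np.array(estimates) - true_x)**2))  # tracking MSE
    ```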

  1. Uncertainties for seismic moment tensors and applications to nuclear explosions, volcanic events, and earthquakes

    Science.gov (United States)

    Tape, C.; Alvizuri, C. R.; Silwal, V.; Tape, W.

    2017-12-01

    When considered as a point source, a seismic source can be characterized in terms of its origin time, hypocenter, moment tensor, and source time function. The seismologist's task is to estimate these parameters--and their uncertainties--from three-component ground motion recorded at irregularly spaced stations. We will focus on one portion of this problem: the estimation of the moment tensor and its uncertainties. With magnitude estimated separately, we are left with five parameters describing the normalized moment tensor. A lune of normalized eigenvalue triples can be used to visualize the two parameters (lune longitude and lune latitude) describing the source type, while the conventional strike, dip, and rake angles can be used to characterize the orientation. Slight modifications of these five parameters lead to a uniform parameterization of moment tensors--uniform in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. For a moment tensor m that we have inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in our inference of m. The calculation of P(V) requires knowing both the probability P(w) and the fractional volume V(w) of the set of moment tensors within a given angular radius w of m. We apply this approach to several different data sets, including nuclear explosions from the Nevada Test Site, volcanic events from Uturuncu (Bolivia), and earthquakes. Several challenges remain: choosing an appropriate misfit function, handling time shifts between data and synthetic waveforms, and extending the uncertainty estimation to include more source parameters (e.g., hypocenter and source time function).

  2. On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit

    International Nuclear Information System (INIS)

    Heising, C.D.; Zamora-Reyes, J.A.

    1996-01-01

    A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute deposit-specific information, E_2. This information is combined, through a single-stage application of Bayes' theorem, with the more extensive and well-established information on all similar formations in the region, E_1. It is assumed that the best estimate for the deposit-specific case should include the relevant experimental evidence collected from other like formations, giving incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)
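
    Under a normal model for the grade (an assumption made here purely for illustration), the single-stage update has the familiar conjugate form, with the regional evidence E_1 acting as the prior and the deposit data E_2 as the likelihood:

    ```latex
    \mu \mid E_1 \sim N(\mu_1, \sigma_1^{2}), \qquad
    \bar{x} \mid \mu \sim N\!\bigl(\mu, \sigma^{2}/n\bigr)
    \;\Longrightarrow\;
    \mu \mid E_1, E_2 \sim
    N\!\left(\frac{\mu_1/\sigma_1^{2} + n\bar{x}/\sigma^{2}}
    {1/\sigma_1^{2} + n/\sigma^{2}},\;
    \Bigl(\frac{1}{\sigma_1^{2}} + \frac{n}{\sigma^{2}}\Bigr)^{-1}\right).
    ```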

  3. A Tensor Statistical Model for Quantifying Dynamic Functional Connectivity.

    Science.gov (United States)

    Zhu, Yingying; Zhu, Xiaofeng; Kim, Minjeong; Yan, Jin; Wu, Guorong

    2017-06-01

    Functional connectivity (FC) has been widely investigated in many imaging-based neuroscience and clinical studies. Since the functional magnetic resonance imaging (fMRI) signal is only an indirect reflection of brain activity, it is difficult to accurately quantify FC strength based on signal correlation alone. To address this limitation, we propose a learning-based tensor model to derive high-sensitivity and high-specificity connectome biomarkers at the individual level from resting-state fMRI images. First, we propose a learning-based approach to estimate the intrinsic functional connectivity. In addition to the low-level region-to-region signal correlation, latent module-to-module connections are also estimated and used to provide high-level heuristics for measuring connectivity strength. Furthermore, a sparsity constraint is employed to automatically remove spurious connections, thus alleviating the issue of searching for an optimal threshold. Second, we integrate our learning-based approach with the sliding-window technique to further reveal the dynamics of functional connectivity. Specifically, we stack the functional connectivity matrices within each sliding window to form a 3D tensor where the third dimension denotes time. We then obtain the dynamic functional connectivity (dFC) for each individual subject by simultaneously estimating the within-sliding-window functional connectivity and characterizing the across-sliding-window temporal dynamics. Third, in order to enhance the robustness of the connectome patterns extracted from dFC, we extend the individual-based 3D tensors to a population-based 4D tensor (with the fourth dimension standing for the training subjects) and learn the statistics of connectome patterns via 4D tensor analysis. Since our 4D tensor model jointly (1) optimizes dFC for each training subject and (2) captures the principal connectome patterns, our statistical model gains more statistical power in representing new subjects than current state
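
    The tensor construction in the second step is easy to sketch (window length, stride and the toy signals below are arbitrary choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions, n_time = 10, 200
    bold = rng.normal(size=(n_regions, n_time))     # toy resting-state signals

    win, stride = 50, 10
    fc_slices = [np.corrcoef(bold[:, s:s + win])
                 for s in range(0, n_time - win + 1, stride)]
    dfc = np.stack(fc_slices, axis=2)               # regions x regions x windows
    print(dfc.shape)                                # (10, 10, 16)
    ```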

  4. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    Science.gov (United States)

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction-of-arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to scenes with limited data support and low signal-to-noise ratio (SNR). Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm yields more accurate DOA estimates than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.

  5. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First,

  6. A gentle introduction to Bayesian analysis : Applications to developmental research

    NARCIS (Netherlands)

    van de Schoot, R.; Kaplan, D.; Denissen, J.J.A.; Asendorpf, J.B.; Neyer, F.J.; van Aken, M.A.G.

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First,

  7. A defect in holographic interpretations of tensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Czech, Bartłomiej [Institute for Advanced Study,Princeton, NJ 08540 (United States); Nguyen, Phuc H.; Swaminathan, Sivaramakrishnan [Theory Group, Department of Physics and Texas Cosmology Center,The University of Texas at Austin,Austin, TX 78712 (United States)

    2017-03-16

    We initiate the study of how tensor networks reproduce properties of static holographic space-times, which are not locally pure anti-de Sitter. We consider geometries that are holographically dual to ground states of defect, interface and boundary CFTs and compare them to the structure of the requisite MERA networks predicted by the theory of minimal updates. When the CFT is deformed, certain tensors require updating. On the other hand, even identical tensors can contribute differently to estimates of entanglement entropies. We interpret these facts holographically by associating tensor updates to turning on non-normalizable modes in the bulk. In passing, we also clarify and complement existing arguments in support of the theory of minimal updates, propose a novel ansatz called rayed MERA that applies to a class of generalized interface CFTs, and analyze the kinematic spaces of the thin wall and AdS_3-Janus geometries.

  8. Sparse reconstruction using distribution agnostic bayesian matching pursuit

    KAUST Repository

    Masood, Mudassir; Al-Naffouri, Tareq Y.

    2013-01-01

    A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method computes Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic to signal statistics
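
    For orientation, a plain greedy matching pursuit baseline of the kind such methods build on; the paper's Bayesian, distribution-agnostic refinements are not reproduced here.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Greedy orthogonal matching pursuit: recover a k-sparse x from y = Ax."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    m, n, k = 40, 100, 3
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true
    print(np.max(np.abs(omp(A, y, k) - x_true)))    # near-zero recovery error
    ```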

  9. Improved Tensor-Based Singular Spectrum Analysis Based on Single Channel Blind Source Separation Algorithm and Its Application to Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2017-04-01

    To solve the problem of multi-fault blind source separation (BSS) in the case where the observed signals are underdetermined, a novel approach for single-channel blind source separation (SCBSS) based on improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract multi-fault features from a measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly unsatisfactory convergence of TSSA in many cases and the difficulty of accurately estimating the number of source signals. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrix using a first-order optimization approach instead of the original least-squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition - singular value decomposition - Bayesian information criterion) method, instead of the SVD in the conventional TSSA, is introduced. To validate the proposed method, we applied it to the analysis of a numerical simulation signal and multi-fault rolling bearing signals.

  10. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is highly stable for estimating a conceptual model's marginal likelihood: repeated estimates of a conceptual model's marginal likelihood by TIE have significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
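
    The two simplest estimators compared above are easy to sketch on a conjugate toy model whose evidence is known exactly; the normal-normal model below is an illustrative assumption, not the groundwater model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    y = rng.normal(0.5, 1.0, size=20)           # data; likelihood sd fixed at 1
    mu0, tau0 = 0.0, 2.0                        # prior: mu ~ N(mu0, tau0^2)
    n, ybar = len(y), y.mean()

    def loglik(mu):                             # log p(y | mu), vectorized over draws
        return stats.norm.logpdf(y[:, None], mu, 1.0).sum(axis=0)

    # Arithmetic mean estimator: average likelihood over prior draws
    ame = np.exp(loglik(rng.normal(mu0, tau0, 200_000))).mean()

    # Harmonic mean estimator: harmonic mean of likelihood over posterior draws
    v_post = 1.0 / (n + 1.0 / tau0**2)
    m_post = v_post * (n * ybar + mu0 / tau0**2)
    hme = 1.0 / np.exp(-loglik(rng.normal(m_post, np.sqrt(v_post), 200_000))).mean()

    # Exact evidence for this conjugate model, for comparison (HME is known
    # to be the less stable of the two, motivating stabilized variants)
    cov = np.eye(n) + tau0**2 * np.ones((n, n))
    exact = stats.multivariate_normal.pdf(y, mean=np.full(n, mu0), cov=cov)
    print(ame, hme, exact)
    ```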

  11. The energy–momentum tensor(s) in classical gauge theories

    Directory of Open Access Journals (Sweden)

    Daniel N. Blaschke

    2016-11-01

    We give an introduction to, and review of, the energy–momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space-time. For the canonical energy–momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge-invariant, symmetric energy–momentum tensor. The relationship with the Einstein–Hilbert tensor following from the coupling to a gravitational field is also discussed.

  12. Traffic speed data imputation method based on tensor completion.

    Science.gov (United States)

    Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue with a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High-accuracy Low-Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from the given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
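
    A simplified stand-in for the completion step: singular-value thresholding on each mode unfolding, averaged, with observed entries re-imposed. This is a generic low-rank tensor completion heuristic in the spirit of HaLRTC, not the exact HaLRTC iteration, and the data below are synthetic.

    ```python
    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape(full), 0, mode)

    def svt(M, tau):
        """Shrink singular values by tau (nuclear-norm proximal step)."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def complete(T_obs, mask, tau=0.5, n_iter=200):
        X = np.where(mask, T_obs, T_obs[mask].mean())
        for _ in range(n_iter):
            X = np.mean([fold(svt(unfold(X, m), tau), m, X.shape)
                         for m in range(X.ndim)], axis=0)
            X[mask] = T_obs[mask]          # keep observed "speeds" fixed
        return X

    rng = np.random.default_rng(0)
    shape = (10, 12, 8)                    # e.g. road x day x time-of-day
    T = sum(np.einsum('i,j,k->ijk', *[rng.normal(size=s) for s in shape])
            for _ in range(2))             # a rank-2 ground-truth tensor
    mask = rng.random(shape) < 0.4         # 40% of entries observed
    X = complete(T, mask)
    err = np.linalg.norm((X - T)[~mask]) / np.linalg.norm(T[~mask])
    print(f"relative error on missing entries: {err:.3f}")
    ```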

  13. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
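
    The flavor of approximation meant here, optimization plus simple algebra rather than integration, is of the Laplace/MAP type (a standard construction; the GUM-specific formulas are not reproduced):

    ```latex
    \hat{\theta} \;=\; \arg\max_{\theta}\ \bigl[\log p(x \mid \theta) + \log p(\theta)\bigr],
    \qquad
    \theta \mid x \;\approx\; N\!\Bigl(\hat{\theta},\
    \bigl[-\nabla^{2}_{\theta}\,\log p(\theta \mid x)\,\big|_{\theta=\hat{\theta}}\bigr]^{-1}\Bigr).
    ```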

  14. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage: unlike operational reliability, there may be initial failures, which are normally neglected in models of storage reliability. In this paper, a new integrated technique is proposed, in which a non-parametric measure based on E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function, to estimate and predict the storage reliability of products with possible initial failures. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including products unexamined before and during the storage process, are available for providing more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and traditional methods, examining the rationality of the assessment and prediction of storage reliability, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.
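
    A minimal sketch of the E-Bayesian ingredient (the hyperpriors and counts are invented for illustration): the Bayes estimate of a failure probability under a Beta(a, b) prior is averaged over a hyperprior on (a, b).

    ```python
    import numpy as np

    r, n = 0, 50        # hypothetical: no failures observed in 50 storage tests
    rng = np.random.default_rng(0)
    a = rng.uniform(0.0, 1.0, 100_000)   # assumed uniform hyperpriors on the
    b = rng.uniform(0.0, 5.0, 100_000)   # Beta(a, b) prior parameters
    p_bayes = (r + a) / (n + a + b)      # Bayes estimate under squared-error loss
    p_ebayes = p_bayes.mean()            # E-Bayesian estimate: hyperprior average
    print(p_ebayes)
    ```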

  15. A Bayesian framework for estimating moment magnitude and its uncertainty from macroseismic intensity measures

    Science.gov (United States)

    Kawabata, E.; Main, I. G.; Naylor, M.; Chandler, R. E.

    2016-12-01

    In moderate to low seismicity areas such as the UK, earthquakes represent a small but not negligible risk to sensitive structures such as nuclear power plants. As a part of the safety case in the planning and regulation of such structures, seismic activity must first be monitored and quantified to form a catalogue of past events. In a low or moderate seismicity zone, most of our knowledge of the most significant events comes from macroseismic intensity measures from the pre-instrumental period (before 1900). These historical records must then be combined and calibrated with modern analogue and digitally-recorded instrumental data on a common source magnitude scale, the most useful of which is the moment magnitude. The result is a unified catalogue that can be used for probabilistic seismic hazard analysis. An isoseismal map involves a set of contours that enclose the areas at which the event was felt at particular intensity values or higher, called felt areas. It has been common practice to draw these contours by hand with varying degrees of subjectivity. Here, we demonstrate a Bayesian method for constructing such maps objectively from macroseismic intensity measures and their observed locations. It involves using mathematical expressions to represent concentric ellipses and estimating their optimal parameters and uncertainties in a Bayesian framework. Inferred fault orientations in the UK are predominantly vertical, so the elliptical assumption is reasonable at least to first order or as a null hypothesis. Relevant physical constraints are used as priors where available. The resulting posterior distributions are used to calculate felt area at a given intensity, as well as a probability density function for the inferred epicentre. We then describe another Bayesian approach for deriving moment magnitude from felt areas based on their relationship and known constraints such as the frequency-magnitude distribution. The use of Bayesian inference allows us to quantify

  16. Efficient tensor completion for color image and video recovery: Low-rank tensor train

    OpenAIRE

    Bengua, Johann A.; Phien, Ho N.; Tuan, Hoang D.; Do, Minh N.

    2016-01-01

    This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via tensor tra...

  17. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.

  18. A Bayesian method to estimate the neutron response matrix of a single crystal CVD diamond detector

    International Nuclear Information System (INIS)

    Reginatto, Marcel; Araque, Jorge Guerrero; Nolte, Ralf; Zbořil, Miroslav; Zimbal, Andreas; Gagnon-Moisan, Francis

    2015-01-01

    Detectors made from artificial chemical vapor deposition (CVD) single-crystal diamond are very promising candidates for applications where high-resolution neutron spectrometry in very high neutron fluxes is required, for example in fusion research. We propose a Bayesian method to estimate the neutron response function of the detector for a continuous range of neutron energies (in our case, 10 MeV ≤ E_n ≤ 16 MeV) based on a few measurements with quasi-monoenergetic neutrons. This method is needed because a complete set of measurements is not available and the alternative approach of using responses based on Monte Carlo calculations is not feasible. Our approach uses Bayesian signal-background separation techniques and radial basis function interpolation methods. We present the analysis of data measured at the PTB accelerator facility PIAF. The method is quite general and can be applied to other particle detectors with similar characteristics

  19. Tensor completion and low-n-rank tensor recovery via convex optimization

    International Nuclear Information System (INIS)

    Gandy, Silvia; Yamada, Isao; Recht, Benjamin

    2011-01-01

    In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers

  20. Tensor eigenvalues and their applications

    CERN Document Server

    Qi, Liqun; Chen, Yannan

    2018-01-01

    This book offers an introduction to applications prompted by tensor analysis, especially by the spectral tensor theory developed in recent years. It covers applications of tensor eigenvalues in multilinear systems, exponential data fitting, tensor complementarity problems, and tensor eigenvalue complementarity problems. It also addresses higher-order diffusion tensor imaging, third-order symmetric and traceless tensors in liquid crystals, piezoelectric tensors, strong ellipticity for elasticity tensors, and higher-order tensors in quantum physics. This book is a valuable reference resource for researchers and graduate students who are interested in applications of tensor eigenvalues.

  1. Tensor Transpose and Its Properties

    OpenAIRE

    Pan, Ran

    2014-01-01

    Tensor transpose is a higher-order generalization of matrix transpose. In this paper, we use permutations and the symmetry group to define the tensor transpose. We then discuss the classification and composition of tensor transposes. Properties of the tensor transpose are studied in relation to tensor multiplication, tensor eigenvalues, tensor decompositions and tensor rank.
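
    In array terms, a tensor transpose is an axis permutation, and composition of transposes corresponds to composition of permutations; a small numpy illustration (not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.normal(size=(2, 3, 4))
    perm = (2, 0, 1)                       # a transpose = a permutation of the modes
    Tt = np.transpose(T, axes=perm)        # shape (4, 2, 3)

    # Composing two transposes equals one transpose by the composed permutation
    p, q = (1, 0, 2), (0, 2, 1)
    lhs = np.transpose(np.transpose(T, p), q)
    comp = tuple(p[i] for i in q)          # apply p first, then q
    assert np.array_equal(lhs, np.transpose(T, comp))
    ```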

  2. Stress-energy tensors for vector fields outside a static black hole

    International Nuclear Information System (INIS)

    Barrios, F.A.; Vaz, C.

    1989-01-01

    We obtain new, approximate stress-energy tensors to describe gauge fields in the neighborhood of a Schwarzschild black hole. We assume that the coefficient of ∇²R in the trace anomaly is correctly given by ζ-function regularization. Our approximation differs from that of Page and of Brown and Ottewill and relies upon a new, improved ansatz for the form of the stress-energy tensor in the ultrastatic optical metric of the black hole. The Israel-Hartle-Hawking thermal tensor is constructed to be regular on the horizon and to possess the correct asymptotic behavior. Our approximation of Unruh's tensor is likewise constructed to be regular on the future horizon and to exhibit a luminosity which agrees with Page's numerically obtained value. Geometric expressions for the approximate tensors are given, and the approximate energy density of the thermal tensor on the horizon is compared with recent numerical estimates

  3. Bayesian inference for psychology. Part I : Theoretical advantages and practical ramifications

    NARCIS (Netherlands)

    Wagenmakers, E.-J.; Marsman, M.; Jamil, T.; Ly, A.; Verhagen, J.; Love, J.; Selker, R.; Gronau, Q.F.; Šmíra, M.; Epskamp, S.; Matzke, D.; Rouder, J.N.; Morey, R.D.

    2018-01-01

    Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete

  4. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  5. Sparse Variational Bayesian SAGE Algorithm With Application to the Estimation of Multipath Wireless Channels

    DEFF Research Database (Denmark)

    Shutin, Dmitriy; Fleury, Bernard Henri

    2011-01-01

    In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless...... channels. The application context of the algorithm considered in this contribution is parameter estimation from channel sounding measurements for radio channel modeling purpose. The new sparse VB-SAGE algorithm extends the classical SAGE algorithm in two respects: i) by monotonically minimizing...... parametric sparsity priors for the weights of the multipath components. We revisit the Gaussian sparsity priors within the sparse VB-SAGE framework and extend the results by considering Laplace priors. The structure of the VB-SAGE algorithm allows for an analytical stability analysis of the update expression...

  6. Tensor force and debye screening in quarkonium-type mesons

    International Nuclear Information System (INIS)

    Kovacs, L.B.; Kovacs, T.G.; Lovas, I.

    1990-01-01

    We use a non-relativistic quantum-mechanical model to investigate the effect of a screening plasma on two quarkonium-type mesons: the charmonium and the bottomonium. The stability of these mesons in the plasma is estimated in two cases: including the tensor and spin-orbit terms in the potential and without these terms. It turns out that while the bottomonium is somewhat stabilized by the tensor force, the charmonium becomes less stable due to this modification of the potential. Thus the charmonium seems to be a more sensitive probe of quark-gluon plasma formation than it was thought to be without including the tensor force. (Authors)

  7. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    Science.gov (United States)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.

  8. A Bayesian Framework for Estimating the Concordance Correlation Coefficient Using Skew-elliptical Distributions.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2018-04-05

    The concordance correlation coefficient (CCC) is a widely used scaled index in the study of agreement. In this article, we propose estimating the CCC by a unified Bayesian framework that can (1) accommodate symmetric or asymmetric and light- or heavy-tailed data; (2) select model from several candidates; and (3) address other issues frequently encountered in practice such as confounding covariates and missing data. The performance of the proposal was studied and demonstrated using simulated as well as real-life biomarker data from a clinical study of an insomnia drug. The implementation of the proposal is accessible through a package in the Comprehensive R Archive Network.

  9. A Bayesian Double Fusion Model for Resting-State Brain Connectivity Using Joint Functional and Structural Data

    KAUST Repository

    Kang, Hakmook

    2017-03-20

    Current approaches separately analyze concurrently acquired diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) data. The primary limitation of these approaches is that they do not take advantage of the information from DTI that could potentially enhance estimation of resting-state functional connectivity (FC) between brain regions. To overcome this limitation, we develop a Bayesian hierarchical spatiotemporal model that incorporates structural connectivity (SC) into estimating FC. In our proposed approach, SC based on DTI data is used to construct an informative prior for FC based on resting-state fMRI data through the Cholesky decomposition. Simulation studies showed that jointly modeling the two data types produced significantly reduced mean squared errors compared to the standard approach of analyzing each modality separately. We applied our model to resting-state DTI and fMRI data collected to estimate FC between the brain regions that were hypothetically important in the origination and spread of temporal lobe epilepsy seizures. Our analysis concludes that the proposed model achieves smaller false positive rates and is much more robust to data decimation than the conventional approach.

  10. Bayesian estimation of extreme flood quantiles using a rainfall-runoff model and a stochastic daily rainfall generator

    Science.gov (United States)

    Costa, Veber; Fernandes, Wilson

    2017-11-01

    Extreme flood estimation has been a key research topic in hydrological sciences. Reliable estimates of such events are necessary as structures for flood conveyance are continuously evolving in size and complexity and, as a result, their failure-associated hazards become more and more pronounced. Due to this fact, several estimation techniques intended to improve flood frequency analysis and reduce uncertainty in extreme quantile estimation have been addressed in the literature in the last decades. In this paper, we develop a Bayesian framework for the indirect estimation of extreme flood quantiles from rainfall-runoff models. In the proposed approach, an ensemble of long daily rainfall series is simulated with a stochastic generator, which models extreme rainfall amounts with an upper-bounded distribution function, namely, the 4-parameter lognormal model. The rationale behind the generation model is that physical limits for rainfall amounts, and consequently for floods, exist and, by imposing an appropriate upper bound on the probabilistic model, more plausible estimates can be obtained for those rainfall quantiles with very low exceedance probabilities. Daily rainfall time series are converted into streamflows by routing each realization of the synthetic ensemble through a conceptual hydrologic model, the Rio Grande rainfall-runoff model. Calibration of parameters is performed through a nonlinear regression model, by means of the specification of a statistical model for the residuals that is able to accommodate autocorrelation, heteroscedasticity and nonnormality. By combining the outlined steps in a Bayesian structure of analysis, one is able to properly summarize the resulting uncertainty and estimate more accurate credible intervals for a set of flood quantiles of interest. The method for indirect estimation of extreme floods was applied to the American River catchment, at the Folsom dam, in the state of California, USA. Results show that most floods

  11. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    Science.gov (United States)

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the research triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.

  12. Comparison of Multi-Tensor Diffusion Models' Performance for White Matter Integrity Estimation in Chronic Stroke

    Directory of Open Access Journals (Sweden)

    Olena G. Filatova

    2018-04-01

    Full Text Available Better insight into white matter (WM) alterations after stroke onset could help to understand the underlying recovery mechanisms and improve future interventions. MR diffusion imaging enables the assessment of such changes. Our goal was to investigate the relation of WM diffusion characteristics derived from diffusion models of increasing complexity with the motor function of the upper limb. Moreover, we aimed to evaluate the variation of such characteristics across different WM structures of chronic stroke patients in comparison to healthy subjects. Subjects were scanned with a two b-value diffusion-weighted MRI protocol to exploit multiple diffusion models: single tensor, single tensor with isotropic compartment, bi-tensor model, and bi-tensor with isotropic compartment. From each model we derived the mean tract fractional anisotropy (FA), mean (MD), radial (RD) and axial (AD) diffusivities outside the lesion site based on a WM tracts atlas. Asymmetry of these measures was correlated with the Fugl-Meyer upper extremity assessment (FMA) score and compared between patient and control groups. Eighteen chronic stroke patients and eight age-matched healthy individuals participated in the study. Significant correlation of the outcome measures with the clinical scores of stroke recovery was found. The correlation of corticospinal tract FA asymmetry with FMA was lowest for the single tensor model (r = −0.3, p = 0.2), whereas the other models yielded results in the range r = −0.79 to −0.81 with p = 4E-5 to 8E-5. The corticospinal tract and superior longitudinal fasciculus showed the most alterations in our patient group relative to controls. Multiple compartment models yielded superior correlation of the diffusion measures with FMA compared to the single tensor model.

  13. Traffic Speed Data Imputation Method Based on Tensor Completion

    Directory of Open Access Journals (Sweden)

    Bin Ran

    2015-01-01

    Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from the observed entries, which may be noisy given the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
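
    As a rough illustration of the idea, the sketch below implements a simplified HaLRTC-style completion loop in Python: each mode unfolding is pushed toward low rank by singular value thresholding, the results are averaged, and the observed entries are re-imposed. The synthetic traffic tensor, threshold, and iteration count are hypothetical placeholders, not the paper's settings.

```python
# Simplified HaLRTC-style tensor completion (illustrative sketch; parameter
# values and the synthetic traffic tensor below are hypothetical).
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def tensor_complete(X, observed, tau=1.0, n_iter=200):
    """Fill missing entries by averaging low-rank projections of all mode
    unfoldings while keeping the observed entries fixed."""
    T = np.where(observed, X, X[observed].mean())
    for _ in range(n_iter):
        avg = sum(fold(svt(unfold(T, m), tau), m, T.shape)
                  for m in range(T.ndim)) / T.ndim
        T = np.where(observed, X, avg)  # enforce fidelity on known entries
    return T

# Hypothetical low-rank day x time-of-day x segment speed tensor, 40% missing.
rng = np.random.default_rng(0)
U1, U2, U3 = rng.random((20, 3)), rng.random((24, 3)), rng.random((5, 3))
true = np.einsum('ir,jr,kr->ijk', U1, U2, U3)
mask = rng.random(true.shape) > 0.4
est = tensor_complete(np.where(mask, true, 0.0), mask)
print("RMSE on missing entries:", np.sqrt(((est - true)[~mask] ** 2).mean()))
```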

  14. An Approach to Structure Determination and Estimation of Hierarchical Archimedean Copulas and its Application to Bayesian Classification

    Czech Academy of Sciences Publication Activity Database

    Górecki, J.; Hofert, M.; Holeňa, Martin

    2016-01-01

    Roč. 46, č. 1 (2016), s. 21-59 ISSN 0925-9902 R&D Projects: GA ČR GA13-17187S Grant - others:Slezská univerzita v Opavě(CZ) SGS/21/2014 Institutional support: RVO:67985807 Keywords : Copula * Hierarchical archimedean copula * Copula estimation * Structure determination * Kendall’s tau * Bayesian classification Subject RIV: IN - Informatics, Computer Science Impact factor: 1.294, year: 2016

  15. Bayesian image restoration, using configurations

    OpenAIRE

    Thorarinsdottir, Thordis

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...

  16. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    Science.gov (United States)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  17. The tensor distribution function.

    Science.gov (United States)

    Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M

    2009-01-01

    Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.

  18. The tensor rank of tensor product of two three-qubit W states is eight

    OpenAIRE

    Chen, Lin; Friedland, Shmuel

    2017-01-01

    We show that the tensor rank of the tensor product of two three-qubit W states is not less than eight. Combining this result with the recent result of M. Christandl, A. K. Jensen, and J. Zuiddam that the tensor rank of the tensor product of two three-qubit W states is at most eight, we deduce that the tensor rank of the tensor product of two three-qubit W states is eight. We also construct an upper bound on the tensor rank of the tensor product of many three-qubit W states.

  19. Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.

    Science.gov (United States)

    Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N

    2017-05-01

    This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first one, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on the TT rank. The second one is derived from a multilinear matrix factorization model approximating the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
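
    For context, the sketch below shows the TT-SVD procedure that computes a tensor train with truncated ranks, the representation underlying SiLRTC-TT and TMac-TT. It is an illustrative sketch rather than the paper's completion algorithms, and the test tensor and rank cap are arbitrary.

```python
# TT-SVD: compute a tensor-train decomposition with truncated ranks
# (illustrative sketch; the paper's SiLRTC-TT/TMac-TT build on this
# representation rather than on this exact routine).
import numpy as np

def tt_svd(T, max_rank):
    """Decompose T into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    shape = T.shape
    cores, r_prev = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))  # truncate the TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.squeeze(axis=(0, T.ndim - 1))

# Relative error of a rank-5 TT truncation of an arbitrary 4-way tensor.
X = np.random.default_rng(1).random((8, 9, 10, 11))
cores = tt_svd(X, max_rank=5)
print(np.linalg.norm(tt_reconstruct(cores) - X) / np.linalg.norm(X))
```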

  20. Bowen-York tensors

    International Nuclear Information System (INIS)

    Beig, Robert; Krammer, Werner

    2004-01-01

    For a conformally flat 3-space, we derive a family of linear second-order partial differential operators which sends vectors into trace-free, symmetric 2-tensors. These maps, which are parametrized by conformal Killing vectors on the 3-space, are such that the divergence of the resulting tensor field depends only on the divergence of the original vector field. In particular, these maps send source-free electric fields into TT tensors. Moreover, if the original vector field is the Coulomb field on R³\{0}, the resulting tensor fields on R³\{0} are nothing but the family of TT tensors originally written by Bowen and York.

  1. Bayesian Correlation Analysis for Sequence Count Data.

    Directory of Open Access Journals (Sweden)

    Daniel Sánchez-Taltavull

    Full Text Available Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities' measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low, especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities' signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset.
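
    A minimal sketch of the general idea, under a Gamma-Poisson model that is an assumption here rather than the paper's exact formulation: each entity's rate posterior is sampled, and the correlation is averaged over posterior draws, so noisy low-count measurements contribute wider posteriors and weaker correlations. The prior parameters and toy counts are hypothetical.

```python
# Hedged sketch of a Bayesian correlation for sequencing counts (not the
# paper's exact estimator). Gamma(a0, b0) priors and the toy counts/depths
# are assumptions.
import numpy as np

def bayesian_correlation(x, y, depths, a0=1.0, b0=1.0, n_samples=2000, seed=0):
    """x, y: count vectors for two entities across experiments;
    depths: per-experiment sequencing depths (Poisson exposures)."""
    rng = np.random.default_rng(seed)
    # Poisson(depth * rate) likelihood with a Gamma(a0, b0) prior gives
    # posterior rate_i ~ Gamma(a0 + count_i, scale = 1 / (b0 + depth_i)).
    rx = rng.gamma(a0 + x, 1.0 / (b0 + depths), size=(n_samples, x.size))
    ry = rng.gamma(a0 + y, 1.0 / (b0 + depths), size=(n_samples, y.size))
    # Average the correlation over posterior draws: uncertain (low-count)
    # measurements spread out and pull the estimate toward zero.
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(rx, ry)])

depths = np.array([1e6, 5e5, 2e6, 1e6, 8e5])
x = np.array([10, 4, 25, 12, 7])
y = np.array([8, 3, 30, 10, 6])
print("Bayesian correlation:", bayesian_correlation(x, y, depths))
print("naive Pearson:", np.corrcoef(x / depths, y / depths)[0, 1])
```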

  2. An introduction to Bayesian statistics in health psychology

    NARCIS (Netherlands)

    Depaoli, Sarah; Rus, Holly; Clifton, James; van de Schoot, A.G.J.; Tiemensma, Jitske

    2017-01-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of Health Psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation

  3. Bayesian Dark Knowledge

    NARCIS (Netherlands)

    Korattikara, A.; Rathod, V.; Murphy, K.; Welling, M.; Cortes, C.; Lawrence, N.D.; Lee, D.D.; Sugiyama, M.; Garnett, R.

    2015-01-01

    We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple

  4. Human dental age estimation using third molar developmental stages: does a Bayesian approach outperform regression models to discriminate between juveniles and adults?

    Science.gov (United States)

    Thevissen, P W; Fieuws, S; Willems, G

    2010-01-01

    Dental age estimation methods based on radiologically detected third molar developmental stages are implemented in forensic age assessments of young unaccompanied asylum seekers to discriminate between juveniles and adults. Accurate and unbiased age estimates combined with appropriately quantified uncertainties are the required properties for accurate forensic reporting. In this study, a subset of 910 individuals uniformly distributed in age between 16 and 22 years was selected from an existing dataset collected by Gunst et al. containing 2,513 panoramic radiographs with known third molar developmental stages of Belgian Caucasian men and women. This subset was randomly split into a training set, used to develop a classical regression analysis and a Bayesian model for the multivariate distribution of the third molar developmental stages conditional on age, and a test set, used to assess the performance of both models. The aim of this study was to verify whether the Bayesian approach differentiates the age of maturity more precisely and removes the bias that disadvantages the systematically overestimated young individuals. The Bayesian model discriminates subjects older than 18 years more appropriately and produces more meaningful prediction intervals, but does not strongly outperform the classical approaches.

  5. Bayesian estimation of the hydraulic and solute transport properties of a small-scale unsaturated soil column

    Directory of Open Access Journals (Sweden)

    Moreira Paulo H. S.

    2016-03-01

    Full Text Available In this study the hydraulic and solute transport properties of an unsaturated soil were estimated simultaneously from a relatively simple small-scale laboratory column infiltration/outflow experiment. As governing equations we used the Richards equation for variably saturated flow and a physical non-equilibrium dual-porosity type formulation for solute transport. A Bayesian parameter estimation approach was used in which the unknown parameters were estimated with the Markov Chain Monte Carlo (MCMC) method through implementation of the Metropolis-Hastings algorithm. Sensitivity coefficients were examined in order to determine the most meaningful measurements for identifying the unknown hydraulic and transport parameters. Results obtained using the measured pressure head and solute concentration data collected during the unsaturated soil column experiment revealed the robustness of the proposed approach.
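
    A minimal Metropolis-Hastings sketch of the estimation step is given below. The forward model is a hypothetical exponential outflow curve standing in for the Richards/dual-porosity solver, and the prior, step size, and noise level are illustrative assumptions.

```python
# Minimal Metropolis-Hastings sketch (the exponential forward model, prior,
# noise level, and step size are hypothetical stand-ins for the study's
# Richards/dual-porosity solver and setup).
import numpy as np

def metropolis_hastings(log_post, theta0, step, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp, chain = log_post(theta), []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Hypothetical outflow data from a two-parameter exponential decay model.
t = np.linspace(0.1, 10.0, 50)
true_theta, sigma = np.array([2.0, 0.5]), 0.05
data = true_theta[0] * np.exp(-true_theta[1] * t) \
       + sigma * np.random.default_rng(1).standard_normal(t.size)

def log_post(theta):
    if np.any(theta <= 0.0):
        return -np.inf  # flat prior restricted to positive parameters
    resid = data - theta[0] * np.exp(-theta[1] * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2  # Gaussian log-likelihood

chain = metropolis_hastings(log_post, [1.0, 1.0], step=0.02)
print("posterior means:", chain[len(chain) // 2:].mean(axis=0))  # drop burn-in
```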

  6. Bayesian reliability analysis for non-periodic inspection with estimation of uncertain parameters; Bayesian shinraisei kaiseki wo tekiyoshita hiteiki kozo kensa ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.

    1996-12-31

    Risk-assessed structural positions in the pressurized fuselage of a transport-type aircraft designed to damage-tolerance criteria are taken up as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can estimate a proper non-periodic inspection schedule while also estimating proper values for the uncertain factors. As a result, the time period in which fatigue cracks are generated was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that are thought reasonable, and a proper inspection schedule using these values, in spite of placing the fatigue crack progress expression in a very simple form and treating both factors as uncertain. The effectiveness of the present analysis method was thus verified. This study has at the same time discussed, from different viewpoints, the structural positions, the modeling of fatigue cracks generated and developing at those positions, the conditions for failure, the damage factors, and the capability of the inspection. This reliability analysis method is thought to be effective also for other structures such as offshore structures. 18 refs., 8 figs., 1 tab.

  7. Bayesian reliability analysis for non-periodic inspection with estimation of uncertain parameters; Bayesian shinraisei kaiseki wo tekiyoshita hiteiki kozo kensa ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Itagaki, H [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H; Ito, S [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M

    1997-12-31

    Risk-assessed structural positions in the pressurized fuselage of a transport-type aircraft designed to damage-tolerance criteria are taken up as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can estimate a proper non-periodic inspection schedule while also estimating proper values for the uncertain factors. As a result, the time period in which fatigue cracks are generated was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that are thought reasonable, and a proper inspection schedule using these values, in spite of placing the fatigue crack progress expression in a very simple form and treating both factors as uncertain. The effectiveness of the present analysis method was thus verified. This study has at the same time discussed, from different viewpoints, the structural positions, the modeling of fatigue cracks generated and developing at those positions, the conditions for failure, the damage factors, and the capability of the inspection. This reliability analysis method is thought to be effective also for other structures such as offshore structures. 18 refs., 8 figs., 1 tab.

  8. Bayesian `hyper-parameters' approach to joint estimation: the Hubble constant from CMB measurements

    Science.gov (United States)

    Lahav, O.; Bridle, S. L.; Hobson, M. P.; Lasenby, A. N.; Sodré, L.

    2000-07-01

    Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalize this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint χ² function a set of `hyper-parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the hyper-parameters, is very simple: instead of minimizing Σ_j χ²_j (where χ²_j is the chi-squared per data set j), we propose to minimize Σ_j N_j ln(χ²_j) (where N_j is the number of data points per data set j). We illustrate the method by estimating the Hubble constant H0 from different sets of recent cosmic microwave background (CMB) experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang). The approach can be generalized for combinations of cosmic probes, and for other priors on the hyper-parameters.
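
    A tiny numerical sketch of the proposed objective, with two hypothetical straight-line "probes" standing in for the CMB likelihoods:

```python
# Sketch of the hyper-parameter objective: minimize sum_j N_j * ln(chi2_j)
# instead of sum_j chi2_j. The two linear data sets below are hypothetical
# stand-ins for the CMB experiments.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
H0_true = 70.0
xs, sigmas = [np.linspace(0.1, 1.0, 30), np.linspace(0.1, 1.0, 10)], [0.5, 2.0]
datasets = [(x, H0_true * x + s * rng.standard_normal(x.size), s)
            for x, s in zip(xs, sigmas)]

def chi2(H0, x, y, sigma):
    return np.sum(((y - H0 * x) / sigma) ** 2)

def hyper_objective(p):
    # Marginalizing the weights with log-uniform priors yields N_j ln(chi2_j).
    return sum(x.size * np.log(chi2(p[0], x, y, s)) for x, y, s in datasets)

print("H0 estimate:", minimize(hyper_objective, x0=[60.0]).x[0])
```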

  9. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    Science.gov (United States)

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and use of technology is re-examined in this study, in the context of e-learning via Facebook, using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the discrepancy are deliberated.

  10. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    Directory of Open Access Journals (Sweden)

    Hashem Salarzadeh Jenatabadi

    Full Text Available Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and use of technology is re-examined in this study, in the context of e-learning via Facebook, using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the discrepancy are deliberated.

  11. Incorporating Parameter Uncertainty in Bayesian Segmentation Models: Application to Hippocampal Subfield Volumetry

    DEFF Research Database (Denmark)

    Iglesias, J. E.; Sabuncu, M. R.; Van Leemput, Koen

    2012-01-01

    Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian anal...

  12. Bayesian analysis of rare events

    Energy Technology Data Exchange (ETDEWEB)

    Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
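
    The rejection-sampling reinterpretation at the core of BUS can be sketched in a few lines: draw θ from the prior together with a uniform u, and accept when u ≤ L(θ)/c for a constant c bounding the likelihood. The Gaussian prior and likelihood below are hypothetical; practical BUS replaces raw rejection with FORM, IS, or Subset Simulation as described in the abstract.

```python
# Rejection-sampling sketch of the BUS idea (toy Gaussian prior/likelihood;
# production BUS replaces raw rejection with FORM, IS, or Subset Simulation).
import numpy as np

rng = np.random.default_rng(3)

def likelihood(theta, y_obs=1.5, sigma=0.5):
    return np.exp(-0.5 * ((y_obs - theta) / sigma) ** 2)

c = 1.0                               # valid bound: the likelihood peaks at 1
theta = rng.standard_normal(200_000)  # prior: standard normal
u = rng.random(200_000)
posterior = theta[u <= likelihood(theta) / c]  # accepted draws ~ posterior
print("posterior mean:", posterior.mean(),
      "acceptance rate:", posterior.size / theta.size)
```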

  13. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Peng, E-mail: peng@ices.utexas.edu [The Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 East 24th Street, Stop C0200, Austin, TX 78712-1229 (United States); Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch [Seminar für Angewandte Mathematik, Eidgenössische Technische Hochschule, Römistrasse 101, CH-8092 Zürich (Switzerland)

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data

  14. Harmonic d-tensors

    Energy Technology Data Exchange (ETDEWEB)

    Hohmann, Manuel [Physikalisches Institut, Universitaet Tartu (Estonia)

    2016-07-01

    Tensor harmonics are a useful mathematical tool for finding solutions to differential equations which transform under a particular representation of the rotation group SO(3). In order to make use of this tool also in the setting of Finsler geometry, where the objects of relevance are d-tensors instead of tensors, we construct a set of d-tensor harmonics for both SO(3) and SO(4) symmetries and show how these can be used for calculations in Finsler geometry and gravity.

  15. A Bayesian evidence synthesis approach to estimate disease prevalence in hard-to-reach populations: hepatitis C in New York City.

    Science.gov (United States)

    Tan, Sarah; Makela, Susanna; Heller, Daliah; Konty, Kevin; Balter, Sharon; Zheng, Tian; Stark, James H

    2018-06-01

    Existing methods to estimate the prevalence of chronic hepatitis C (HCV) in New York City (NYC) are limited in scope and fail to assess hard-to-reach subpopulations with highest risk such as injecting drug users (IDUs). To address these limitations, we employ a Bayesian multi-parameter evidence synthesis model to systematically combine multiple sources of data, account for bias in certain data sources, and provide unbiased HCV prevalence estimates with associated uncertainty. Our approach improves on previous estimates by explicitly accounting for injecting drug use and including data from high-risk subpopulations such as the incarcerated, and is more inclusive, utilizing ten NYC data sources. In addition, we derive two new equations to allow age at first injecting drug use data for former and current IDUs to be incorporated into the Bayesian evidence synthesis, a first for this type of model. Our estimated overall HCV prevalence as of 2012 among NYC adults aged 20-59 years is 2.78% (95% CI 2.61-2.94%), which represents between 124,900 and 140,000 chronic HCV cases. These estimates suggest that HCV prevalence in NYC is higher than previously indicated from household surveys (2.2%) and the surveillance system (2.37%), and that HCV transmission is increasing among young injecting adults in NYC. An ancillary benefit from our results is an estimate of current IDUs aged 20-59 in NYC: 0.58% or 27,600 individuals.

  16. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    Science.gov (United States)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  17. Current density tensors

    Science.gov (United States)

    Lazzeretti, Paolo

    2018-04-01

    It is shown that nonsymmetric second-rank current density tensors, related to the current densities induced by magnetic fields and nuclear magnetic dipole moments, are fundamental properties of a molecule. Together with magnetizability, nuclear magnetic shielding, and nuclear spin-spin coupling, they completely characterize its response to magnetic perturbations. Gauge invariance, resolution into isotropic, deviatoric, and antisymmetric parts, and contributions of current density tensors to magnetic properties are discussed. The components of the second-rank tensor properties are rationalized via relationships explicitly connecting them to the direction of the induced current density vectors and to the components of the current density tensors. The contribution of the deviatoric part to the average value of magnetizability, nuclear shielding, and nuclear spin-spin coupling, uniquely determined by the antisymmetric part of current density tensors, vanishes identically. The physical meaning of isotropic and anisotropic invariants of current density tensors has been investigated, and the connection between anisotropy magnitude and electron delocalization has been discussed.

  18. Bayesian fuzzy logic-based estimation of electron cyclotron heating (ECH) power deposition in MHD control systems

    Energy Technology Data Exchange (ETDEWEB)

    Davoudi, Mehdi, E-mail: mehdi.davoudi@polimi.it [Department of Electrical and Computer Engineering, Buein Zahra Technical University, Buein Zahra, Qazvin (Iran, Islamic Republic of); Davoudi, Mohsen, E-mail: davoudi@eng.ikiu.ac.ir [Department of Electrical Engineering, Imam Khomeini International University, Qazvin, 34148-96818 (Iran, Islamic Republic of)

    2017-06-15

    Highlights: • A couple of algorithms to diagnose whether Electron Cyclotron Heating (ECH) power is deposited properly on the expected deposition minor radius are proposed. • The algorithms are based on Bayesian theory and fuzzy logic. • The algorithms are tested on off-line experimental data acquired from the Frascati Tokamak Upgrade (FTU), Frascati, Italy. • Uncertainties and evidences derived from the combination of online information, formed by the measured diagnostic data, and the prior information are also estimated. Abstract: In thermonuclear fusion systems, new plasma control systems use measured on-line information acquired from different sensors together with prior information obtained from predictive plasma models in order to stabilize magnetohydrodynamic (MHD) activity in a tokamak. Suppression of plasma instabilities is a key issue for improving the confinement time of controlled thermonuclear fusion with tokamaks. This paper proposes a couple of algorithms, based on Bayesian theory and fuzzy logic, to diagnose whether Electron Cyclotron Heating (ECH) power is deposited properly on the expected deposition minor radius (r_DEP). Both algorithms also estimate uncertainties and evidences derived from the combination of the online information formed by the measured diagnostic data and the prior information. The algorithms have been employed on a set of off-line ECE channel data acquired from experimental shot number 21364 at the Frascati Tokamak Upgrade (FTU), Frascati, Italy.

  19. Sparse reconstruction using distribution agnostic bayesian matching pursuit

    KAUST Repository

    Masood, Mudassir

    2013-11-01

    A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic to signal statistics and utilizes a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. The method utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean-square error (MMSE) estimate of the sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator.
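
    A hedged sketch of a greedy Bayesian-style pursuit in this spirit follows (simplified; not the paper's order-recursive algorithm): supports are grown by residual correlation, and the coefficients on the current support are the Gaussian MMSE estimate given assumed noise and prior variances, which the paper would instead estimate from the data.

```python
# Simplified greedy Bayesian-style matching pursuit (illustrative; not the
# paper's order-recursive algorithm). Noise and prior variances are assumed
# known here, whereas the paper estimates such statistics from the data.
import numpy as np

def greedy_bayes_pursuit(A, y, noise_var, prior_var, k_max):
    n, m = A.shape
    support, residual, x_s = [], y.copy(), np.array([])
    for _ in range(k_max):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        As = A[:, support]
        # MMSE estimate under x_S ~ N(0, prior_var I), noise ~ N(0, noise_var I)
        G = As.T @ As + (noise_var / prior_var) * np.eye(len(support))
        x_s = np.linalg.solve(G, As.T @ y)
        residual = y - As @ x_s
        if residual @ residual < n * noise_var:  # stop near the noise floor
            break
    x = np.zeros(m)
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 200)) / np.sqrt(60)  # normalized sensing matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(60)
print("recovered support:",
      np.nonzero(greedy_bayes_pursuit(A, y, 0.05 ** 2, 1.0, 10))[0])
```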

  20. First- and Second-level Bayesian Inference of Flow Resistivity of Sound Absorber and Room’s Influence

    DEFF Research Database (Denmark)

    Choi, Sang-Hyeon; Lee, Ikjin; Jeong, Cheol-Ho

    2016-01-01

    The Sabine absorption coefficient, deduced from reverberation time measurements via the Sabine equation, is a widely used quantity. First- and second-level Bayesian analyses are used to estimate the flow resistivity of a sound absorber and the influences of the test chambers from Sabine absorption...... coefficients measured in 13 different reverberation chambers. The first-level Bayesian analysis is more general than the second-level Bayesian analysis. A sharper posterior distribution can be acquired by the second-level Bayesian analysis than by the first-level Bayesian analysis, because more data...... are used to set a more reliable prior distribution. The room influences estimated by the first- and second-level Bayesian analyses are similar to those estimated by mean absolute error minimization....

  1. Bayesian methods to estimate urban growth potential

    Science.gov (United States)

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.

    2017-01-01

    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions: the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  2. Review of bayesian statistical analysis methods for cytogenetic radiation biodosimetry, with a practical example

    International Nuclear Information System (INIS)

    Ainsbury, Elizabeth A.; Lloyd, David C.; Rothkamm, Kai; Vinnikov, Volodymyr A.; Maznyk, Nataliya A.; Puig, Pedro; Higueras, Manuel

    2014-01-01

    Classical methods of assessing the uncertainty associated with radiation doses estimated using cytogenetic techniques are now extremely well defined. However, several authors have suggested that a Bayesian approach to uncertainty estimation may be more suitable for cytogenetic data, which are inherently stochastic in nature. The Bayesian analysis framework focuses on identification of probability distributions (for the yield of aberrations or the estimated dose), which also means that uncertainty is an intrinsic part of the analysis, rather than an 'afterthought'. This paper reviews Bayesian, as well as some more advanced classical, data analysis methods for radiation cytogenetics that have been proposed in the literature. A practical overview of Bayesian cytogenetic dose estimation is also presented, with worked examples from the literature. (authors)

  3. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning

    International Nuclear Information System (INIS)

    Mugnai, Mauro L.; Elber, Ron

    2015-01-01

    We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system—the diffusion along the backbone torsions of a solvated alanine dipeptide.
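
    The overdamped-Langevin validation step can be illustrated directly: the second Kramers-Moyal coefficient of binned trajectory increments recovers the diffusion coefficient. The one-dimensional harmonic model below is a hypothetical stand-in, not the Milestoning machinery itself.

```python
# Kramers-Moyal estimate of the diffusion coefficient from an overdamped
# Langevin trajectory in a 1D harmonic well (hypothetical toy model, not
# the Milestoning machinery itself).
import numpy as np

rng = np.random.default_rng(8)
dt, n, D = 1e-3, 500_000, 0.7
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    drift = -D * x[i]  # -beta * D * U'(x) with U = x^2 / 2 and beta = 1
    x[i + 1] = x[i] + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()

dx = np.diff(x)
bins = np.linspace(-2.0, 2.0, 21)
idx = np.digitize(x[:-1], bins)  # bin increments by starting coordinate
for b in (5, 10, 15):            # a few interior bins
    sel = idx == b
    # Second Kramers-Moyal coefficient: Var(dx | x in bin) / (2 dt) ~ D(x)
    print(f"bin {b}: D ≈ {dx[sel].var() / (2 * dt):.3f} (true D = {D})")
```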

  4. Partition-based Collaborative Tensor Factorization for POI Recommendation

    Institute of Scientific and Technical Information of China (English)

    Wenjing Luan; Guanjun Liu; Changjun Jiang; Liang Qi

    2017-01-01

    The rapid development of location-based social networks (LBSNs) provides people with an opportunity to better understand their mobility behavior, which enables them to decide their next location. For example, it can help travelers to choose where to go next, or recommend salesmen the most potential places to deliver advertisements or sell products. In this paper, a method for recommending points of interest (POIs) is proposed based on a collaborative tensor factorization (CTF) technique. Firstly, a generalized objective function is constructed for collaboratively factorizing a tensor with several feature matrices. Secondly, a 3-mode tensor is used to model all users' check-in behaviors, and three feature matrices are extracted to characterize the time distribution, category distribution and POI correlation, respectively. Thirdly, each user's preference for a POI at a specific time can be estimated by using CTF. In order to further improve the recommendation accuracy, PCTF (Partition-based CTF) is proposed to fill the missing entries of a tensor after clustering its every mode. Experiments on a real check-in database show that the proposed method can provide more accurate location recommendation.
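
    A minimal CP factorization sketch via alternating least squares for a 3-mode check-in tensor is shown below. It is illustrative only, since PCTF additionally couples feature matrices and clusters each mode; the tensor dimensions and rank are hypothetical.

```python
# Plain CP (CANDECOMP/PARAFAC) factorization via alternating least squares
# for a 3-mode check-in tensor (illustrative; PCTF additionally couples
# feature matrices and clusters each mode). All sizes are hypothetical.
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_als(X, rank, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.random((s, rank)) for s in X.shape)
    X1 = X.reshape(X.shape[0], -1)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(X.shape[1], -1)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(X.shape[2], -1)  # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Hypothetical user x time-slot x POI-category count tensor.
X = np.random.default_rng(5).poisson(1.0, size=(50, 24, 30)).astype(float)
A, B, C = cp_als(X, rank=8)
scores = A[3] @ (B[10] * C).T  # user 3, time slot 10: scores over categories
print("top POI categories:", np.argsort(scores)[-3:][::-1])
```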

  5. Estimating Population Parameters using the Structured Serial Coalescent with Bayesian MCMC Inference when some Demes are Hidden

    Directory of Open Access Journals (Sweden)

    Allen Rodrigo

    2006-01-01

    Full Text Available Using the structured serial coalescent with Bayesian MCMC and serial samples, we estimate population size when some demes are not sampled or are hidden, i.e., ghost demes. It is found that even in the presence of a ghost deme, accurate inference is possible if the parameters are estimated with the true model. However, with an incorrect model, estimates were biased and can be positively misleading. We extend these results to the case where there are sequences from the ghost deme at the last time sample. This case can arise in HIV patients, when some tissue samples and viral sequences only become available after death. When some sequences from the ghost deme are available at the last sampling time, estimation bias is reduced and accurate estimation of parameters associated with the ghost deme is possible despite sampling bias. Migration rate estimates for this case are also shown to be accurate when migration values are low.

  6. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach

    Directory of Open Access Journals (Sweden)

    L. Seliske

    2016-06-01

    Full Text Available Background: An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. Methods: A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400–700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Results: Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and in an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. Conclusions: This is among the first studies to apply a full Bayesian model to complex

  7. Bayesian Noise Estimation for Non-ideal Cosmic Microwave Background Experiments

    Science.gov (United States)

    Wehus, I. K.; Næss, S. K.; Eriksen, H. K.

    2012-03-01

    We describe a Bayesian framework for estimating the time-domain noise covariance of cosmic microwave background (CMB) observations, typically parameterized in terms of a 1/f frequency profile. This framework is based on the Gibbs sampling algorithm, which allows for exact marginalization over nuisance parameters through conditional probability distributions. In this paper, we implement support for gaps in the data streams and marginalization over fixed time-domain templates, and also outline how to marginalize over confusion from CMB fluctuations, which may be important for high signal-to-noise experiments. As a by-product of the method, we obtain proper constrained realizations, which themselves can be useful for map making. To validate the algorithm, we demonstrate that the reconstructed noise parameters and corresponding uncertainties are unbiased using simulated data. The CPU time required to process a single data stream of 100,000 samples with 1000 samples removed by gaps is 3 s if only the maximum posterior parameters are required, and 21 s if one also wants to obtain the corresponding uncertainties by Gibbs sampling.
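
    The flavor of the Gibbs scheme can be sketched on a toy stream: alternately draw a fixed template's amplitude and the white-noise variance from their conditional posteriors. The full framework additionally handles 1/f profiles, gaps, and CMB confusion, all of which are omitted here; the Jeffreys prior and simulated data are assumptions.

```python
# Toy Gibbs sampler: alternate between the template amplitude and the
# white-noise variance (Jeffreys prior on the variance). The full framework
# additionally handles 1/f noise profiles, gaps, and CMB confusion.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
template = np.sin(np.arange(n) / 50.0)  # known, fixed time-domain template
d = 0.3 * template + 0.8 * rng.standard_normal(n)  # simulated data stream

amp, var, samples = 0.0, 1.0, []
for _ in range(2000):
    # amplitude | variance: Gaussian conditional posterior
    prec = template @ template / var
    mean = (template @ d / var) / prec
    amp = mean + rng.standard_normal() / np.sqrt(prec)
    # variance | amplitude: inverse-gamma conditional posterior
    ss = np.sum((d - amp * template) ** 2)
    var = (ss / 2.0) / rng.gamma(n / 2.0)  # draw from InvGamma(n/2, ss/2)
    samples.append((amp, var))
samples = np.array(samples)[500:]  # discard burn-in
print("amplitude:", samples[:, 0].mean(),
      "noise std:", np.sqrt(samples[:, 1]).mean())
```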

  8. BAYESIAN NOISE ESTIMATION FOR NON-IDEAL COSMIC MICROWAVE BACKGROUND EXPERIMENTS

    International Nuclear Information System (INIS)

    Wehus, I. K.; Næss, S. K.; Eriksen, H. K.

    2012-01-01

    We describe a Bayesian framework for estimating the time-domain noise covariance of cosmic microwave background (CMB) observations, typically parameterized in terms of a 1/f frequency profile. This framework is based on the Gibbs sampling algorithm, which allows for exact marginalization over nuisance parameters through conditional probability distributions. In this paper, we implement support for gaps in the data streams and marginalization over fixed time-domain templates, and also outline how to marginalize over confusion from CMB fluctuations, which may be important for high signal-to-noise experiments. As a by-product of the method, we obtain proper constrained realizations, which themselves can be useful for map making. To validate the algorithm, we demonstrate that the reconstructed noise parameters and corresponding uncertainties are unbiased using simulated data. The CPU time required to process a single data stream of 100,000 samples with 1000 samples removed by gaps is 3 s if only the maximum posterior parameters are required, and 21 s if one also wants to obtain the corresponding uncertainties by Gibbs sampling.

  9. BAYESIAN NOISE ESTIMATION FOR NON-IDEAL COSMIC MICROWAVE BACKGROUND EXPERIMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Wehus, I. K. [Theoretical Physics, Imperial College London, London SW7 2AZ (United Kingdom); Naess, S. K.; Eriksen, H. K., E-mail: i.k.wehus@fys.uio.no, E-mail: sigurdkn@astro.uio.no, E-mail: h.k.k.eriksen@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, N-0315 Oslo (Norway)

    2012-03-01

    We describe a Bayesian framework for estimating the time-domain noise covariance of cosmic microwave background (CMB) observations, typically parameterized in terms of a 1/f frequency profile. This framework is based on the Gibbs sampling algorithm, which allows for exact marginalization over nuisance parameters through conditional probability distributions. In this paper, we implement support for gaps in the data streams and marginalization over fixed time-domain templates, and also outline how to marginalize over confusion from CMB fluctuations, which may be important for high signal-to-noise experiments. As a by-product of the method, we obtain proper constrained realizations, which themselves can be useful for map making. To validate the algorithm, we demonstrate that the reconstructed noise parameters and corresponding uncertainties are unbiased using simulated data. The CPU time required to process a single data stream of 100,000 samples with 1000 samples removed by gaps is 3 s if only the maximum posterior parameters are required, and 21 s if one also wants to obtain the corresponding uncertainties by Gibbs sampling.

  10. Internal Dosimetry Intake Estimation using Bayesian Methods

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.; Martz, H.F.

    1999-01-01

    New methods for the inverse problem of internal dosimetry are proposed, based on evaluating expectations of the Bayesian posterior probability distribution of intake amounts given bioassay measurements. These expectation integrals are normally of very high dimension and hence impractical to compute directly. However, the expectations can be algebraically transformed into a sum of terms representing different numbers of intakes, with a Poisson distribution of the number of intakes. This sum often converges rapidly when the average number of intakes for a population is small. A simplified algorithm using data unfolding is described (UF code). (author)

  11. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to help researchers rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
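
    A minimal rejection-style sketch of the adaptive-tolerance idea (not the released library; the model, summaries, and tuning constants are stand-ins): each generation keeps the particles whose simulated data fall within a distance quantile of the previous generation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    observed = rng.poisson(lam=4.0, size=200)      # stand-in epidemiological counts

    def simulate(lam):
        return rng.poisson(lam=lam, size=200)

    def distance(x, y):
        # Simple summary-based distance; the paper uses a kernel density estimate.
        return abs(x.mean() - y.mean()) + abs(x.var() - y.var())

    particles = rng.uniform(0.1, 10.0, size=1000)  # prior draws for lambda
    for generation in range(4):
        dists = np.array([distance(simulate(lam), observed) for lam in particles])
        eps = np.quantile(dists, 0.3)              # adaptive tolerance
        survivors = particles[dists <= eps]
        # Resample and perturb survivors to form the next generation.
        particles = rng.choice(survivors, size=1000) + rng.normal(0, 0.1, size=1000)
        particles = np.clip(particles, 0.1, 10.0)

    print("posterior mean of lambda ~", particles.mean())
    ```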

  12. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    Science.gov (United States)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  13. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.

  14. How few countries will do? Comparative survey analysis from a Bayesian perspective

    Directory of Open Access Journals (Sweden)

    Joop J.C.M. Hox

    2012-07-01

    Full Text Available Meuleman and Billiet (2009) carried out a simulation study aimed at the question of how many countries are needed for accurate multilevel SEM estimation in comparative studies. The authors concluded that a sample of 50 to 100 countries is needed for accurate estimation. Recently, Bayesian estimation methods have been introduced in structural equation modeling which should work well with much lower sample sizes. The current study reanalyzes the simulation of Meuleman and Billiet using Bayesian estimation to find the lowest number of countries needed when conducting multilevel SEM. The main result of our simulations is that a sample of about 20 countries is sufficient for accurate Bayesian estimation, which makes multilevel SEM practicable for the number of countries commonly available in large scale comparative surveys.

  15. TensorFlow Agents: Efficient Batched Reinforcement Learning in TensorFlow

    OpenAIRE

    Hafner, Danijar; Davidson, James; Vanhoucke, Vincent

    2017-01-01

    We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel witho...
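
    The batching idea can be illustrated without the TensorFlow Agents API (the class names and toy dynamics below are invented for the sketch; the real library additionally isolates each environment in its own Python process):

    ```python
    import numpy as np

    class ToyEnv:
        def reset(self):
            self.state = np.zeros(4, dtype=np.float32)
            return self.state
        def step(self, action):
            self.state = self.state + action        # trivial stand-in dynamics
            reward, done = -float(np.sum(self.state ** 2)), False
            return self.state, reward, done

    class BatchEnv:
        """Steps a list of environments and stacks the results so the policy
        network can run once per step on the whole batch."""
        def __init__(self, envs):
            self.envs = envs
        def reset(self):
            return np.stack([env.reset() for env in self.envs])
        def step(self, actions):
            obs, rewards, dones = zip(*(env.step(a)
                                        for env, a in zip(self.envs, actions)))
            return np.stack(obs), np.array(rewards), np.array(dones)

    batch = BatchEnv([ToyEnv() for _ in range(8)])
    observations = batch.reset()                    # shape (8, 4): one network call
    observations, rewards, dones = batch.step(np.ones((8, 4), dtype=np.float32))
    print(observations.shape, rewards.shape)
    ```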

  16. A RENORMALIZATION PROCEDURE FOR TENSOR MODELS AND SCALAR-TENSOR THEORIES OF GRAVITY

    OpenAIRE

    SASAKURA, NAOKI

    2010-01-01

    Tensor models are more-index generalizations of the so-called matrix models, and provide models of quantum gravity with the idea that spaces and general relativity are emergent phenomena. In this paper, a renormalization procedure for the tensor models whose dynamical variable is a totally symmetric real three-tensor is discussed. It is proven that configurations with certain Gaussian forms are the attractors of the three-tensor under the renormalization procedure. Since these Gaussian config...

  17. Ultrasound elastic tensor imaging: comparison with MR diffusion tensor imaging in the myocardium

    Science.gov (United States)

    Lee, Wei-Ning; Larrat, Benoît; Pernot, Mathieu; Tanter, Mickaël

    2012-08-01

    We have previously proven the feasibility of ultrasound-based shear wave imaging (SWI) to non-invasively characterize myocardial fiber orientation in both in vitro porcine and in vivo ovine hearts. The SWI-estimated results correlated well with histology. In this study, we proposed a new and robust fiber angle estimation method through a tensor-based approach for SWI, coined together as elastic tensor imaging (ETI), and compared it with magnetic resonance diffusion tensor imaging (DTI), a current gold standard and extensively reported non-invasive imaging technique for mapping fiber architecture. Fresh porcine (n = 5) and ovine (n = 5) myocardial samples (20 × 20 × 30 mm³) were studied. ETI was performed first to generate shear waves and to acquire the wave events at an ultrafast frame rate (8000 fps). A 2.8 MHz phased array probe (pitch = 0.28 mm), connected to a prototype ultrasound scanner, was mounted on a customized MRI-compatible rotation device, which allowed both the rotation of the probe from -90° to 90° at 5° increments and co-registration between the two imaging modalities. Transmural shear wave speed was first estimated along all realized propagation directions. The fiber angles were then determined from the shear wave speed map using the least-squares method and eigen decomposition. The test myocardial sample together with the rotation device was then placed inside a 7T MRI scanner. Diffusion was encoded in six directions. A total of 270 diffusion-weighted images (b = 1000 s mm⁻², FOV = 30 mm, matrix size = 60 × 64, TR = 6 s, TE = 19 ms, 24 averages) and 45 B0 images were acquired in 14 h 30 min. The fiber structure was analyzed with the fiber-tracking module in the MedINRIA software. The fiber orientation in the overlapped myocardial region accessed by both ETI and DTI was then compared, thanks to the co-registered imaging system. Results from all ten samples showed good correlation (r2 = 0.81, p 0.05, unpaired, one-tailed t-test, N = 10). In

  18. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development, while Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)

  19. Bayesian molecular dating: opening up the black box.

    Science.gov (United States)

    Bromham, Lindell; Duchêne, Sebastián; Hua, Xia; Ritchie, Andrew M; Duchêne, David A; Ho, Simon Y W

    2018-05-01

    Molecular dating analyses allow evolutionary timescales to be estimated from genetic data, offering an unprecedented capacity for investigating the evolutionary past of all species. These methods require us to make assumptions about the relationship between genetic change and evolutionary time, often referred to as a 'molecular clock'. Although initially regarded with scepticism, molecular dating has now been adopted in many areas of biology. This broad uptake has been due partly to the development of Bayesian methods that allow complex aspects of molecular evolution, such as variation in rates of change across lineages, to be taken into account. But in order to do this, Bayesian dating methods rely on a range of assumptions about the evolutionary process, which vary in their degree of biological realism and empirical support. These assumptions can have substantial impacts on the estimates produced by molecular dating analyses. The aim of this review is to open the 'black box' of Bayesian molecular dating and have a look at the machinery inside. We explain the components of these dating methods, the important decisions that researchers must make in their analyses, and the factors that need to be considered when interpreting results. We illustrate the effects that the choices of different models and priors can have on the outcome of the analysis, and suggest ways to explore these impacts. We describe some major research directions that may improve the reliability of Bayesian dating. The goal of our review is to help researchers to make informed choices when using Bayesian phylogenetic methods to estimate evolutionary rates and timescales. © 2017 Cambridge Philosophical Society.

  20. Bayesian analyses of seasonal runoff forecasts

    Science.gov (United States)

    Krzysztofowicz, R.; Reese, S.

    1991-12-01

    Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971–1988.
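
    As a minimal illustration of the calibration step in item (i), a normal-normal update of a climatological prior by a noisy forecast; all numbers and the linear forecast model are invented for the sketch:

    ```python
    import numpy as np

    mu0, sigma0 = 500.0, 120.0      # prior (climatological) runoff mean/sd, in mm
    a, b, sigma_f = 0.0, 1.0, 80.0  # forecast model: forecast = a + b*runoff + noise

    forecast = 430.0
    # Posterior of runoff given the forecast (conjugate normal update).
    precision = 1.0 / sigma0**2 + b**2 / sigma_f**2
    post_var = 1.0 / precision
    post_mean = post_var * (mu0 / sigma0**2 + b * (forecast - a) / sigma_f**2)
    print(f"posterior runoff: {post_mean:.0f} +/- {np.sqrt(post_var):.0f}")
    ```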

  1. Time integration of tensor trains

    OpenAIRE

    Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart

    2014-01-01

    A robust and efficient time integrator for dynamical tensor approximation in the tensor train or matrix product state format is presented. The method is based on splitting the projector onto the tangent space of the tensor manifold. The algorithm can be used for updating time-dependent tensors in the given data-sparse tensor train / matrix product state format and for computing an approximate solution to high-dimensional tensor differential equations within this data-sparse format. The formul...

  2. Bayesian inference for disease prevalence using negative binomial group testing

    Science.gov (United States)

    Pritchard, Nicholas A.; Tebbs, Joshua M.

    2011-01-01

    Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
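
    The paper derives closed-form posteriors; the hedged sketch below instead uses a grid approximation to show the same model structure, with a flat prior and illustrative counts (pools of size k tested until r positive pools have been observed):

    ```python
    import numpy as np

    k, r, y = 10, 5, 40            # pool size; positives required; negatives observed
    p_grid = np.linspace(1e-4, 0.2, 2000)
    theta = 1.0 - (1.0 - p_grid) ** k          # probability a pool tests positive
    log_like = r * np.log(theta) + y * np.log1p(-theta)   # negative binomial kernel
    prior = np.ones_like(p_grid)               # flat prior; the paper explores others
    post = np.exp(log_like - log_like.max()) * prior
    post /= np.trapz(post, p_grid)

    mean = np.trapz(p_grid * post, p_grid)
    cdf = np.cumsum(post) * (p_grid[1] - p_grid[0])
    lo, hi = p_grid[np.searchsorted(cdf, 0.025)], p_grid[np.searchsorted(cdf, 0.975)]
    print(f"posterior mean {mean:.4f}, 95% credible interval ({lo:.4f}, {hi:.4f})")
    ```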

  3. Tensor spaces and exterior algebra

    CERN Document Server

    Yokonuma, Takeo

    1992-01-01

    This book explains, as clearly as possible, tensors and such related topics as tensor products of vector spaces, tensor algebras, and exterior algebras. You will appreciate Yokonuma's lucid and methodical treatment of the subject. This book is useful in undergraduate and graduate courses in multilinear algebra. Tensor Spaces and Exterior Algebra begins with basic notions associated with tensors. To facilitate understanding of the definitions, Yokonuma often presents two or more different ways of describing one object. Next, the properties and applications of tensors are developed, including the classical definition of tensors and the description of relative tensors. Also discussed are the algebraic foundations of tensor calculus and applications of exterior algebra to determinants and to geometry. This book closes with an examination of algebraic systems with bilinear multiplication. In particular, Yokonuma discusses the theory of replicas of Chevalley and several properties of Lie algebras deduced from them.

  4. An automated method for estimating reliability of grid systems using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Emmanuel Ramirez-Marquez, Jose

    2012-01-01

    Grid computing has become relevant due to its applications to large-scale resource sharing, wide-area information transfer, and multi-institutional collaboration. In general, in grid computing a service requests the use of a set of resources, available in a grid, to complete certain tasks. Although analysis tools and techniques for these types of systems have been studied, grid reliability estimates are generally computation-intensive to obtain due to the complexity of the system. Moreover, conventional reliability models rest on some common assumptions that cannot be applied to grid systems. Therefore, new analytical methods are needed for effective and accurate assessment of grid reliability. This study presents a new method for estimating grid service reliability which, unlike previous studies, does not require prior knowledge about the grid system structure. Moreover, the proposed method does not rely on any assumptions about the link and node failure rates. The approach is based on a data-mining algorithm, K2, which discovers the grid system structure from raw historical system data and allows minimum resource spanning trees (MRSTs) to be found within the grid; it then uses Bayesian networks (BNs) to model the MRSTs and estimate grid service reliability.

  5. Estimation of CO2 flux from targeted satellite observations: a Bayesian approach

    International Nuclear Information System (INIS)

    Cox, Graham

    2014-01-01

    We consider the estimation of carbon dioxide flux at the ocean–atmosphere interface, given weighted averages of the mixing ratio in a vertical atmospheric column. In particular we examine the dependence of the posterior covariance on the weighting function used in taking observations, motivated by the fact that this function is instrument-dependent, hence one needs the ability to compare different weights. The estimation problem is considered using a variational data assimilation method, which is shown to admit an equivalent infinite-dimensional Bayesian formulation. The main tool in our investigation is an explicit formula for the posterior covariance in terms of the prior covariance and observation operator. Using this formula, we compare weighting functions concentrated near the surface of the earth with those concentrated near the top of the atmosphere, in terms of the resulting covariance operators. We also consider the problem of observational targeting, and ask if it is possible to reduce the covariance in a prescribed direction through an appropriate choice of weighting function. We find that this is not the case—there exist directions in which one can never gain information, regardless of the choice of weight. (paper)
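
    In finite dimensions, the explicit posterior-covariance formula referred to above reduces to the standard linear-Gaussian update, which already supports the kind of weight comparison described (the grid size, prior kernel, and weighting functions below are illustrative, not the paper's setup):

    ```python
    import numpy as np

    # For a linear-Gaussian model y = Hx + e,
    #   C_post = C - C H^T (H C H^T + R)^{-1} H C,
    # so two column-weighting functions H can be compared through C_post.

    n = 50                                    # discretized vertical column
    z = np.linspace(0.0, 1.0, n)              # 0 = surface, 1 = top of atmosphere
    C = np.exp(-np.abs(z[:, None] - z[None, :]) / 0.2)   # illustrative prior covariance
    R = np.array([[0.1]])                     # observation-noise variance

    def posterior_cov(weight):
        H = weight[None, :] / weight.sum()    # one weighted column average
        gain = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
        return C - gain @ H @ C

    surface_weight = np.exp(-z / 0.1)         # concentrated near the surface
    top_weight = np.exp(-(1.0 - z) / 0.1)     # concentrated near the top
    for name, w in [("surface", surface_weight), ("top", top_weight)]:
        print(name, f"total posterior variance: {np.trace(posterior_cov(w)):.3f}")
    ```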

  6. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  7. STRUCTURE TENSOR IMAGE FILTERING USING RIEMANNIAN L1 AND L∞ CENTER-OF-MASS

    Directory of Open Access Journals (Sweden)

    Jesus Angulo

    2014-06-01

    Full Text Available Structure tensor images are obtained by a Gaussian smoothing of the dyadic product of the gradient image. These images give at each pixel an n×n symmetric positive definite matrix SPD(n), representing the local orientation and the edge information. Processing such images requires appropriate algorithms working on the Riemannian manifold of SPD(n) matrices. This contribution deals with structure tensor image filtering based on Lp geometric averaging. In particular, the L1 center-of-mass (Riemannian median or Fermat-Weber point) and the L∞ center-of-mass (Riemannian circumcenter) can be obtained for structure tensors using recently proposed algorithms. Our contribution in this paper is to study the interest of L1 and L∞ Riemannian estimators for structure tensor image processing. In particular, we compare both for two image analysis tasks: (i) structure tensor image denoising; (ii) anomaly detection in structure tensor images.
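
    A hedged sketch of an L1 center-of-mass for SPD matrices, computed here in the log-Euclidean domain with a Weiszfeld iteration as a simple surrogate for the paper's Riemannian (affine-invariant) estimators; the random tensors only stand in for real structure tensors:

    ```python
    import numpy as np

    def spd_log(t):
        w, v = np.linalg.eigh(t)
        return (v * np.log(w)) @ v.T

    def spd_exp(s):
        w, v = np.linalg.eigh(s)
        return (v * np.exp(w)) @ v.T

    def log_euclidean_median(tensors, iters=100, eps=1e-9):
        """Weiszfeld iteration for the L1 center-of-mass in the log domain."""
        logs = np.array([spd_log(t) for t in tensors])
        m = logs.mean(axis=0)                   # start from the L2 mean
        for _ in range(iters):
            d = np.array([np.linalg.norm(l - m) for l in logs])
            w = 1.0 / np.maximum(d, eps)        # Weiszfeld weights
            m = (w[:, None, None] * logs).sum(axis=0) / w.sum()
        return spd_exp(m)

    rng = np.random.default_rng(2)
    def random_spd(n=2):
        a = rng.normal(size=(n, n))
        return a @ a.T + n * np.eye(n)

    structure_tensors = [random_spd() for _ in range(20)]
    print(log_euclidean_median(structure_tensors))
    ```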

  8. The direct tensor solution and higher-order acquisition schemes for generalized diffusion tensor imaging

    NARCIS (Netherlands)

    Akkerman, Erik M.

    2010-01-01

    Both in diffusion tensor imaging (DTI) and in generalized diffusion tensor imaging (GDTI) the relation between the diffusion tensor and the measured apparent diffusion coefficients is given by a tensorial equation, which needs to be inverted in order to solve for the diffusion tensor. The traditional
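
    For standard rank-2 DTI, the inversion mentioned above is an ordinary least-squares solve, since each apparent diffusion coefficient adc = gᵀDg is linear in the six unique tensor components; a minimal sketch with synthetic, noise-free data (the gradient set and tensor values are illustrative):

    ```python
    import numpy as np

    def design_row(g):
        gx, gy, gz = g
        return np.array([gx*gx, gy*gy, gz*gz, 2*gx*gy, 2*gx*gz, 2*gy*gz])

    def fit_tensor(gradients, adcs):
        A = np.stack([design_row(g) for g in gradients])
        dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(A, adcs, rcond=None)[0]
        return np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])

    rng = np.random.default_rng(3)
    true_D = np.diag([1.7e-3, 0.4e-3, 0.4e-3])      # prolate tensor, mm^2/s
    gradients = rng.normal(size=(12, 3))
    gradients /= np.linalg.norm(gradients, axis=1, keepdims=True)
    adcs = np.einsum('ij,jk,ik->i', gradients, true_D, gradients)  # g^T D g per row
    print(np.allclose(fit_tensor(gradients, adcs), true_D))        # True (noise-free)
    ```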

  9. Parameterizing Bayesian Network Representations of Social-Behavioral Models by Expert Elicitation

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Stephen J.; Dalton, Angela C.; Whitney, Paul D.; White, Amanda M.

    2010-05-23

    Bayesian networks provide a general framework with which to model many natural phenomena. The mathematical nature of Bayesian networks enables a plethora of model validation and calibration techniques: e.g., parameter estimation, goodness-of-fit tests, and diagnostic checking of the model assumptions. However, they are not free of shortcomings. Parameter estimation from relevant extant data is a common approach to calibrating the model parameters. In practice it is not uncommon to find oneself lacking adequate data to reliably estimate all model parameters. In this paper we present the early development of a novel application of conjoint analysis as a method for eliciting and modeling expert opinions and using the results in a methodology for calibrating the parameters of a Bayesian network.

  10. Tucker Tensor analysis of Matern functions in spatial statistics

    KAUST Repository

    Litvinenko, Alexander

    2018-03-09

    In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools will drastically reduce the required computing time and the storage cost, and, therefore, will allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matern- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, determinant, trace, loglikelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations substantially reduce the computing and storage costs. For example, the storage cost is reduced from an exponential O(n^d) to a linear scaling O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for applicability of the proposed techniques are the assumptions that the data, locations, and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance, ||x-y||.
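
    A minimal sketch of the Tucker compression step on a grid of covariance-function values, via a truncated higher-order SVD (the ranks, grid size, and exponential kernel are illustrative; this is not the paper's code):

    ```python
    import numpy as np

    def unfold(T, mode):
        """Matricize T along one mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd(T, ranks):
        """Truncated HOSVD: Tucker core plus one orthonormal factor per mode."""
        factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                   for m, r in enumerate(ranks)]
        core = T
        for u in factors:        # contracting axis 0 each time cycles the modes
            core = np.tensordot(core, u, axes=([0], [0]))
        return core, factors

    n = 30
    x = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    # Values of an exponential (Matern nu = 1/2) covariance against the origin.
    T = np.exp(-np.sqrt(X**2 + Y**2 + Z**2) / 0.3)

    core, factors = hosvd(T, ranks=(5, 5, 5))
    approx = core
    for u in factors:
        approx = np.tensordot(approx, u, axes=([0], [1]))
    # Storage drops from n**3 = 27000 values to 5**3 + 3*30*5 = 575.
    print("relative error:", np.linalg.norm(approx - T) / np.linalg.norm(T))
    ```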

  11. Friction tensor for a pair of Brownian particles: Spurious finite-size effects and molecular dynamics estimates

    International Nuclear Information System (INIS)

    Bocquet, L.; Hansen, J.P.; Piasecki, J.

    1997-01-01

    In this work, we show that in any finite system, the binary friction tensor for two Brownian particles cannot be directly estimated from an evaluation of the microscopic Green-Kubo formula, involving the time integral of force-force autocorrelation functions. This pitfall is associated with a subtle inversion of the thermodynamic and long-time limits and leads to spurious results for the estimates of the friction matrix based on molecular dynamics simulations. Starting from a careful analysis of the coupled Langevin equations for two interacting Brownian particles, we derive a method to circumvent these effects and extract the binary friction tensor from the correlation function matrix of the instantaneous forces exerted by the bath particles on the fixed Brownian particles, and from the relaxation of the total momentum of the bath in a finite system. The general methodology is applied to the case of two hard or soft Brownian spheres in a bath of light particles. Numerical estimates of the relevant correlation functions and of the resulting self and mutual components of the matrix of friction tensors are obtained by molecular dynamics simulations for various spacings between the Brownian particles.

  12. Identification of transmissivity fields using a Bayesian strategy and perturbative approach

    Science.gov (United States)

    Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.

    2017-10-01

    The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the chosen covariance model) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY² = 1.0 and σY² = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating has advantages related to both the Monte-Carlo (MC) and non-MC approaches. In fact, like MC methods, Bayesian updating allows computing the direct posterior probability distribution of the target quantities, and like non-MC methods it has computational times on the order of seconds.

  13. Bayesian Estimation of the Logistic Positive Exponent IRT Model

    Science.gov (United States)

    Bolfarine, Heleno; Bazan, Jorge Luis

    2010-01-01

    A Bayesian inference approach using Markov Chain Monte Carlo (MCMC) is developed for the logistic positive exponent (LPE) model proposed by Samejima and for a new skewed Logistic Item Response Theory (IRT) model, named Reflection LPE model. Both models lead to asymmetric item characteristic curves (ICC) and can be appropriate because a symmetric…

  14. Bayesian Methods for Predicting the Shape of Chinese Yam in Terms of Key Diameters

    Directory of Open Access Journals (Sweden)

    Mitsunori Kayano

    2017-01-01

    Full Text Available This paper proposes Bayesian methods for the shape estimation of Chinese yam (Dioscorea opposita) using a few key diameters of the yam. Shape prediction of yam is applicable to determining optimal cutoff positions of a yam for producing seed yams. Our Bayesian method, which is a combination of a Bayesian estimation model and a predictive model, enables automatic, rapid, and low-cost processing of yam. After the construction of the proposed models using a sample data set in Japan, the models provide whole-shape predictions of yam based on only a few key diameters. The Bayesian method performed well on the shape prediction in terms of minimizing the mean squared error between the measured shape and the prediction. In particular, a multiple regression method with key diameters at two fixed positions attained the highest performance for shape prediction. We have developed automatic, rapid, and low-cost yam-processing machines based on the Bayesian estimation model and predictive model. Development of such shape prediction approaches, including our Bayesian method, can be a valuable aid in reducing the cost and time in food processing.

  15. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability

  16. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximative LOO cross-validation estimators...
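
    A simpler relative of these approximations, plain importance-sampling LOO from posterior draws, can be sketched in a few lines (a conjugate toy model, not the article's Laplace/EP machinery):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    y = rng.normal(loc=1.0, scale=1.0, size=50)

    # Posterior draws for the mean of a N(mu, 1) model with a flat prior.
    S = 4000
    mu_draws = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(len(y)), size=S)

    def log_lik(yi, mu):
        return -0.5 * np.log(2 * np.pi) - 0.5 * (yi - mu) ** 2

    # Reuse the S draws for every held-out point: draw s gets weight 1/p(y_i | mu_s).
    elpd_loo = 0.0
    for yi in y:
        ll = log_lik(yi, mu_draws)                    # shape (S,)
        log_w = -ll - np.logaddexp.reduce(-ll)        # normalized IS weights (log)
        elpd_loo += np.logaddexp.reduce(log_w + ll)   # log E_w[p(yi | mu)]
    print("estimated elpd_loo:", elpd_loo)
    ```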

  17. Diffusion tensor and diffusion weighted imaging. Pictorial mathematics

    Energy Technology Data Exchange (ETDEWEB)

    Nakada, Tsutomu [California Univ., Davis, CA (United States)]

    1995-06-01

    A new imaging algorithm for the treatment of a second-order apparent diffusion tensor, D_app^ξ, is described. The method calls only for mathematics of images (pictorial mathematics), without the need for eigenvalue/eigenvector estimation. Nevertheless, it is capable of extracting properties of D_app^ξ invariant to the observation axes. While the trace image is an example of images weighted by an invariant of the tensor matrix, three-dimensional anisotropy contrast (3DAC) represents an imaging method that makes use of the anisotropic direction of the tensor ellipsoid, producing colour-coded contrast of exceptionally high anatomic resolution. Contrary to intuition, the processes require only a simple algorithm directly applicable to clinical magnetic resonance imaging (MRI). As a contrast method that precisely represents the physical characteristics of a target tissue, invariant D_app^ξ images produced by pictorial mathematics possess significant potential for a number of biological and clinical applications. (author).

  18. A Bayesian and Physics-Based Ground Motion Parameters Map Generation System

    Science.gov (United States)

    Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.

    2014-12-01

    We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. An initial data processing step baseline-corrects the records and removes those containing glitches or a low signal-to-noise ratio. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensors (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. Bayesian kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga and SA with T = 0.3, 0.5, 1 and 1.5 s) and numerical simulations performed in real time, using the aforementioned RGT database (for SA with T = 2, 2.5 and 3 s). Estimated intensity maps are then computed using SA(T = 2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems. In order to analyze the accuracy of the maps, we compare them against observations not considered in the

  19. Gogny interactions with tensor terms

    Energy Technology Data Exchange (ETDEWEB)

    Anguiano, M.; Lallena, A.M.; Bernard, R.N. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain); Co', G. [INFN, Lecce (Italy); De Donno, V. [Universita del Salento, Dipartimento di Matematica e Fisica "E. De Giorgi", Lecce (Italy); Grasso, M. [Universite Paris-Sud, Institut de Physique Nucleaire, IN2P3-CNRS, Orsay (France)

    2016-07-15

    We present a perturbative approach to include tensor terms in the Gogny interaction. We do not change the values of the usual parameterisations, with the only exception of the spin-orbit term, and we add tensor terms whose only free parameters are the strengths of the interactions. We identify observables sensitive to the presence of the tensor force in Hartree-Fock, Hartree-Fock-Bogoliubov and random phase approximation calculations. We show the need to include at least two tensor contributions: a pure tensor term and a tensor-isospin term. We show results relevant for the inclusion of the tensor term for single-particle energies, charge-conserving magnetic excitations and Gamow-Teller excitations. (orig.)

  20. Adaptive estimation of multivariate functions using conditionally Gaussian tensor-product spline priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2012-01-01

    We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline

  1. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    Science.gov (United States)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches can adequately treat the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
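
    The MH machinery itself is generic; a stand-in sketch with an invented one-parameter "model" and Gaussian likelihood (WASMOD, the AR(1) error models, and the modularization step are not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    obs = rng.normal(2.0, 0.5, size=100)              # stand-in "streamflow" data

    def log_posterior(theta):
        if not (0.0 < theta < 10.0):                  # uniform prior bounds
            return -np.inf
        return -0.5 * np.sum((obs - theta) ** 2 / 0.25)   # Normal likelihood, sd 0.5

    chain, theta = [], 5.0
    log_p = log_posterior(theta)
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.1)             # random-walk proposal
        log_p_prop = log_posterior(prop)
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = prop, log_p_prop
        chain.append(theta)
    posterior = np.array(chain[1000:])                # discard burn-in
    print(f"posterior mean {posterior.mean():.3f}, 95% CI "
          f"({np.quantile(posterior, 0.025):.3f}, {np.quantile(posterior, 0.975):.3f})")
    ```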

  2. Tensor structure for Nori motives

    OpenAIRE

    Barbieri-Viale, Luca; Huber, Annette; Prest, Mike

    2018-01-01

    We construct a tensor product on Freyd's universal abelian category attached to an additive tensor category or a tensor quiver and establish a universal property. This is used to give an alternative construction for the tensor product on Nori motives.

  3. Tensor SOM and tensor GTM: Nonlinear tensor analysis by topographic mappings.

    Science.gov (United States)

    Iwasaki, Tohru; Furukawa, Tetsuo

    2016-05-01

    In this paper, we propose nonlinear tensor analysis methods: the tensor self-organizing map (TSOM) and the tensor generative topographic mapping (TGTM). TSOM is a straightforward extension of the self-organizing map from high-dimensional data to tensorial data, and TGTM is an extension of the generative topographic map, which provides a theoretical background for TSOM using a probabilistic generative model. These methods are useful tools for analyzing and visualizing tensorial data, especially multimodal relational data. For given n-mode relational data, TSOM and TGTM can simultaneously organize a set of n-topographic maps. Furthermore, they can be used to explore the tensorial data space by interactively visualizing the relationships between modes. We present the TSOM algorithm and a theoretical description from the viewpoint of TGTM. Various TSOM variations and visualization techniques are also described, along with some applications to real relational datasets. Additionally, we attempt to build a comprehensive description of the TSOM family by adapting various data structures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    Science.gov (United States)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km from the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show consistent travel times and amplitude behavior as those predicted by the 1D model. Due to limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way for the new-model development when more data are collected.

  5. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    Science.gov (United States)

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
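
    A sketch of the modelling idea on synthetic data (not the study's code, data, or predictors), using MAP estimation under a weakly informative Gaussian prior rather than full posterior sampling:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    n, p = 500, 3                                    # episodes; stand-in predictors
    X = np.c_[np.ones(n), rng.normal(size=(n, p))]
    true_beta = np.array([-2.0, 1.0, 0.5, 0.0])
    y = rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_beta))   # "DRG revised?"

    def neg_log_posterior(beta, prior_sd=2.5):
        z = X @ beta
        log_lik = np.sum(y * z - np.logaddexp(0, z))       # Bernoulli log-likelihood
        log_prior = -0.5 * np.sum((beta / prior_sd) ** 2)  # weakly informative prior
        return -(log_lik + log_prior)

    fit = minimize(neg_log_posterior, np.zeros(p + 1), method="BFGS")
    print("MAP coefficients:", fit.x.round(2))
    ```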

  6. Physical and Geometric Interpretations of the Riemann Tensor, Ricci Tensor, and Scalar Curvature

    OpenAIRE

    Loveridge, Lee C.

    2004-01-01

    Various interpretations of the Riemann Curvature Tensor, Ricci Tensor, and Scalar Curvature are described. Also, the physical meanings of the Einstein Tensor and Einstein's Equations are discussed. Finally a derivation of Newtonian Gravity from Einstein's Equations is given.

  7. Generalized tensor-based morphometry of HIV/AIDS using multivariate statistics on deformation tensors.

    Science.gov (United States)

    Lepore, N; Brun, C; Chou, Y Y; Chiang, M C; Dutton, R A; Hayashi, K M; Luders, E; Lopez, O L; Aizenstein, H J; Toga, A W; Becker, J T; Thompson, P M

    2008-01-01

    This paper investigates the performance of a new multivariate method for tensor-based morphometry (TBM). Statistics on Riemannian manifolds are developed that exploit the full information in deformation tensor fields. In TBM, multiple brain images are warped to a common neuroanatomical template via 3-D nonlinear registration; the resulting deformation fields are analyzed statistically to identify group differences in anatomy. Rather than study the Jacobian determinant (volume expansion factor) of these deformations, as is common, we retain the full deformation tensors and apply a manifold version of Hotelling's T² test to them, in a log-Euclidean domain. In 2-D and 3-D magnetic resonance imaging (MRI) data from 26 HIV/AIDS patients and 14 matched healthy subjects, we compared multivariate tensor analysis versus univariate tests of simpler tensor-derived indices: the Jacobian determinant, the trace, geodesic anisotropy, and eigenvalues of the deformation tensor, and the angle of rotation of its eigenvectors. We detected consistent, but more extensive patterns of structural abnormalities, with multivariate tests on the full tensor manifold. Their improved power was established by analyzing cumulative p-value plots using false discovery rate (FDR) methods, appropriately controlling for false positives. This increased detection sensitivity may empower drug trials and large-scale studies of disease that use tensor-based morphometry.

  8. Anisotropic conductivity tensor imaging in MREIT using directional diffusion rate of water molecules

    International Nuclear Information System (INIS)

    Kwon, Oh In; Jeong, Woo Chul; Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je

    2014-01-01

    Magnetic resonance electrical impedance tomography (MREIT) is an emerging method to visualize electrical conductivity and/or current density images at low frequencies (below 1 kHz). Currents are injected into an imaging object, and one component of the induced magnetic flux density is acquired using an MRI scanner for isotropic conductivity image reconstructions. Diffusion tensor MRI (DT-MRI) measures the intrinsic three-dimensional diffusion property of water molecules within a tissue. It characterizes the anisotropic water transport by the effective diffusion tensor. Combining the DT-MRI and MREIT techniques, we propose a novel direct method for absolute conductivity tensor image reconstruction based on a linear relationship between the water diffusion tensor and the electrical conductivity tensor. We first recover the projected current density, which is the best approximation of the internal current density one can obtain from the measured single component of the induced magnetic flux density. This enables us to estimate a scale factor between the diffusion tensor and the conductivity tensor. Combining these values at all pixels with the acquired diffusion tensor map, we can quantitatively recover the anisotropic conductivity tensor map. From numerical simulations and experimental verifications using a biological tissue phantom, we found that the new method overcomes the limitations of each method and successfully reconstructs both the direction and magnitude of the conductivity tensor for both the anisotropic and isotropic regions. (paper)

  9. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, while implied-weights parsimony performs most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  10. Development of the Tensoral Computer Language

    Science.gov (United States)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  11. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    Science.gov (United States)

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by its number of nonzero entries (the l0 norm) / number of nonzero singular values (the rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high-order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high-order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR for short), which encodes both sparsity insights delivered by the Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond the state of the art.

  12. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Science.gov (United States)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
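
    The BMC recipe itself is compact: draw parameters from the subjective prior, run the model, and weight each draw by the likelihood of the observations. A stand-in sketch (the model, observation, and prior below are invented, not the study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def model(k):                                   # stand-in for the air quality model
        return 40.0 + 30.0 * k                      # "predicted ozone" vs. a rate constant

    obs, obs_sd = 58.0, 5.0                         # observed concentration and its error

    k_prior = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=10_000)
    pred = model(k_prior)
    log_w = -0.5 * ((pred - obs) / obs_sd) ** 2     # Gaussian likelihood weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    post_mean = np.sum(w * pred)
    post_sd = np.sqrt(np.sum(w * (pred - post_mean) ** 2))
    print(f"prior prediction {pred.mean():.1f} +/- {pred.std():.1f}; "
          f"posterior {post_mean:.1f} +/- {post_sd:.1f}")
    ```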

  13. Killing-Yano tensors and Nambu mechanics

    International Nuclear Information System (INIS)

    Baleanu, D.

    1998-01-01

    Killing-Yano tensors were introduced in 1952 by Kentaro Yano from a mathematical point of view. The physical interpretation of Killing-Yano tensors of rank higher than two was unclear. We found that all Killing-Yano tensors η_{i1 i2 ... in} with covariant derivative zero are Nambu tensors. In the flat-space case, all Killing-Yano tensors are Nambu tensors. In the case of the Taub-NUT and Kerr-Newman metrics, Killing-Yano tensors of order two generate Nambu tensors of rank 3

  14. Relativistic plasma dielectric tensor evaluation based on the exact plasma dispersion functions concept

    International Nuclear Information System (INIS)

    Castejon, F.; Pavlov, S. S.

    2006-01-01

    The fully relativistic plasma dielectric tensor for any wave and plasma parameters is estimated on the basis of the exact plasma dispersion functions concept. The inclusion of this concept allows one to write the tensor in a closed and compact form and to reduce the tensor evaluation to the calculation of those functions. The main analytical properties of these functions are studied and two methods are given for their evaluation. The comparison of the exact dielectric tensor with the weakly relativistic approximation, widely used at present in plasma wave calculations, is given, as well as the range of plasma temperature, harmonic number, and propagation angle in which the weakly relativistic approximation is valid

  15. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  16. Degenerate Perturbation Theory for Electronic g Tensors: Leading-Order Relativistic Effects.

    Science.gov (United States)

    Rinkevicius, Zilvinas; de Almeida, Katia Julia; Oprea, Cornel I; Vahtras, Olav; Ågren, Hans; Ruud, Kenneth

    2008-11-11

    A new approach for the evaluation of the leading-order relativistic corrections to the electronic g tensors of molecules with a doublet ground state is presented. The methodology is based on degenerate perturbation theory and includes all relevant contributions to the g tensor shift up to order O(α^4) originating from the one-electron part of the Breit-Pauli Hamiltonian; that is, it allows for the treatment of scalar relativistic, spin-orbit, and mixed corrections to the spin and orbital Zeeman effects. This approach has been implemented in the framework of spin-restricted density functional theory and is, as a first illustration of the theory, applied here to study relativistic effects on the electronic g tensors of dihalogen anion radicals X2(-) (X = F, Cl, Br, I). The results indicate that the spin-orbit interaction is responsible for the large parallel component of the g tensor shift of Br2(-) and I2(-), and furthermore that both the leading-order scalar relativistic and spin-orbit corrections are of minor importance for the perpendicular component of the g tensor in these molecules, since they effectively cancel each other. In addition to investigating the g tensors of dihalogen anion radicals, we also critically examine the importance of various relativistic corrections to the electronic g tensor of linear molecules with Σ-type ground states and present a two-state model suitable for an approximate estimation of the g tensor in such molecules.

  17. Recognition of Action as a Bayesian Parameter Estimation Problem over Time

    DEFF Research Database (Denmark)

    Krüger, Volker

    2007-01-01

    In this paper we will discuss two problems related to action recognition: The first problem is that of identifying, in a surveillance scenario, whether a person is walking or running and in what rough direction. The second problem is concerned with the recovery of action primitives from observed...... complex actions. Both problems will be discussed within a statistical framework. Bayesian propagation over time offers a framework to treat likelihood observations at each time step and the dynamics between the time steps in a unified manner. The first problem will be approached as a pattern recognition...... of the Bayesian framework for action recognition and round up our discussion....

  18. Allometric Models Based on Bayesian Frameworks Give Better Estimates of Aboveground Biomass in the Miombo Woodlands

    Directory of Open Access Journals (Sweden)

    Shem Kuyah

    2016-02-01

    Full Text Available The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species, ranging in diameter at breast height (DBH) from 5 to 105 cm, using Bayesian estimation. A power-law model with DBH alone performed better than both a polynomial model with DBH and the square of DBH, and models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than that of estimates from our model. The model developed in this study can be used to establish the conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
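
    The record does not reproduce the fitted model, so the sketch below is only a hedged illustration of the general idea: fitting a power-law biomass model AGB = a·DBH^b by Bayesian linear regression on the log-log scale, with a Gaussian prior on the coefficients and a fixed noise variance. All data and numbers are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for tree data: AGB = a * DBH^b with lognormal scatter.
dbh = rng.uniform(5, 105, size=88)
agb = 0.1 * dbh**2.4 * np.exp(0.3 * rng.normal(size=dbh.size))

# Log-log linearization: log(AGB) = log(a) + b*log(DBH) + noise.
X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
y = np.log(agb)

# Conjugate Gaussian posterior with prior N(0, tau^2 I) and noise variance sigma^2.
tau2, sigma2 = 10.0, 0.3**2
S_post = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)   # posterior covariance
m_post = S_post @ (X.T @ y / sigma2)                          # posterior mean

log_a, b = m_post
print(f"a ~ {np.exp(log_a):.3f}, b ~ {b:.3f}")   # should land near 0.1 and 2.4
```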

  19. Categorical Tensor Network States

    Directory of Open Access Journals (Sweden)

    Jacob D. Biamonte

    2011-12-01

    Full Text Available We examine the use of string diagrams and the mathematics of category theory in the description of quantum states by tensor networks. This approach leads to a unification of several ideas, as well as to several results and methods that have not previously appeared in either side of the literature. Our approach enabled the development of a tensor network framework allowing a solution to the quantum decomposition problem which has several appealing features. Specifically, given an n-body quantum state |ψ〉, we present a new and general method to factor |ψ〉 into a tensor network of clearly defined building blocks. We use the solution to expose a previously unknown and large class of quantum states which we prove can be sampled efficiently and exactly. This general framework of categorical tensor network states, where a combination of generic and algebraically defined tensors appears, enhances the theory of tensor network states.

  20. Diffusion tensor imaging of the human skeletal muscle: contributions and applications

    International Nuclear Information System (INIS)

    Neji, Radhouene

    2010-01-01

    In this thesis, we present several techniques for the processing of diffusion tensor images. They span a wide range of tasks such as estimation and regularization, clustering and segmentation, as well as registration. The variational framework proposed for recovering a tensor field from noisy diffusion weighted images exploits the fact that diffusion data represent populations of fibers and therefore each tensor can be reconstructed using a weighted combination of tensors lying in its neighborhood. The segmentation approach operates both at the voxel and the fiber tract levels. It is based on the use of Mercer kernels over Gaussian diffusion probabilities to model tensor similarity and spatial interactions, allowing the definition of fiber metrics that combine information from spatial localization and diffusion tensors. Several clustering techniques can be subsequently used to segment tensor fields and fiber tractographies. Moreover, we show how to develop supervised extensions of these algorithms. The registration algorithm uses probability kernels in order to match moving and target images. The deformation consistency is assessed using the distortion induced in the distances between neighboring probabilities. Discrete optimization is used to seek an optimum of the defined objective function. The experimental validation is done over a dataset of manually segmented diffusion images of the lower leg muscle for healthy and diseased subjects. The results of the techniques developed throughout this thesis are promising. (author)
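
    The thesis's regularizer is variational; as a simpler, hedged stand-in for the idea of reconstructing a diffusion tensor from a weighted combination of its neighbours, the sketch below averages symmetric positive-definite tensors in the log-Euclidean domain, a common device that keeps the average positive-definite. The tensors and weights are invented.

```python
import numpy as np

def logm_spd(A):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def expm_sym(A):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors, weights):
    """Weighted log-Euclidean mean of SPD diffusion tensors."""
    weights = np.asarray(weights, float)
    weights /= weights.sum()
    L = sum(w * logm_spd(T) for w, T in zip(weights, tensors))
    return expm_sym(L)

# Two illustrative diffusion tensors (3x3 SPD) and neighbourhood weights.
T1 = np.diag([1.5, 0.4, 0.3])
T2 = np.diag([1.2, 0.5, 0.4])
mean = log_euclidean_mean([T1, T2], weights=[0.7, 0.3])
print(np.linalg.eigvalsh(mean))  # all positive: the mean stays SPD
```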

  1. Tensor Permutation Matrices in Finite Dimensions

    OpenAIRE

    Christian, Rakotonirina

    2005-01-01

    We have generalised the properties, with respect to the tensor product, of a 4x4 permutation matrix that we call a tensor commutation matrix. Tensor commutation matrices can be constructed with or without calculus. A formula that allows us to construct a tensor permutation matrix, which is a generalisation of the tensor commutation matrix, has been established. The expression of an element of a tensor commutation matrix has been generalised to the case of any element of a tensor permutation ma...
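
    As a hedged illustration of the simplest case, the snippet below builds the tensor commutation (swap) matrix S satisfying S(a ⊗ b) = b ⊗ a directly from its defining action; for 2-dimensional vectors this is the 4x4 matrix the record refers to.

```python
import numpy as np

def commutation_matrix(m, n):
    """Tensor commutation matrix S with S @ np.kron(a, b) == np.kron(b, a),
    for a of length m and b of length n."""
    S = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # kron(a, b) stores a_i*b_j at row i*n + j; kron(b, a) at j*m + i.
            S[j * m + i, i * n + j] = 1.0
    return S

S = commutation_matrix(2, 2)
a, b = np.array([1.0, 2.0]), np.array([3.0, 5.0])
assert np.allclose(S @ np.kron(a, b), np.kron(b, a))
print(S)
```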

  2. Tensor rank of the tripartite state |W>^⊗n

    International Nuclear Information System (INIS)

    Yu Nengkun; Guo Cheng; Duan Runyao; Chitambar, Eric

    2010-01-01

    Tensor rank refers to the number of product states needed to express a given multipartite quantum state. Its nonadditivity as an entanglement measure has recently been observed. In this Brief Report, we estimate the tensor rank of multiple copies of the tripartite state |W>=(1/√(3))(|100>+|010>+|001>). Both an upper bound and a lower bound on this rank are derived. In particular, it is proven that the rank of |W>^⊗2 is 7, thus resolving a previously open problem. Some implications of this result are discussed in terms of transformation rates between |W>^⊗n and multiple copies of the state |GHZ>=(1/√(2))(|000>+|111>).

  3. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  4. Simultaneous tensor decomposition and completion using factor priors.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark

    2014-03-01

    The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  5. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  6. The geomagnetic field gradient tensor

    DEFF Research Database (Denmark)

    Kotsiaros, Stavros; Olsen, Nils

    2012-01-01

    We develop the general mathematical basis for space magnetic gradiometry in spherical coordinates. The magnetic gradient tensor is a second rank tensor consisting of 3 × 3 = 9 spatial derivatives. Since the geomagnetic field vector B is always solenoidal (∇ · B = 0) there are only eight independent...... tensor elements. Furthermore, in current free regions the magnetic gradient tensor becomes symmetric, further reducing the number of independent elements to five. In that case B is a Laplacian potential field and the gradient tensor can be expressed in series of spherical harmonics. We present properties...... of the magnetic gradient tensor and provide explicit expressions of its elements in terms of spherical harmonics. Finally we discuss the benefit of using gradient measurements for exploring the Earth’s magnetic field from space, in particular the advantage of the various tensor elements for a better determination...

  7. Estimation of relative order tensors, and reconstruction of vectors in space using unassigned RDC data and its application

    Science.gov (United States)

    Miao, Xijiang; Mukhopadhyay, Rishi; Valafar, Homayoun

    2008-10-01

    Advances in NMR instrumentation and pulse sequence design have made the acquisition of Residual Dipolar Coupling (RDC) data easier. However, computational and theoretical analysis of this type of data has continued to challenge the international community of investigators because of its complexity and rich information content. Contemporary use of RDC data has required a priori assignment, which significantly increases the overall cost of structural analysis. This article introduces a novel algorithm that utilizes unassigned RDC data acquired from multiple alignment media (nD-RDC, n ≥ 3) for simultaneous extraction of the relative order tensor matrices and reconstruction of the interacting vectors in space. Estimation of the relative order tensors and reconstruction of the interacting vectors can be invaluable in a number of endeavors. An example application is presented in which the reconstructed vectors are used to quantify the fitness of a template protein structure to the unknown protein structure. This work has other important direct applications, such as verification of the novelty of an unknown protein and validation of the accuracy of an available protein structure model in drug design. More importantly, the presented work has the potential to bridge the gap between experimental and computational methods of structure determination.

  8. Algebraic classification of the Weyl tensor in higher dimensions based on its 'superenergy' tensor

    International Nuclear Information System (INIS)

    Senovilla, Jose M M

    2010-01-01

    The algebraic classification of the Weyl tensor in the arbitrary dimension n is recovered by means of the principal directions of its 'superenergy' tensor. This point of view can be helpful in order to compute the Weyl aligned null directions explicitly, and permits one to obtain the algebraic type of the Weyl tensor by computing the principal eigenvalue of rank-2 symmetric future tensors. The algebraic types compatible with states of intrinsic gravitational radiation can then be explored. The underlying ideas are general, so that a classification of arbitrary tensors in the general dimension can be achieved. (fast track communication)

  9. OFDM receiver for fast time-varying channels using block-sparse Bayesian learning

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Manchón, Carles Navarro; Rom, Christian

    2016-01-01

    characterized with a basis expansion model using a small number of terms. As a result, the channel estimation problem is posed as that of estimating a vector of complex coefficients that exhibits a block-sparse structure, which we solve with tools from block-sparse Bayesian learning. Using variational Bayesian...... inference, we embed the channel estimator in a receiver structure that performs iterative channel and noise precision estimation, intercarrier interference cancellation, detection and decoding. Simulation results illustrate the superior performance of the proposed receiver over state-of-art receivers....

  10. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....

  11. Inference in hybrid Bayesian networks

    International Nuclear Information System (INIS)

    Langseth, Helge; Nielsen, Thomas D.; Rumi, Rafael; Salmeron, Antonio

    2009-01-01

    Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for Boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (the so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  12. A moment-tensor catalog for intermediate magnitude earthquakes in Mexico

    Science.gov (United States)

    Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala; Martínez-Peláez, Liliana; Franco, Sara; Iglesias Mendoza, Arturo

    2016-04-01

    Located among five tectonic plates, Mexico is one of the world's most seismically active regions. Earthquake focal mechanisms provide important information on the active tectonics. A widespread technique for estimating the earthquake magnitude and focal mechanism is the inversion for the moment tensor, obtained by minimizing a misfit function that estimates the difference between synthetic and observed seismograms. An important element in the estimation of the moment tensor is an appropriate velocity model, which allows for the calculation of accurate Green's functions, so that the differences between observed and synthetic seismograms are due to the source of the earthquake rather than to the velocity model. However, calculating accurate synthetic seismograms becomes progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes (M>5.0) excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these events, using 1D velocity models to compute Green's functions works well, and they are well characterized by the seismic moment tensors reported in global catalogs (e.g. USGS fast moment tensor solutions and GCMT). The opposite occurs for small and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle. Accurately modeling the Green's functions for the smaller events in a large heterogeneous area requires 3D or regionalized 1D models. To obtain a rapid estimate of earthquake magnitude, the National Seismological Survey in Mexico (Servicio Sismológico Nacional, SSN) automatically calculates seismic moment tensors for events in the Mexican territory (Franco et al., 2002; Nolasco-Carteño, 2006). However, for intermediate-magnitude and small earthquakes the signal-to-noise ratio can be low for many of the seismic stations, and without careful selection and filtering of the data, obtaining a stable focal mechanism

  13. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  14. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    Science.gov (United States)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models, as it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of the uncertainty assessments obtained by the new Bayesian modularization method and by traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in the uncertainty estimates of the entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian inference. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
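
    The record names the Metropolis-Hastings algorithm without details; below is a minimal, generic random-walk Metropolis-Hastings sampler for a one-dimensional posterior, included only to make the mechanics concrete. The toy target density and tuning constants are arbitrary choices of ours, not the WASMOD setup.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2),
    accept with probability min(1, post(x')/post(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + step * rng.normal()
        lp_new = log_post(x_new)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
            x, lp = x_new, lp_new
        samples[i] = x
    return samples

# Toy target: posterior proportional to a standard normal density.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
print(samples[2000:].mean(), samples[2000:].std())  # ~0 and ~1 after burn-in
```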

  15. A state-space Bayesian framework for estimating biogeochemical transformations using time-lapse geophysical data

    Energy Technology Data Exchange (ETDEWEB)

    Chen, J.; Hubbard, S.; Williams, K.; Pride, S.; Li, L.; Steefel, C.; Slater, L.

    2009-04-15

    We develop a state-space Bayesian framework to combine time-lapse geophysical data with other types of information for quantitative estimation of biogeochemical parameters during bioremediation. We consider characteristics of end-products of biogeochemical transformations as state vectors, which evolve under constraints of local environments through evolution equations, and consider time-lapse geophysical data as available observations, which could be linked to the state vectors through petrophysical models. We estimate the state vectors and their associated unknown parameters over time using Markov chain Monte Carlo sampling methods. To demonstrate the use of the state-space approach, we apply it to complex resistivity data collected during laboratory column biostimulation experiments that were poised to precipitate iron and zinc sulfides during sulfate reduction. We develop a petrophysical model based on sphere-shaped cells to link the sulfide precipitate properties to the time-lapse geophysical attributes and estimate volume fraction of the sulfide precipitates, fraction of the dispersed, sulfide-encrusted cells, mean radius of the aggregated clusters, and permeability over the course of the experiments. Results of the case study suggest that the developed state-space approach permits the use of geophysical datasets for providing quantitative estimates of end-product characteristics and hydrological feedbacks associated with biogeochemical transformations. Although tested here on laboratory column experiment datasets, the developed framework provides the foundation needed for quantitative field-scale estimation of biogeochemical parameters over space and time using direct, but often sparse wellbore data with indirect, but more spatially extensive geophysical datasets.

  16. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    Science.gov (United States)

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. Estimation of initiating event distribution at nuclear power plants by Bayesian procedure

    International Nuclear Information System (INIS)

    Chen Guangming

    1995-01-01

    Initiating events at nuclear power plants, such as human errors or component failures, may lead to a nuclear accident. The study of the frequency of these events, or of the distribution of the failure rate, is necessary in probabilistic risk assessment for nuclear power plants. This paper presents Bayesian modelling methods for the analysis of the distribution of the failure rate. The method can also be utilized in other related fields, especially where the data are sparse. An application of the Bayesian modelling to the analysis of the distribution of the time to recover Loss of Off-Site Power (LOSP) is discussed in the paper.
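
    The record does not state which priors were used; a standard textbook choice for event frequencies of this kind is the conjugate gamma-Poisson update sketched below, in which a Gamma(α, β) prior on the rate λ combined with n events observed over time t yields a Gamma(α + n, β + t) posterior. All numbers are hypothetical.

```python
from scipy import stats

# Prior belief about the initiating-event rate λ (events per reactor-year).
alpha0, beta0 = 0.5, 10.0          # Gamma(shape, rate) prior, a vague choice

# Hypothetical observed data: n events in t reactor-years of operation.
n_events, t_years = 3, 42.0

# Conjugacy: Gamma prior + Poisson likelihood -> Gamma posterior.
alpha_post = alpha0 + n_events
beta_post = beta0 + t_years

post = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print(f"posterior mean rate: {post.mean():.4f} per year")
print(f"90% credible interval: {post.ppf(0.05):.4f} - {post.ppf(0.95):.4f}")
```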

  18. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, with the mean surface and the covariance function specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
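
    The full model in the record is hierarchical with several covariates; as a hedged, stripped-down illustration of the Gaussian-process ingredient alone, the sketch below computes the GP posterior mean and variance for noisy one-dimensional observations under a squared-exponential kernel, with hyperparameters fixed by hand rather than estimated.

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance k(a, b) = var * exp(-(a-b)^2 / (2*length^2))."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(2)
x_train = rng.uniform(0, 5, 20)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=20)
x_test = np.linspace(0, 5, 50)

noise = 0.1 ** 2
K = sq_exp_kernel(x_train, x_train) + noise * np.eye(20)
K_s = sq_exp_kernel(x_test, x_train)

# GP posterior: mean = K_s K^-1 y, cov = K_ss - K_s K^-1 K_s^T.
alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha
cov = sq_exp_kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
print(mean[:5], np.sqrt(np.diag(cov))[:5])
```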

  19. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.

  20. Monograph On Tensor Notations

    Science.gov (United States)

    Sirlin, Samuel W.

    1993-01-01

    Eight-page report describes systems of notation used most commonly to represent tensors of various ranks, with emphasis on tensors in Cartesian coordinate systems. Serves as introductory or refresher text for scientists, engineers, and others familiar with basic concepts of coordinate systems, vectors, and partial derivatives. Indicial tensor, vector, dyadic, and matrix notations, and relationships among them described.

  1. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
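
    As a hedged, minimal rendering of the loop the record describes, the sketch below runs Bayesian optimization with a squared-exponential Gaussian process and the expected-improvement acquisition on a one-dimensional toy objective. The objective, grid, and tuning constants are arbitrary stand-ins for the posterior distributions treated in the paper.

```python
import numpy as np
from scipy.stats import norm

def kernel(a, b, length=0.4):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-6):
    """GP posterior mean/std on a grid, given noisy observations."""
    K = kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = kernel(x_grid, x_obs)
    mean = K_s @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mean, std, best):
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

f = lambda x: np.exp(-(x - 0.7) ** 2 / 0.05)      # toy objective to maximize
x_grid = np.linspace(0, 1, 200)
rng = np.random.default_rng(3)
x_obs = rng.uniform(0, 1, 3)
y_obs = f(x_obs)

for _ in range(10):                                # BO loop: fit GP, maximize EI
    mean, std = gp_posterior(x_obs, y_obs, x_grid)
    x_next = x_grid[np.argmax(expected_improvement(mean, std, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))

print("best x found:", x_obs[np.argmax(y_obs)])    # should approach 0.7
```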

  2. Cartesian tensors an introduction

    CERN Document Server

    Temple, G

    2004-01-01

    This undergraduate text provides an introduction to the theory of Cartesian tensors, defining tensors as multilinear functions of direction, and simplifying many theorems in a manner that lends unity to the subject. The author notes the importance of the analysis of the structure of tensors in terms of spectral sets of projection operators as part of the very substance of quantum theory. He therefore provides an elementary discussion of the subject, in addition to a view of isotropic tensors and spinor analysis within the confines of Euclidean space. The text concludes with an examination of t

  3. Estimation of total Effort and Effort Elapsed in Each Step of Software Development Using Optimal Bayesian Belief Network

    Directory of Open Access Journals (Sweden)

    Fatemeh Zare Baghiabad

    2017-09-01

    Full Text Available The difficulty of accurately estimating the effort needed for software development has made software effort estimation a challenging issue. Besides the estimation of total effort, determining the effort elapsed in each software development step is very important, because any mistake in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops between development steps are considered, with return rates that differ from project to project. The different return rates help us determine the percentage of effort elapsed in each software development step distinctly. Moreover, the error measure resulting from optimized effort estimation and the optimal coefficients to modify the model are sought. A comparison between the proposed model and other models showed that the model is capable of estimating the total effort with high accuracy (with a marginal error of about 0.114) and of estimating the effort elapsed in each software development step.

  4. Extraction of features from sleep EEG for Bayesian assessment of brain development.

    Directory of Open Access Journals (Sweden)

    Vitaly Schetinin

    Full Text Available Brain development can be evaluated by experts analysing age-related patterns in sleep electroencephalograms (EEG). Natural variations in the patterns, noise, and artefacts affect the evaluation accuracy as well as the experts' agreement. Knowledge of the predictive posterior distribution allows experts to estimate the confidence intervals within which decisions are distributed. The Bayesian approach to probabilistic inference has provided accurate estimates of such intervals of interest. In this paper we propose a new feature extraction technique for Bayesian assessment and estimation of the predictive distribution in a case of newborn brain development assessment. The new EEG features are verified within the Bayesian framework on a large EEG data set including 1,100 recordings made from newborns in 10 age groups. The proposed features are highly correlated with brain maturation, and their use increases the assessment accuracy.

  5. MATLAB tensor classes for fast algorithm prototyping.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
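
    The classes above are MATLAB; for readers who want to see the matricization operation the record mentions without the toolbox, here is a hedged numpy equivalent of mode-n unfolding, folding, and a tensor-times-matrix product built on them. The moveaxis-then-reshape convention used here may order columns differently from the MATLAB classes.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: mode-n fibres become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: rebuild the tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_multiply(T, U, mode):
    """Tensor-times-matrix along a mode, via matricization: fold(U @ unfold(T))."""
    new_shape = list(T.shape)
    new_shape[mode] = U.shape[0]
    return fold(U @ unfold(T, mode), mode, tuple(new_shape))

T = np.arange(24.0).reshape(2, 3, 4)
assert np.allclose(fold(unfold(T, 1), 1, T.shape), T)   # unfold/fold round-trip
U = np.ones((5, 3))
print(mode_multiply(T, U, 1).shape)                     # (2, 5, 4)
```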

  6. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    Science.gov (United States)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representation of the collected potentials, using pointwise sums of shifted canonical vectors representing the single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N³ log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks of the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar products with a function, integration or differentiation, which can be performed easily in tensor arithmetic on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
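
    The key trick of summing shifted potentials by summing 1D factor vectors is easiest to see for a separable generating function. The sketch below therefore uses a Gaussian (exactly rank-1 separable) in place of the Newton kernel and checks that the lattice sum of shifted copies equals the outer product of three 1D sums; this is our simplified illustration, not the paper's canonical-tensor machinery.

```python
import numpy as np

N, L, h = 64, 4, 0.25                    # grid size, lattice size, lattice spacing
x = np.linspace(-2, 2, N)

def gauss_1d(shift):
    return np.exp(-((x - shift) ** 2))

shifts = h * (np.arange(L) - (L - 1) / 2)

# Direct summation: L^3 shifted 3D Gaussians, cost O(L^3 * N^3).
direct = np.zeros((N, N, N))
for a in shifts:
    for b in shifts:
        for c in shifts:
            direct += np.einsum('i,j,k->ijk', gauss_1d(a), gauss_1d(b), gauss_1d(c))

# Assembled summation: sum the 1D vectors first, then one outer product, O(L*N + N^3).
sa = sum(gauss_1d(s) for s in shifts)
assembled = np.einsum('i,j,k->ijk', sa, sa, sa)

print(np.allclose(direct, assembled))    # True: the lattice sum factorizes
```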

  7. Modelling of population dynamics of red king crab using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Bakanev Sergey ...

    2012-10-01

    Modeling population dynamics based on the Bayesian approach makes it possible to successfully resolve the above issues. The integration of the data from various studies into a unified model based on the Bayesian parameter estimation method provides a much more detailed description of the processes occurring in the population.

  8. The use of conflicts in searching Bayesian networks

    OpenAIRE

    Poole, David L.

    2013-01-01

    This paper discusses how conflicts (as used by the consistency-based diagnosis community) can be adapted to be used in a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian Networks. This is an "anytime" algorithm, that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit probability distributions for efficiency; this algorith...

  9. Killing-Yano tensors, rank-2 Killing tensors, and conserved quantities in higher dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Krtous, Pavel [Institute of Theoretical Physics, Charles University, V Holesovickach 2, Prague (Czech Republic); Kubiznak, David [Institute of Theoretical Physics, Charles University, V Holesovickach 2, Prague (Czech Republic); Page, Don N. [Theoretical Physics Institute, University of Alberta, Edmonton T6G 2G7, Alberta (Canada); Frolov, Valeri P. [Theoretical Physics Institute, University of Alberta, Edmonton T6G 2G7, Alberta (Canada)

    2007-02-15

    From the metric and one Killing-Yano tensor of rank D-2 in any D-dimensional spacetime with such a principal Killing-Yano tensor, we show how to generate k = [(D+1)/2] Killing-Yano tensors, of rank D-2j for all 0 ≤ j ≤ k-1, and k rank-2 Killing tensors, giving k constants of geodesic motion that are in involution. For the example of the Kerr-NUT-AdS spacetime (hep-th/0604125) with its principal Killing-Yano tensor (gr-qc/0610144), these constants and the constants from the k Killing vectors give D independent constants in involution, making the geodesic motion completely integrable (hep-th/0611083). The constants of motion are also related to the constants recently obtained in the separation of the Hamilton-Jacobi and Klein-Gordon equations (hep-th/0611245)

  10. Killing-Yano tensors, rank-2 Killing tensors, and conserved quantities in higher dimensions

    International Nuclear Information System (INIS)

    Krtous, Pavel; Kubiznak, David; Page, Don N.; Frolov, Valeri P.

    2007-01-01

    From the metric and one Killing-Yano tensor of rank D-2 in any D-dimensional spacetime with such a principal Killing-Yano tensor, we show how to generate k = [(D+1)/2] Killing-Yano tensors, of rank D-2j for all 0 ≤ j ≤ k-1, and k rank-2 Killing tensors, giving k constants of geodesic motion that are in involution. For the example of the Kerr-NUT-AdS spacetime (hep-th/0604125) with its principal Killing-Yano tensor (gr-qc/0610144), these constants and the constants from the k Killing vectors give D independent constants in involution, making the geodesic motion completely integrable (hep-th/0611083). The constants of motion are also related to the constants recently obtained in the separation of the Hamilton-Jacobi and Klein-Gordon equations (hep-th/0611245)

  11. Uncertainty Management for Diagnostics and Prognostics of Batteries using Bayesian Techniques

    Science.gov (United States)

    Saha, Bhaskar; Goebel, Kai

    2007-01-01

    Uncertainty management has always been the key hurdle faced by diagnostics and prognostics algorithms. A Bayesian treatment of this problem provides an elegant and theoretically sound approach to the modern Condition-Based Maintenance (CBM)/Prognostic Health Management (PHM) paradigm. The application of Bayesian techniques to regression and classification in the form of the Relevance Vector Machine (RVM), and to state estimation as in Particle Filters (PF), provides a powerful tool to integrate the diagnosis and prognosis of battery health. The RVM, which is a Bayesian treatment of the Support Vector Machine (SVM), is used for model identification, while the PF framework uses the learnt model, statistical estimates of noise and anticipated operational conditions to provide estimates of remaining useful life (RUL) in the form of a probability density function (PDF). This type of prognostics generates significant added value for the management of any operation involving electrical systems.

  12. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...

  13. Monitoring of the tensor polarization of high energy deuteron beams

    Energy Technology Data Exchange (ETDEWEB)

    Zolin, L S; Litvinenko, A G; Pilipenko, Yu K; Reznikov, S G; Rukoyatkin, P A; Fimushkin, V V

    1998-12-01

    The method of determining the tensor component of high energy polarized deuteron beams, based on measuring the tensor analyzing power in the deuteron stripping reaction, is discussed. This method is convenient for monitoring during long runs on tensor polarized deuteron beams. The method was tested in a 5-day run at the LHE JINR accelerator with 3 and 9 GeV/c tensor polarized deuterons. The results made it possible to estimate the stability of the beam polarization in time. 5 refs., 4 figs., 1 tab.

  14. Bayesian estimation of direct and correlated responses to selection on linear or ratio expressions of feed efficiency in pigs

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Varley, Patrick Francis; Jensen, Just

    2018-01-01

    meat percentage (LMP) along with the derived traits of RFI and FCR; and (3) deriving Bayesian estimates of direct and correlated responses to selection on RFI, FCR, ADG, ADFI, and LMP. Response to selection was defined as the difference in additive genetic mean of the selected top individuals, expected......, respectively. Selection against RFIG showed a direct response of − 0.16 kg/d and correlated responses of − 0.16 kg/kg for FCR and − 0.15 kg/d for ADFI, with no effect on other production traits. Selection against FCR resulted in a direct response of − 0.17 kg/kg and correlated responses of − 0.14 kg/d for RFIG......, − 0.18 kg/d for ADFI, and 0.98% for LMP. Conclusions: The Bayesian methodology developed here enables prediction of breeding values for FCR and RFI from a single multi-variate model. In addition, we derived posterior distributions of direct and correlated responses to selection. Genetic parameter...

  15. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods, as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Nonlinear Bayesian Estimation of BOLD Signal under Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Ali Fahim Khan

    2015-01-01

    Full Text Available Modeling the blood oxygenation level dependent (BOLD) signal has been a subject of study for over a decade in the neuroimaging community. Inspired by fluid dynamics, the hemodynamic model provides a plausible and convincing interpretation of the BOLD signal by amalgamating the effects of dynamic physiological changes in blood oxygenation, cerebral blood flow and volume. The nonautonomous, nonlinear set of differential equations of the hemodynamic model constitutes the process model, while the weighted nonlinear sum of the physiological variables forms the measurement model. Plagued by various noise sources, the time-series fMRI measurement data are mostly assumed to be affected by additive Gaussian noise. Though more feasible, this assumption may cause the designed filter to perform poorly if made to work in a non-Gaussian environment. In this paper, we present a data assimilation scheme that assumes additive non-Gaussian noise, namely the e-mixture noise, affecting the measurements. The proposed filter MAGSF and the celebrated EKF are put to the test by performing joint optimal Bayesian filtering to estimate both the states and the parameters governing the hemodynamic model under a non-Gaussian environment. Analyses using both synthetic and real data reveal the superior performance of the MAGSF as compared to the EKF.

  17. Bayesian structural inference for hidden processes

    Science.gov (United States)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
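
    BSI itself enumerates ɛ-machine topologies; as a much simpler, hedged stand-in for its estimation step, the sketch below infers the transition probabilities of a two-state Markov chain with Dirichlet posteriors over each row and then forms a point estimate of the Shannon entropy rate. The process and priors are toy choices of ours.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a 2-state Markov chain (golden-mean-like: a 1 is never followed by 1).
P_true = np.array([[0.5, 0.5],
                   [1.0, 0.0]])
states = [0]
for _ in range(5000):
    states.append(rng.choice(2, p=P_true[states[-1]]))

# Count transitions and form the Dirichlet posterior for each row.
counts = np.zeros((2, 2))
for s, t in zip(states, states[1:]):
    counts[s, t] += 1
alpha = counts + 1.0                                # Dirichlet(1,1) prior per row

P_mean = alpha / alpha.sum(axis=1, keepdims=True)   # posterior mean transitions

# Point estimate of the Shannon entropy rate under the stationary distribution.
evals, evecs = np.linalg.eig(P_mean.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
logs = np.where(P_mean > 0, np.log2(P_mean), 0.0)
h = -np.sum(pi[:, None] * P_mean * logs)
print(P_mean, h)   # P near the truth; h near 2/3 bit for this process
```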

  18. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...

  19. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the value of the standard deviation in the opposite direction. For perfect information on the prior distribution, the Bayesian estimation methods are better than those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range

  20. Tensor-based spatiotemporal saliency detection

    Science.gov (United States)

    Dou, Hao; Li, Bin; Deng, Qianqian; Zhang, LiRui; Pan, Zhihong; Tian, Jinwen

    2018-03-01

    This paper proposes an effective tensor-based spatiotemporal saliency computation model for saliency detection in videos. First, we construct the tensor representation of video frames. Then, the spatiotemporal saliency can be directly computed by the tensor distance between different tensors, which can preserve the complete temporal and spatial structure information of object in the spatiotemporal domain. Experimental results demonstrate that our method can achieve encouraging performance in comparison with the state-of-the-art methods.

  1. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Science.gov (United States)

    Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because frequency-selective fading introduces severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by only a few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to capture the dominant channel taps without reporting posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that arises from observation noise or correlation among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods. PMID:24983012
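
    For reference, a minimal sketch of the orthogonal matching pursuit baseline mentioned above, recovering a k-sparse channel h from pilot observations y = A h + n. The matrix sizes, sparsity level, and noise are illustrative assumptions; the record's Bayesian estimator is not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse h from y ~ A @ h."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        h_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ h_s           # update residual
    h = np.zeros(A.shape[1])
    h[support] = h_s
    return h

rng = np.random.default_rng(2)
n_pilot, n_taps, k = 64, 128, 4
A = rng.standard_normal((n_pilot, n_taps)) / np.sqrt(n_pilot)   # training matrix
h_true = np.zeros(n_taps)
h_true[rng.choice(n_taps, k, replace=False)] = rng.standard_normal(k)
y = A @ h_true + 0.01 * rng.standard_normal(n_pilot)
h_hat = omp(A, y, k)
print(np.linalg.norm(h_hat - h_true))   # small recovery error
```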

  2. Estimating prevalence and diagnostic test characteristics of bovine cysticercosis in Belgium in the absence of a 'gold standard' reference test using a Bayesian approach.

    Science.gov (United States)

    Jansen, Famke; Dorny, Pierre; Gabriël, Sarah; Eichenberger, Ramon Marc; Berkvens, Dirk

    2018-04-30

    A Bayesian model was developed to estimate the prevalence and diagnostic test characteristics of bovine cysticercosis (Taenia saginata) by combining the results of four imperfect tests. Samples from 612 bovine carcases that were found negative for cysticercosis during routine meat inspection at three Belgian slaughterhouses underwent enhanced meat inspection (additional incisions in the heart), dissection of the predilection sites, B158/B60 Ag-ELISA and ES Ab-ELISA. This Bayesian approach allows prior expert opinion to be combined with experimental data to estimate the true prevalence of bovine cysticercosis in the absence of a gold standard test. A first model (based on a multinomial distribution and including all possible interactions between the individual tests) required estimation of 31 parameters, while the data allowed only 15 parameters to be estimated. Including prior expert information about specificity and sensitivity reduced the number of parameters to be estimated to 8 in the optimal model. The estimated bovine cysticercosis prevalence was 33.9% (95% credibility interval: 27.7-44.4%), while the apparent prevalence based on meat inspection is only 0.23%. The test performances were estimated as follows (sensitivity (Se) - specificity (Sp)): enhanced meat inspection (Se 2.87% - Sp 100%), dissection of predilection sites (Se 69.8% - Sp 100%), Ag-ELISA (Se 26.9% - Sp 99.4%), Ab-ELISA (Se 13.8% - Sp 92.9%). Copyright © 2018 Elsevier B.V. All rights reserved.
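
    A minimal sketch of the core idea, reduced to a single imperfect test with known sensitivity and specificity: a grid posterior over the true prevalence under a flat prior. The positive count below is an illustrative assumption chosen so the posterior lands roughly near the record's 33.9% estimate; the paper's four-test multinomial model is not reproduced.

```python
import numpy as np
from math import lgamma

def log_binom(n, x):
    """Log of the binomial coefficient n choose x."""
    return lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)

def true_prevalence_posterior(x_pos, n, se, sp, grid=2001):
    """Posterior over true prevalence p, flat prior, single imperfect test.
    Apparent prevalence: ap = p*se + (1 - p)*(1 - sp)."""
    p = np.linspace(0, 1, grid)
    ap = p * se + (1 - p) * (1 - sp)
    loglik = (log_binom(n, x_pos) + x_pos * np.log(ap + 1e-12)
              + (n - x_pos) * np.log(1 - ap + 1e-12))
    post = np.exp(loglik - loglik.max())
    return p, post / post.sum()

# e.g. a dissection-like test (Se 69.8%, Sp 100%) finding 145 of 612 positive
p, post = true_prevalence_posterior(x_pos=145, n=612, se=0.698, sp=1.0)
print(f"posterior mean prevalence ~ {(p * post).sum():.3f}")
```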

  3. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  4. Naive Bayesian classifiers for multinomial features: a theoretical analysis

    CSIR Research Space (South Africa)

    Van Dyk, E

    2007-11-01

    Full Text Available The authors investigate the use of naive Bayesian classifiers for multinomial feature spaces and derive error estimates for these classifiers. The error analysis is done by developing a mathematical model to estimate the probability density...
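
    A minimal sketch of a multinomial naive Bayes classifier of the kind analyzed in the record, with Laplace smoothing; the toy count data and smoothing constant are illustrative assumptions.

```python
import numpy as np

class MultinomialNB:
    """Minimal multinomial naive Bayes with Laplace smoothing."""
    def fit(self, X, y, alpha=1.0):
        self.classes = np.unique(y)
        self.log_prior = np.log(np.array([(y == c).mean() for c in self.classes]))
        counts = np.array([X[y == c].sum(axis=0) + alpha for c in self.classes])
        self.log_cond = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self

    def predict(self, X):
        scores = X @ self.log_cond.T + self.log_prior   # log posterior up to a constant
        return self.classes[np.argmax(scores, axis=1)]

# toy usage: word-count features for two classes
rng = np.random.default_rng(3)
X = np.vstack([rng.poisson([5, 1, 1], (20, 3)), rng.poisson([1, 5, 1], (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(MultinomialNB().fit(X, y).predict(X[:3]))
```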

  5. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimation of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data, so that no user intervention is required. The method requires a priori knowledge of the block partition and utilizes a greedy approach with order-recursive updates of its metrics to find the most dominant sparse supports and determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.

  6. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...

  7. Generalized dielectric permittivity tensor

    International Nuclear Information System (INIS)

    Borzdov, G.N.; Barkovskii, L.M.; Fedorov, F.I.

    1986-01-01

    The authors deal with the question of what is to be done with the formalism of the electrodynamics of dispersive media, based on the introduction of dielectric-permittivity tensors for purely harmonic fields, when Voigt waves and waves of more general form exist. An attempt is made to broaden and generalize the formalism to take into account dispersion of waves of the given type. In dispersive media, the polarization, magnetization, and conduction current-density vectors at a given point and time are determined by the values of the electromagnetic field vectors in the vicinity of this point (spatial dispersion) at preceding instants of time (time dispersion). The dielectric-permittivity tensor and other tensors of electrodynamic parameters of the medium are introduced in terms of a set of evolution operators rather than a set of harmonic functions. It is noted that a magnetic-permeability tensor and an elastic-modulus tensor may be introduced for an acoustic field in dispersive anisotropic media with coupling equations of general form

  8. Tensor analysis for physicists

    CERN Document Server

    Schouten, J A

    1989-01-01

    This brilliant study by a famed mathematical scholar and former professor of mathematics at the University of Amsterdam integrates a concise exposition of the mathematical basis of tensor analysis with admirably chosen physical examples of the theory. The first five chapters incisively set out the mathematical theory underlying the use of tensors. The tensor algebra in EN and RN is developed in Chapters I and II. Chapter II introduces a sub-group of the affine group, then deals with the identification of quantities in EN. The tensor analysis in XN is developed in Chapter IV. In chapters VI through IX, Professor Schouten presents applications of the theory that are both intrinsically interesting and good examples of the use and advantages of the calculus. Chapter VI, intimately connected with Chapter III, shows that the dimensions of physical quantities depend upon the choice of the underlying group, and that tensor calculus is the best instrument for dealing with the properties of anisotropic media. In Chapte...

  9. Sparse alignment for robust tensor learning.

    Science.gov (United States)

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  10. TensorPack: a Maple-based software package for the manipulation of algebraic expressions of tensors in general relativity

    International Nuclear Information System (INIS)

    Huf, P A; Carminati, J

    2015-01-01

    In this paper we: (1) introduce TensorPack, a software package for the algebraic manipulation of tensors in covariant index format in Maple; (2) briefly demonstrate the use of the package with an orthonormal tensor proof of the shearfree conjecture for dust. TensorPack is based on the Riemann and Canon tensor software packages and uses their functions to express tensors in an indexed covariant format. TensorPack uses a string representation as input and provides functions for output in index form. It extends the functionality to basic algebra of tensors, substitution, covariant differentiation, contraction, raising/lowering indices, symmetry functions and other accessory functions. The output can be merged with text in the Maple environment to create a full working document with embedded dynamic functionality. The package offers potential for manipulation of indexed algebraic tensor expressions in a flexible software environment. (paper)

  11. Decomposing tensors with structured matrix factors reduces to rank-1 approximations

    DEFF Research Database (Denmark)

    Comon, Pierre; Sørensen, Mikael; Tsigaridas, Elias

    2010-01-01

    Tensor decompositions permit estimation, in a deterministic way, of the parameters in a multi-linear model. Applications have already been pointed out in antenna array processing and digital communications, among others, and are extremely attractive provided some diversity at the receiver is available... As opposed to the widely used ALS algorithm, non-iterative algorithms are proposed in this paper to compute the required tensor decomposition into a sum of rank-1 terms, when some factor matrices enjoy some structure, such as block-Hankel, triangular, band, etc.

  12. Multisnapshot Sparse Bayesian Learning for DOA

    DEFF Research Database (Denmark)

    Gerstoft, Peter; Mecklenbrauker, Christoph F.; Xenaki, Angeliki

    2016-01-01

    The directions of arrival (DOA) of plane waves are estimated from multisnapshot sensor array data using sparse Bayesian learning (SBL). The prior for the source amplitudes is assumed independent zero-mean complex Gaussian distributed with hyperparameters, the unknown variances (i.e., the source...

  13. Unique characterization of the Bel-Robinson tensor

    International Nuclear Information System (INIS)

    Bergqvist, G; Lankinen, P

    2004-01-01

    We prove that a completely symmetric and trace-free rank-4 tensor is, up to sign, a Bel-Robinson-type tensor, i.e., the superenergy tensor of a tensor with the same algebraic symmetries as the Weyl tensor, if and only if it satisfies a certain quadratic identity. This may be seen as the first Rainich theory result for rank-4 tensors

  14. Data quality in diffusion tensor imaging studies of the preterm brain: a systematic review.

    Science.gov (United States)

    Pieterman, Kay; Plaisier, Annemarie; Govaert, Paul; Leemans, Alexander; Lequin, Maarten H; Dudink, Jeroen

    2015-08-01

    To study early neurodevelopment in preterm infants, evaluation of brain maturation and injury is increasingly performed using diffusion tensor imaging, for which the reliability of the underlying data is paramount. We reviewed the literature to evaluate acquisition and processing methodology in diffusion tensor imaging studies of preterm infants. We searched the Embase, Medline, Web of Science and Cochrane databases for relevant papers published between 2003 and 2013. The following keywords were included in our search: prematurity, neuroimaging, brain, and diffusion tensor imaging. We found 74 diffusion tensor imaging studies in preterm infants meeting our inclusion criteria. There was wide variation in acquisition and processing methodology, and we found incomplete reporting of these settings. Nineteen studies (26%) reported the use of neonatal hardware. Data quality assessment was not reported in 13 (18%) studies. Artefact correction and data exclusion were not reported in 33 (45%) and 18 (24%) studies, respectively. Tensor estimation algorithms were reported in 56 (76%) studies but were often suboptimal. Diffusion tensor imaging acquisition and processing settings are incompletely described in the current literature, vary considerably, and frequently do not meet the highest standards.

  15. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    Science.gov (United States)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley, and we demonstrate quantitatively the uncertainty associated with those estimates, showing that Bayesian inverse methods can provide quantitative uncertainty alongside estimates of near surface conductivity.
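
    A minimal sketch of the Bayesian-inversion idea on a toy problem: fixed-dimension random-walk Metropolis sampling of model parameters under a linear stand-in forward operator, yielding both estimates and uncertainties. The trans-dimensional machinery of the study is not reproduced; the forward model, prior, and noise level are assumptions.

```python
import numpy as np

def metropolis(logpost, x0, steps=5000, prop_scale=0.1, seed=4):
    """Random-walk Metropolis sampler: returns posterior samples of x."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    lp = logpost(x)
    samples = []
    for _ in range(steps):
        x_new = x + prop_scale * rng.standard_normal(x.size)
        lp_new = logpost(x_new)
        if np.log(rng.random()) < lp_new - lp:   # accept/reject step
            x, lp = x_new, lp_new
        samples.append(x.copy())
    return np.array(samples)

# toy "inversion": infer log-conductivities m of a 2-layer model from noisy
# data produced by a stand-in linear forward operator G (assumed, not the paper's)
rng = np.random.default_rng(5)
G = rng.standard_normal((20, 2))
m_true = np.array([0.5, -1.0])
d = G @ m_true + 0.05 * rng.standard_normal(20)
logpost = lambda m: -0.5 * np.sum((d - G @ m) ** 2) / 0.05**2 - 0.5 * np.sum(m**2)
samples = metropolis(logpost, x0=[0.0, 0.0])
print(samples[1000:].mean(axis=0), samples[1000:].std(axis=0))  # estimate + uncertainty
```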

  16. Tensor Product of Polygonal Cell Complexes

    OpenAIRE

    Chien, Yu-Yen

    2017-01-01

    We introduce the tensor product of polygonal cell complexes, which interacts nicely with the tensor product of link graphs of complexes. We also develop the unique factorization property of polygonal cell complexes with respect to the tensor product, and study the symmetries of tensor products of polygonal cell complexes.

  17. Mean template for tensor-based morphometry using deformation tensors.

    Science.gov (United States)

    Leporé, Natasha; Brun, Caroline; Pennec, Xavier; Chou, Yi-Yu; Lopez, Oscar L; Aizenstein, Howard J; Becker, James T; Toga, Arthur W; Thompson, Paul M

    2007-01-01

    Tensor-based morphometry (TBM) studies anatomical differences between brain images statistically, to identify regions that differ between groups, over time, or correlate with cognitive or clinical measures. Using a nonlinear registration algorithm, all images are mapped to a common space, and statistics are most commonly performed on the Jacobian determinant (local expansion factor) of the deformation fields. In previous work, it was shown that the detection sensitivity of the standard TBM approach could be increased by using the full deformation tensors in a multivariate statistical analysis. Here we set out to improve the common space itself, by choosing the shape that minimizes a natural metric on the deformation tensors from that space to the population of control subjects. This method avoids statistical bias and should ease nonlinear registration of new subjects' data to a template that is 'closest' to all subjects' anatomies. As deformation tensors are symmetric positive-definite matrices and do not form a vector space, all computations are performed in the log-Euclidean framework. The control brain B that is already the closest to 'average' is found. A gradient descent algorithm is then used to perform the minimization that iteratively deforms this template and obtains the mean shape. We apply our method to map the profile of anatomical differences in a dataset of 26 HIV/AIDS patients and 14 controls, via a log-Euclidean Hotelling's T2 test on the deformation tensors. These results are compared to the ones found using the 'best' control, B. Statistics on both shapes are evaluated using cumulative distribution functions of the p-values in maps of inter-group differences.
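
    A minimal sketch of the log-Euclidean computation the record relies on: the mean of symmetric positive-definite deformation tensors is taken by averaging matrix logarithms and exponentiating back. The toy tensors are illustrative assumptions.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean: average the matrix logs, then exponentiate back."""
    return spd_exp(np.mean([spd_log(S) for S in tensors], axis=0))

# toy deformation tensors: A @ A.T + 0.5 I is symmetric positive-definite
rng = np.random.default_rng(6)
tensors = [(lambda A: A @ A.T + 0.5 * np.eye(3))(rng.standard_normal((3, 3)))
           for _ in range(5)]
print(log_euclidean_mean(tensors))
```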

  18. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems

  19. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  20. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
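
    A minimal sketch of plain nested sampling for evidence estimation, as used inside the HNS algorithm described in the records above. The constrained-sampling step is done here by rejection from the prior rather than the paper's HMC step, and the toy likelihood and prior are assumptions.

```python
import numpy as np

def nested_sampling(loglik, prior_sample, n_live=100, n_iter=500, seed=7):
    """Toy nested sampling estimate of the log-evidence log Z.
    New live points are drawn by rejection from the prior, which is fine for
    toy problems; real codes replace this with a constrained sampler such as
    the HMC step used in the paper."""
    rng = np.random.default_rng(seed)
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    live_ll = np.array([loglik(x) for x in live])
    logZ, logX = -np.inf, 0.0                      # running evidence, prior volume
    log_shrink = np.log1p(-np.exp(-1.0 / n_live))  # log(1 - e^{-1/N})
    for _ in range(n_iter):
        worst = int(np.argmin(live_ll))
        logZ = np.logaddexp(logZ, live_ll[worst] + logX + log_shrink)
        logX -= 1.0 / n_live                       # expected volume shrinkage
        threshold = live_ll[worst]                 # constrained-sampling floor
        while True:                                # rejection sampling from prior
            x = prior_sample(rng)
            if loglik(x) > threshold:
                break
        live[worst], live_ll[worst] = x, loglik(x)
    # add the contribution of the remaining live points
    logZ = np.logaddexp(logZ, logX + live_ll.max()
                        + np.log(np.mean(np.exp(live_ll - live_ll.max()))))
    return logZ

# toy check: unit Gaussian likelihood, uniform prior on [-5, 5]^2 -> Z ~ 1/100
loglik = lambda x: -0.5 * np.sum(x**2) - np.log(2 * np.pi)
prior_sample = lambda rng: rng.uniform(-5, 5, 2)
print(nested_sampling(loglik, prior_sample))   # ~ log(0.01) ~ -4.6
```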

  1. Notes on super Killing tensors

    Energy Technology Data Exchange (ETDEWEB)

    Howe, P.S. [Department of Mathematics, King’s College London, The Strand, London WC2R 2LS (United Kingdom); Lindström, Ulf [Department of Physics and Astronomy, Theoretical Physics, Uppsala University, SE-751 20 Uppsala (Sweden); Theoretical Physics, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2016-03-14

    The notion of a Killing tensor is generalised to a superspace setting. Conserved quantities associated with these are defined for superparticles and Poisson brackets are used to define a supersymmetric version of the even Schouten-Nijenhuis bracket. Superconformal Killing tensors in flat superspaces are studied for spacetime dimensions 3,4,5,6 and 10. These tensors are also presented in analytic superspaces and super-twistor spaces for 3,4 and 6 dimensions. Algebraic structures associated with superconformal Killing tensors are also briefly discussed.

  2. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes...

  3. Bayesian dynamic mediation analysis.

    Science.gov (United States)

    Huang, Jing; Yuan, Ying

    2017-12-01

    Most existing methods for mediation analysis assume that mediation is a stationary, time-invariant process, which overlooks the inherently dynamic nature of many human psychological processes and behavioral activities. In this article, we consider mediation as a dynamic process that continuously changes over time. We propose Bayesian multilevel time-varying coefficient models to describe and estimate such dynamic mediation effects. By taking the nonparametric penalized spline approach, the proposed method is flexible and able to accommodate any shape of the relationship between time and mediation effects. Simulation studies show that the proposed method works well and faithfully reflects the true nature of the mediation process. By modeling mediation effect nonparametrically as a continuous function of time, our method provides a valuable tool to help researchers obtain a more complete understanding of the dynamic nature of the mediation process underlying psychological and behavioral phenomena. We also briefly discuss an alternative approach of using dynamic autoregressive mediation model to estimate the dynamic mediation effect. The computer code is provided to implement the proposed Bayesian dynamic mediation analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Making tensor factorizations robust to non-gaussian noise.

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Eric C. (Rice University, Houston, TX); Kolda, Tamara Gibson

    2011-03-01

    Tensors are multi-way arrays, and the CANDECOMP/PARAFAC (CP) tensor factorization has found application in many different domains. The CP model is typically fit using a least squares objective function, which is a maximum likelihood estimate under the assumption of independent and identically distributed (i.i.d.) Gaussian noise. We demonstrate that this loss function can be highly sensitive to non-Gaussian noise. Therefore, we propose a loss function based on the 1-norm because it can accommodate both Gaussian and grossly non-Gaussian perturbations. We also present an alternating majorization-minimization (MM) algorithm for fitting a CP model using our proposed loss function (CPAL1) and compare its performance to the workhorse algorithm for fitting CP models, CP alternating least squares (CPALS).
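
    For reference, a minimal sketch of the CP-ALS baseline (least squares fitting) that the record's robust CPAL1 method is compared against; the tensor sizes, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (C-order flattening)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (n x R) and B (m x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=100, seed=8):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((T.shape[i], rank)) for i in range(3))
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# toy usage: recover a noise-free rank-3 tensor
rng = np.random.default_rng(9)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (6, 7, 8))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # ~0, barring a local minimum
```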

  5. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu’s 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.

  6. Bayesian Nonlinear Assimilation of Eulerian and Lagrangian Coastal Flow Data

    Science.gov (United States)

    2015-09-30

    Lagrangian Coastal Flow Data. Dr. Pierre F.J. Lermusiaux, Department of Mechanical Engineering, Center for Ocean Science and Engineering, Massachusetts... Develop and apply theory, schemes and computational systems for rigorous Bayesian nonlinear assimilation of Eulerian and Lagrangian coastal flow data... coastal ocean fields, both in Eulerian and Lagrangian forms. Further develop and implement our GMM-DO schemes for robust Bayesian nonlinear estimation.

  7. A BAYESIAN ESTIMATE OF THE CMB–LARGE-SCALE STRUCTURE CROSS-CORRELATION

    Energy Technology Data Exchange (ETDEWEB)

    Moura-Santos, E. [Instituto de Física, Universidade de São Paulo, Rua do Matão trav. R 187, 05508-090, São Paulo—SP (Brazil); Carvalho, F. C. [Departamento de Física, Universidade do Estado do Rio Grande do Norte, 59610-210, Mossoró-RN (Brazil); Penna-Lima, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Novaes, C. P.; Wuensche, C. A., E-mail: emoura@if.usp.br, E-mail: fabiocabral@uern.br, E-mail: pennal@apc.in2p3.fr, E-mail: cawuenschel@das.inpe.br, E-mail: camilanovaes@on.br [Observatório Nacional, Rua General José Cristino 77, São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2016-08-01

    Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs–Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB–LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of a CMB temperature map and a galaxy contrast map, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP 9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.

  8. Automated gravity gradient tensor inversion for underwater object detection

    International Nuclear Information System (INIS)

    Wu, Lin; Tian, Jinwen

    2010-01-01

    Underwater abnormal object detection is a current need for the navigation security of autonomous underwater vehicles (AUVs). In this paper, an automated gravity gradient tensor inversion algorithm is proposed for passive underwater object detection. Full-tensor gravity gradient anomalies induced by an object in the surveyed area can be measured with the technique of gravity gradiometry on an AUV. The automated algorithm then uses the anomalies and an inverse method to estimate the mass and barycentre location of the arbitrary-shaped object. A few tests on simple synthetic models are illustrated in order to evaluate the feasibility and accuracy of the new algorithm. Moreover, the method is applied to a complicated model of an abnormal object with gradiometer and AUV noise, and interference from a neighbouring illusive smaller object. In all cases tested, the estimated mass and barycentre location parameters are found to be in good agreement with the actual values

  9. Tensor voting for image correction by global and local intensity alignment.

    Science.gov (United States)

    Jia, Jiaya; Tang, Chi-Keung

    2005-01-01

    This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.

  10. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  11. Tensor Train Neighborhood Preserving Embedding

    Science.gov (United States)

    Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin

    2018-05-01

    In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate the trade-off among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off in classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.

  12. Experiences in applying Bayesian integrative models in interdisciplinary modeling: the computational and human challenges

    DEFF Research Database (Denmark)

    Kuikka, Sakari; Haapasaari, Päivi Elisabet; Helle, Inari

    2011-01-01

    We review the experience obtained in using integrative Bayesian models in interdisciplinary analysis focusing on sustainable use of marine resources and environmental management tasks. We have applied Bayesian models to both fisheries and environmental risk analysis problems. Bayesian belief... ...be time consuming and research projects can be difficult to manage due to unpredictable technical problems related to parameter estimation. Biology, sociology and environmental economics have their own scientific traditions. Bayesian models are becoming traditional tools in fisheries biology, where...

  13. Estimating extreme river discharges in Europe through a Bayesian network

    Science.gov (United States)

    Paprotny, Dominik; Morales-Nápoles, Oswaldo

    2017-06-01

    Large-scale hydrological modelling of flood hazards requires adequate extreme discharge data. In practice, models based on physics are applied alongside those utilizing only statistical analysis. The former require enormous computational power, while the latter are mostly limited in accuracy and spatial coverage. In this paper we introduce an alternative, statistical approach based on Bayesian networks (BNs), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables representing the geographical characteristics of their catchments. Annual maxima of daily discharges from more than 1800 river gauges (stations with catchment areas ranging from 1.4 to 807 000 km2) were collected, together with information on terrain, land use and local climate. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that using this method, mean annual maxima and return periods of discharges could be estimated with an accuracy similar to existing studies using physical models for Europe and better than a comparable global statistical model. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods, and remains the same in a split-sample validation. Though discharge prediction under climate change is not the main scope of this paper, the BN was applied to a large domain covering all sizes of rivers in the continent, both for present and future climate, as an example. Results show substantial variation in the influence of climate change on river discharges. The model can be used to provide quick estimates of extreme discharges at any location for the purpose of obtaining input information for hydraulic modelling.
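
    A minimal two-variable sketch of the copula idea behind the non-parametric BN: margins are handled empirically and dependence through a normal-scores (Gaussian copula) correlation, giving a conditional median of discharge given catchment area. The variable choices, toy data, and use of a single edge are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_conditional_median(x, y, x_query, rho=None):
    """Bivariate Gaussian-copula regression: median of y given x.
    Margins are handled empirically (ranks/quantiles); dependence through a
    normal-scores correlation -- a two-node stand-in for the paper's BN."""
    u = rankdata(x) / (len(x) + 1)           # empirical margins -> uniforms
    v = rankdata(y) / (len(y) + 1)
    zx, zy = norm.ppf(u), norm.ppf(v)
    if rho is None:
        rho = np.corrcoef(zx, zy)[0, 1]      # normal-scores correlation
    # conditional median in normal scores, mapped back through y's quantiles
    u_q = np.searchsorted(np.sort(x), x_query) / (len(x) + 1)
    z_med = rho * norm.ppf(np.clip(u_q, 1e-6, 1 - 1e-6))
    return np.quantile(y, norm.cdf(z_med))

# toy usage: discharge grows with catchment area
rng = np.random.default_rng(12)
area = rng.lognormal(4, 1, 500)
discharge = 0.05 * area ** 0.8 * rng.lognormal(0, 0.3, 500)
print(copula_conditional_median(area, discharge, x_query=np.quantile(area, 0.9)))
```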

  14. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, limited data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  15. The Topology of Symmetric Tensor Fields

    Science.gov (United States)

    Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval

    1997-01-01

    Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order tensor fields. A second-order tensor field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a tensor field. To simplify an often complex tensor field and to capture its important features, the tensor is decomposed into an isotropic tensor and a deviator. A tensor field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of tensor fields. In 2-D tensor fields, there are only two types of degenerate points, while in 3-D the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation tensor, the singularities of its deviator represent the area of the vortex core in the field. In turbulent flows, the similarities and differences between the topology of the deformation and Reynolds stress tensors reveal that the basic eddy-viscosity assumptions have their validity in turbulence modeling under certain conditions.

  16. Identifying Isotropic Events using an Improved Regional Moment Tensor Inversion Technique

    Energy Technology Data Exchange (ETDEWEB)

    Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Ford, Sean R. [Univ. of California, Berkeley, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walter, William R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-08

    Research was carried out investigating the feasibility of using a regional-distance seismic waveform moment tensor inversion procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and when compared to natural seismicity in the context of the Hudson et al. (1989) source-type diagram, they are found to separate from populations of earthquakes and underground cavity collapse seismic sources.

  17. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning technology, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  18. Convergence of scalar-tensor theories towards general relativity and primordial nucleosynthesis

    International Nuclear Information System (INIS)

    Serna, A; Alimi, J-M; Navarro, A

    2002-01-01

    In this paper, we analyse the conditions for convergence towards general relativity of scalar-tensor gravity theories defined by an arbitrary coupling function α (in the Einstein frame). We show that, in general, the evolution of the scalar field (φ) is governed by two opposing mechanisms: an attraction mechanism which tends to drive scalar-tensor models towards Einstein's theory, and a repulsion mechanism which has the contrary effect. The attraction mechanism dominates the recent epochs of the universe's evolution if, and only if, the scalar field and its derivative satisfy certain boundary conditions. Since these conditions for convergence towards general relativity depend on the particular scalar-tensor theory used to describe the universe's evolution, the nucleosynthesis bounds on the present value of the coupling function, α_0, differ strongly from one theory to another. For example, in theories defined by α ∝ |φ|, analytical estimates lead to very stringent nucleosynthesis bounds on α_0 (∼10^-19). By contrast, in scalar-tensor theories defined by α ∝ φ, much larger limits on α_0 (∼10^-7) are found

  19. Bayesian Dimensionality Assessment for the Multidimensional Nominal Response Model

    Directory of Open Access Journals (Sweden)

    Javier Revuelta

    2017-06-01

    Full Text Available This article introduces Bayesian estimation and evaluation procedures for the multidimensional nominal response model. The utility of this model is to perform a nominal factor analysis of items that consist of a finite number of unordered response categories. The key aspect of the model, in comparison with the traditional factorial model, is that there is a slope for each response category on the latent dimensions, instead of slopes associated with the items. The extended parameterization of the multidimensional nominal response model requires large samples for estimation. When the sample size is moderate or small, some of these parameters may be weakly empirically identifiable and the estimation algorithm may run into difficulties. We propose a Bayesian MCMC inferential algorithm to estimate the parameters and the number of dimensions underlying the multidimensional nominal response model. Two Bayesian approaches to model evaluation were compared: discrepancy statistics (DIC, WAIC, and LOO) that provide an indication of the relative merit of different models, and the standardized generalized discrepancy measure, which requires resampling data and is computationally more involved. A simulation study was conducted to compare these two approaches, and the results show that the standardized generalized discrepancy measure can be used to reliably estimate the dimensionality of the model whereas the discrepancy statistics are questionable. The paper also includes an example with real data in the context of learning styles, in which the model is used to conduct an exploratory factor analysis of nominal data.

  20. Use of a Bayesian isotope mixing model to estimate proportional contributions of multiple nitrate sources in surface water

    International Nuclear Information System (INIS)

    Xue Dongmei; De Baets, Bernard; Van Cleemput, Oswald; Hennessy, Carmel; Berglund, Michael; Boeckx, Pascal

    2012-01-01

    To identify different NO3− sources in surface water and to estimate their proportional contribution to the nitrate mixture in surface water, a dual isotope approach and a Bayesian isotope mixing model were applied to six different surface waters affected by agriculture, greenhouses in an agricultural area, and households. Annual mean δ15N–NO3− values were between 8.0 and 19.4‰, while annual mean δ18O–NO3− values ranged from 4.5 to 30.7‰. SIAR was used to estimate the proportional contribution of five potential NO3− sources (NO3− in precipitation, NO3− fertilizer, NH4+ in fertilizer and rain, soil N, and manure and sewage). SIAR showed that "manure and sewage" contributed the most, "soil N", "NO3− fertilizer" and "NH4+ in fertilizer and rain" contributed intermediate amounts, and "NO3− in precipitation" contributed the least. The SIAR output can be considered a "fingerprint" of the NO3− source contributions. However, the wide range of isotope values observed in surface water and in the NO3− sources limits its applicability. - Highlights: ► The dual isotope approach (δ15N- and δ18O–NO3−) identifies dominant nitrate sources in 6 surface waters. ► The SIAR model estimates proportional contributions for 5 nitrate sources. ► SIAR is a reliable approach to assess temporal and spatial variations of different NO3− sources. ► The wide range of isotope values observed in surface water and in the nitrate sources limits its applicability. - This paper successfully applied a dual isotope approach and a Bayesian isotopic mixing model to identify and quantify 5 potential nitrate sources in surface water.

  1. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

    Full Text Available In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations and examples of the performance of the procedure are given.

  2. Random SU(2) invariant tensors

    Science.gov (United States)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situations, the number could be even smaller than 1/2, which is the subleading correction of a random state over the entire Hilbert space of tensors.

  3. Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception.

    Science.gov (United States)

    Kutschireiter, Anna; Surace, Simone Carlo; Sprekeler, Henning; Pfister, Jean-Pascal

    2017-08-18

    The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
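
    For contrast with the weightless NPF, a minimal sketch of the weighted bootstrap particle filter it is compared against, on an assumed linear-Gaussian toy model; the dynamics, noise levels, and particle count are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf(obs, n_particles=500, q=0.1, r=0.5, seed=10):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + q*noise,
    y_t = x_t + r*noise. This is the weighted baseline the NPF is
    compared against, not the NPF itself."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)
    means = []
    for y in obs:
        x = 0.9 * x + q * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y - x) / r) ** 2                     # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.dot(w, x))                           # posterior mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)      # resample
        x = x[idx]
    return np.array(means)

# toy usage: track a latent AR(1) state from noisy observations
rng = np.random.default_rng(11)
x_true = np.zeros(50)
for t in range(1, 50):
    x_true[t] = 0.9 * x_true[t-1] + 0.1 * rng.standard_normal()
obs = x_true + 0.5 * rng.standard_normal(50)
print(bootstrap_pf(obs)[:5])
```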

  4. Bayesian Estimation of the Kumaraswamy Inverse Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Felipe R.S. de Gusmao

    2017-05-01

    Full Text Available The Kumaraswamy Inverse Weibull distribution has the ability to model failure rates that have unimodal shapes and are quite common in reliability and biological studies. The three-parameter Kumaraswamy Inverse Weibull distribution with decreasing and unimodal failure rate is introduced. We provide a comprehensive treatment of the mathematical properties of the Kumaraswamy Inverse Weibull distribution and derive expressions for its moment generating function and the r-th generalized moment. Some properties of the model, with some graphs of the density and hazard function, are discussed. We also discuss a Bayesian approach for this distribution and an application is made to a real data set.

  5. BAYESIAN MAGNETOHYDRODYNAMIC SEISMOLOGY OF CORONAL LOOPS

    International Nuclear Information System (INIS)

    Arregui, I.; Asensio Ramos, A.

    2011-01-01

    We perform a Bayesian parameter inference in the context of resonantly damped transverse coronal loop oscillations. The forward problem is solved in terms of parametric results for kink waves in one-dimensional flux tubes in the thin tube and thin boundary approximations. For the inverse problem, we adopt a Bayesian approach to infer the most probable values of the relevant parameters, for given observed periods and damping times, and to extract their confidence levels. The posterior probability distribution functions are obtained by means of Markov Chain Monte Carlo simulations, incorporating observed uncertainties in a consistent manner. We find well-localized solutions in the posterior probability distribution functions for two of the three parameters of interest, namely the Alfven travel time and the transverse inhomogeneity length scale. The obtained estimates for the Alfven travel time are consistent with previous inversion results, but the method enables us to additionally constrain the transverse inhomogeneity length scale and to estimate real error bars for each parameter. When observational estimates for the density contrast are used, the method enables us to fully constrain the three parameters of interest. These results can serve to improve our current estimates of unknown physical parameters in coronal loops and to test the assumed theoretical model.

  6. Spherical Tensor Calculus for Local Adaptive Filtering

    Science.gov (United States)

    Reisert, Marco; Burkhardt, Hans

    In 3D image processing tensors play an important role. While rank-1 and rank-2 tensors are well understood and commonly used, higher rank tensors are rare. This is probably due to their cumbersome rotation behavior, which prevents a computationally efficient use. In this chapter we introduce the notion of a spherical tensor, which is based on the irreducible representations of the 3D rotation group. In fact, any ordinary cartesian tensor can be decomposed into a sum of spherical tensors, while each spherical tensor has a quite simple rotation behavior. We introduce so-called tensorial harmonics that provide an orthogonal basis for spherical tensor fields of any rank; they are a generalization of the well-known spherical harmonics. Additionally we propose a spherical derivative which connects spherical tensor fields of different degree by differentiation. Based on the proposed theory we present two applications. We propose an efficient algorithm for dense tensor voting in 3D, which makes use of the tensorial harmonics decomposition of the tensor-valued voting field. In this way it is possible to perform tensor voting by linear combinations of convolutions in an efficient way. Secondly, we propose an anisotropic smoothing filter that uses a local shape and orientation adaptive filter kernel which can be computed efficiently by the use of spherical derivatives.

  7. Improved tensor multiplets

    International Nuclear Information System (INIS)

    Wit, B. de; Rocek, M.

    1982-01-01

    We construct a conformally invariant theory of the N = 1 supersymmetric tensor gauge multiplet and discuss the situation in N = 2. We show that our results give rise to the recently proposed variant of Poincare supergravity, and provide the complete tensor calculus for the theory. Finally, we argue that this theory cannot be quantized sensibly. (orig.)

  8. The evolution of tensor polarization

    International Nuclear Information System (INIS)

    Huang, H.; Lee, S.Y.; Ratner, L.

    1993-01-01

    By using the equation of motion for the vector polarization, the spin transfer matrix for spin tensor polarization is derived. The evolution equation for the tensor polarization is studied in the presence of an isolated spin resonance and in the presence of a spin rotator, or snake.

  9. Bayesian inference model for fatigue life of laminated composites

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Kiureghian, Armen Der; Berggreen, Christian

    2016-01-01

    A probabilistic model for estimating the fatigue life of laminated composite plates is developed. The model is based on lamina-level input data, making it possible to predict fatigue properties for a wide range of laminate configurations. Model parameters are estimated by Bayesian inference. The ...

  10. Tensor algebra and tensor analysis for engineers with applications to continuum mechanics

    CERN Document Server

    Itskov, Mikhail

    2015-01-01

    This is the fourth and revised edition of a well-received book that aims at bridging the gap between the engineering course of tensor algebra on the one side and the mathematical course of classical linear algebra on the other side. In accordance with the contemporary way of scientific publications, a modern absolute tensor notation is preferred throughout. The book provides a comprehensible exposition of the fundamental mathematical concepts of tensor calculus and enriches the presented material with many illustrative examples. In addition, the book also includes advanced chapters dealing with recent developments in the theory of isotropic and anisotropic tensor functions and their applications to continuum mechanics. Hence, this monograph addresses graduate students as well as scientists working in this field. In each chapter numerous exercises are included, allowing for self-study and intense practice. Solutions to the exercises are also provided.

  11. Tensor Calculus: Unlearning Vector Calculus

    Science.gov (United States)

    Lee, Wha-Suck; Engelbrecht, Johann; Moller, Rita

    2018-01-01

    Tensor calculus is critical in the study of the vector calculus of the surface of a body. Indeed, tensor calculus is a natural step-up for vector calculus. This paper presents some pitfalls of a traditional course in vector calculus in transitioning to tensor calculus. We show how a deeper emphasis on traditional topics such as the Jacobian can…

  12. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  13. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.
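
    A minimal sketch of the underlying idea in its simplest "regression kriging" form is given below: a land-use regression trend from covariates plus a spatially correlated residual. The full BME framework is more general (it can assimilate soft data and non-Gaussian knowledge bases), and the covariates and data here are synthetic.

```python
# Regression-kriging sketch of the landuse-regression-plus-geostatistics idea:
# a deterministic LUR trend plus a GP (kriging) model for the residual field.
# Synthetic data; a simplification of the BME framework described above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))            # station coordinates [km]
landuse = rng.uniform(0, 1, size=(n, 2))            # e.g. traffic, industrial density
trend = 12.0 + 8.0 * landuse[:, 0] + 5.0 * landuse[:, 1]
residual = 3.0 * np.sin(coords[:, 0]) * np.cos(coords[:, 1])  # smooth spatial field
pm25 = trend + residual + rng.normal(0, 0.5, n)     # synthetic PM2.5 observations

# Step 1: land-use regression captures the covariate-driven trend.
lur = LinearRegression().fit(landuse, pm25)
resid = pm25 - lur.predict(landuse)

# Step 2: a GP models the spatial dependence left in the residuals.
gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0) + WhiteKernel(0.1),
                              normalize_y=True).fit(coords, resid)

# Prediction at a new site = LUR trend + kriged residual.
new_coords, new_landuse = np.array([[5.0, 5.0]]), np.array([[0.6, 0.2]])
pred = lur.predict(new_landuse) + gp.predict(new_coords)
print(f"estimated PM2.5: {pred[0]:.2f}")
```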

  14. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
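
    The sketch below illustrates the core idea on a toy network, assuming networkx for shortest-path river distances and an exponential covariance for simple kriging; note that not every covariance function remains positive definite under a network metric, a point the full BME framework must handle carefully.

```python
# Toy sketch: replace Euclidean distance with along-network (river) distance
# when building the covariance used for kriging. Network, data and covariance
# parameters are illustrative, not the paper's.
import numpy as np
import networkx as nx

# Toy river network: nodes are monitoring sites, edge weights are river miles.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 4.0), ("B", "C", 3.0),
                           ("B", "D", 6.0), ("D", "E", 2.0)])
sites = ["A", "B", "C", "D", "E"]
chloride = np.array([35.0, 42.0, 40.0, 55.0, 60.0])  # synthetic mg/L

# Pairwise river (along-network) distances via shortest paths.
dist = np.array([[nx.shortest_path_length(G, u, v, weight="weight")
                  for v in sites] for u in sites])

# Exponential covariance evaluated on river distance, plus a nugget.
sill, range_mi, nugget = 80.0, 5.0, 2.0
C = sill * np.exp(-dist / range_mi) + nugget * np.eye(len(sites))

# Simple-kriging cross-validation: predict site "E" from the other four.
mu = chloride.mean()
obs = [0, 1, 2, 3]                                   # indices of A, B, C, D
w = np.linalg.solve(C[np.ix_(obs, obs)], C[obs, 4])
pred = mu + w @ (chloride[obs] - mu)
print(f"kriged chloride at E: {pred:.1f} mg/L (observed {chloride[4]:.1f})")
```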

  15. A simulation study on Bayesian Ridge regression models for several collinearity levels

    Science.gov (United States)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, collinearity is usually handled by omitting one or several predictor variables from the model. Sometimes, however, for medical or economic reasons for example, all of the predictors are important and should be included in the model. Ridge regression is commonly used to cope with collinearity: weights for the predictor variables are used in estimating the parameters. Estimation can follow the likelihood approach, but nowadays a Bayesian version offers an alternative. Bayesian estimation has not matched the likelihood approach in popularity owing to some difficulties, computation among them; nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation study evaluating the characteristics of Bayesian ridge regression parameter estimates, with several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, and for the other settings it performs similarly to the likelihood method.
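
    A minimal sketch of one such simulation setting is given below, assuming scikit-learn's BayesianRidge as the Bayesian ridge estimator; the correlation level, sample size, and coefficients are illustrative.

```python
# Sketch of the simulation idea: generate strongly collinear predictors at a
# small sample size and compare coefficient recovery of Bayesian ridge
# regression against ordinary least squares. Settings are illustrative.
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(7)
n, p, rho = 30, 5, 0.95                                # small n, high collinearity
beta = np.array([1.5, -2.0, 0.0, 1.0, 0.5])
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)  # equicorrelated design
err_br, err_ols = [], []
for _ in range(500):                                   # Monte Carlo replications
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    y = X @ beta + rng.normal(0, 1.0, n)
    br = BayesianRidge().fit(X, y)                     # hyperpriors learned from data
    ols = LinearRegression().fit(X, y)
    err_br.append(np.sum((br.coef_ - beta) ** 2))
    err_ols.append(np.sum((ols.coef_ - beta) ** 2))
print(f"mean squared coefficient error, Bayesian ridge: {np.mean(err_br):.3f}")
print(f"mean squared coefficient error, OLS:            {np.mean(err_ols):.3f}")
```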

  16. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
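
    A minimal sketch of the averaging step is given below, using a common BIC approximation to the posterior model probabilities rather than the fully Bayesian weights developed in the article; the data are synthetic.

```python
# BIC-approximated Bayesian model averaging: enumerate submodels, weight each
# by exp(-BIC/2) (normalized), and average predictions. A simplification of
# full BMA with proper priors and scoring rules; synthetic data.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, names = 200, ["x1", "x2", "x3", "x4"]
X = rng.standard_normal((n, 4))
y = 2.0 + 1.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1.0, n)  # x3, x4 irrelevant

models, bics, preds = [], [], []
for k in range(len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        Xs = sm.add_constant(X[:, list(subset)]) if subset else np.ones((n, 1))
        fit = sm.OLS(y, Xs).fit()
        models.append(subset)
        bics.append(fit.bic)
        preds.append(fit.fittedvalues)

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))        # approximate posterior model probs
w /= w.sum()
bma_pred = np.average(np.array(preds), axis=0, weights=w)

best = int(np.argmax(w))
print("top model:", [names[j] for j in models[best]], f"PMP ~ {w[best]:.2f}")
print("in-sample RMSE of BMA prediction:",
      np.sqrt(np.mean((y - bma_pred) ** 2)).round(3))
```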

  17. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA has been used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online-trained BAR with several neural models on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, for both theoretical and practical reasons.
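
    As a loose toy illustration of the "Gaussian categories plus Bayesian posterior prediction" idea (this is plain Gaussian mixture regression, not the Fuzzy ARTMAP match/vigilance machinery of BA or BAR):

```python
# Toy: fit Gaussian categories to the inputs, then predict the target as a
# posterior-weighted average of per-category target means. Illustrates the
# prediction style only; not the BA/BAR architecture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 300)

gm = GaussianMixture(n_components=12, covariance_type="spherical",
                     random_state=0).fit(X)
resp = gm.predict_proba(X)                    # p(category | x) on training data
cat_means = (resp * y[:, None]).sum(axis=0) / (resp.sum(axis=0) + 1e-12)

def predict(Xnew):
    # Posterior over categories for new inputs, weighted by category means.
    return gm.predict_proba(Xnew) @ cat_means

Xt = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.column_stack([Xt[:, 0], predict(Xt), np.sin(Xt[:, 0])]).round(2))
```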

  18. Estimating size and scope economies in the Portuguese water sector using the Bayesian stochastic frontier analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Pedro, E-mail: pedrocarv@coc.ufrj.br [Computational Modelling in Engineering and Geophysics Laboratory (LAMEMO), Department of Civil Engineering, COPPE, Federal University of Rio de Janeiro, Av. Pedro Calmon - Ilha do Fundão, 21941-596 Rio de Janeiro (Brazil); Center for Urban and Regional Systems (CESUR), CERIS, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Marques, Rui Cunha, E-mail: pedro.c.carvalho@tecnico.ulisboa.pt [Center for Urban and Regional Systems (CESUR), CERIS, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisbon (Portugal)

    2016-02-15

    This study aims to search for economies of size and scope in the Portuguese water sector, applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). It demonstrates the usefulness and advantages of Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The Bayesian methods overcome some problems that arise in the application of traditional SFA, such as bias in small samples and skewness of residuals, and in the present case study of the water sector in Portugal they provide more plausible and acceptable results. Based on the results obtained, we find important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to considerable advantages in undertaking mergers that join the retail and wholesale components and the drinking water and wastewater services. - Highlights: • This study searches for economies of size and scope in the water sector; • The usefulness of applying Bayesian methods is highlighted; • Important economies of output density, size, vertical integration and scope are found.

  19. Estimating size and scope economies in the Portuguese water sector using the Bayesian stochastic frontier analysis

    International Nuclear Information System (INIS)

    Carvalho, Pedro; Marques, Rui Cunha

    2016-01-01

    This study aims to search for economies of size and scope in the Portuguese water sector, applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). It demonstrates the usefulness and advantages of Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The Bayesian methods overcome some problems that arise in the application of traditional SFA, such as bias in small samples and skewness of residuals, and in the present case study of the water sector in Portugal they provide more plausible and acceptable results. Based on the results obtained, we find important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to considerable advantages in undertaking mergers that join the retail and wholesale components and the drinking water and wastewater services. - Highlights: • This study searches for economies of size and scope in the water sector; • The usefulness of applying Bayesian methods is highlighted; • Important economies of output density, size, vertical integration and scope are found.
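
    A minimal sketch of a Bayesian stochastic frontier of the general kind estimated in these studies is given below, written with PyMC; the cost-function form, priors, and data are illustrative and not necessarily the specification of the papers.

```python
# Hedged sketch of a Bayesian stochastic cost frontier: log cost = frontier
# + symmetric noise v + one-sided inefficiency u, with v ~ Normal and
# u ~ HalfNormal. Synthetic data; PyMC v5 syntax.
import numpy as np
import arviz as az
import pymc as pm

rng = np.random.default_rng(0)
n = 100
log_output = rng.normal(4.0, 1.0, n)                 # e.g. log water delivered
log_cost = (1.0 + 0.8 * log_output                   # frontier
            + np.abs(rng.normal(0, 0.3, n))          # one-sided inefficiency u
            + rng.normal(0, 0.1, n))                 # symmetric noise v

with pm.Model() as sfa:
    b0 = pm.Normal("b0", 0.0, 10.0)
    b1 = pm.Normal("b1", 0.0, 10.0)
    sigma_v = pm.HalfNormal("sigma_v", 1.0)          # noise scale
    sigma_u = pm.HalfNormal("sigma_u", 1.0)          # inefficiency scale
    u = pm.HalfNormal("u", sigma=sigma_u, shape=n)   # per-utility inefficiency >= 0
    mu = b0 + b1 * log_output + u                    # cost frontier plus inefficiency
    pm.Normal("obs", mu=mu, sigma=sigma_v, observed=log_cost)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(idata, var_names=["b0", "b1", "sigma_v", "sigma_u"]))
```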

  20. Link prediction via generalized coupled tensor factorisation

    DEFF Research Database (Denmark)

    Ermiş, Beyza; Evrim, Acar Ataman; Taylan Cemgil, A.

    2012-01-01

    ...and higher-order tensors. We propose to use an approach based on the probabilistic interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor Factorisation, which can simultaneously fit a large class of tensor models to higher-order tensors/matrices with common latent factors using different loss functions. Numerical experiments demonstrate that joint analysis of data from multiple sources via coupled factorisation improves link prediction performance, and that the selection of the right loss function and tensor model is crucial for accurately predicting missing links...
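
    A minimal sketch of a coupled factorisation under squared loss is given below: a third-order tensor and a side matrix share one latent factor and all factors are updated by alternating least squares. GCTF generalises this setup to other loss functions and model classes; the dimensions and data here are illustrative.

```python
# Coupled CP + matrix factorisation sketch: X ~ [[A, B, C]] and Y ~ A @ D.T
# share the factor A; alternating least squares under squared loss.
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: (I*J, R) from (I, R) and (J, R).
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, U.shape[1])

rng = np.random.default_rng(0)
I, J, K, M, R = 20, 15, 10, 8, 3
A0, B0, C0, D0 = (rng.standard_normal((d, R)) for d in (I, J, K, M))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0) + 0.01 * rng.standard_normal((I, J, K))
Y = A0 @ D0.T + 0.01 * rng.standard_normal((I, M))

# Mode-n unfoldings arranged to match the Khatri-Rao products used below.
X1 = X.transpose(0, 2, 1).reshape(I, K * J)          # pairs with khatri_rao(C, B)
X2 = X.transpose(1, 2, 0).reshape(J, K * I)          # pairs with khatri_rao(C, A)
X3 = X.transpose(2, 1, 0).reshape(K, J * I)          # pairs with khatri_rao(B, A)

A, B, C, D = (rng.standard_normal((d, R)) for d in (I, J, K, M))
for _ in range(50):                                   # ALS sweeps
    KR = khatri_rao(C, B)
    # The shared factor A is fit against both X and Y simultaneously.
    A = np.linalg.solve(KR.T @ KR + D.T @ D, (X1 @ KR + Y @ D).T).T
    KA = khatri_rao(C, A)
    B = np.linalg.solve(KA.T @ KA, (X2 @ KA).T).T
    KB = khatri_rao(B, A)
    C = np.linalg.solve(KB.T @ KB, (X3 @ KB).T).T
    D = np.linalg.solve(A.T @ A, A.T @ Y).T

err_X = np.linalg.norm(X - np.einsum("ir,jr,kr->ijk", A, B, C)) / np.linalg.norm(X)
err_Y = np.linalg.norm(Y - A @ D.T) / np.linalg.norm(Y)
print(f"relative reconstruction error: X {err_X:.3f}, Y {err_Y:.3f}")
```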