WorldWideScience

Sample records for bayesian maximum entropy

  1. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered the best justification of Bayesian analysis and of the maximum entropy principle as applied in the natural sciences. Emphasis is placed on solving the inverse problem in digital image restoration and on Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  2. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be the probability function, from among all those calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
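
    In symbols (a schematic rendering of the norms just described, not notation taken from the paper), the maximum entropy principle selects

    \[
    P^{*} = \underset{P \in \mathbb{E}}{\arg\max}\; H(P), \qquad H(P) = -\sum_{\omega \in \Omega} P(\omega) \log P(\omega),
    \]

    where \(\mathbb{E}\) is the set of probability functions calibrated to the evidence; the paper's contribution is to derive this selection, together with the first two norms, from minimising worst-case expected loss.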

  3. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data, and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  4. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
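
    For orientation, the method of moments referred to here has a standard closed form: maximizing entropy subject to the moment constraints \(\int x^{k} p(x)\,dx = \mu_k\), k = 1, ..., m, gives

    \[
    p(x) = \frac{1}{Z(\boldsymbol{\lambda})} \exp\!\Big(-\sum_{k=1}^{m} \lambda_k x^{k}\Big), \qquad Z(\boldsymbol{\lambda}) = \int \exp\!\Big(-\sum_{k=1}^{m} \lambda_k x^{k}\Big)\, dx,
    \]

    and it is the Lagrange multipliers \(\lambda_k\) of this functional form that acquire posterior probabilities once the distribution is embedded in Bayesian probability theory, as the abstract describes.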

  5. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, no longer holds for general Bayesian networks. We thus present a new kind of maximum entropy model, which is computed sequentially. ...

  6. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    Full Text Available In this paper a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data, and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not require any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
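
    As an illustration of the maximum entropy step described above, the following minimal sketch (our assumptions and naming, not the authors' code) recovers the maximum entropy density of an uncertain parameter from two sample moments by minimizing the standard dual objective:

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.optimize import minimize

    x  = np.linspace(-5.0, 5.0, 2001)   # support grid for the uncertain parameter
    mu = np.array([0.3, 1.2])           # assumed sample moments E[x], E[x^2]

    def dual(lam):
        # Dual of entropy maximization under moment constraints:
        # minimizing log Z(lam) + lam . mu enforces moment matching.
        z = trapezoid(np.exp(-lam[0] * x - lam[1] * x**2), x)
        return np.log(z) + lam @ mu

    lam = minimize(dual, x0=np.array([0.0, 0.1]), method="Nelder-Mead").x
    p = np.exp(-lam[0] * x - lam[1] * x**2)
    p /= trapezoid(p, x)                # normalized maximum entropy density

    print("recovered moments:", trapezoid(x * p, x), trapezoid(x**2 * p, x))
    ```

    With only the first two moments constrained the recovered density is Gaussian; adding higher moments to the exponent accommodates the non-Gaussian shapes that small-sample reliability data often exhibit.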

  7. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet, Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  8. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    The maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  9. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the geological lithological composition is required. Because of real-world constraints, only a limited amount of data can be acquired. To find out the lithological distribution in a study area, many spatial statistical methods have been used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data. Therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, using the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  10. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    Science.gov (United States)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters from measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can consider the uncertainty of the data in parameter estimation: parameters are estimated from hard (certain) and soft (uncertain) data at the same time. We use Python and QGIS with the MODFLOW groundwater model and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture, using virtual observation wells to observe the simulated groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides an ideal result for real-time parameter estimation.
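
    For contrast with the linearity limitation noted above, here is a minimal sketch of a single Kalman filter measurement update; the state, observation operator and noise values are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    def kf_update(x, P, z, H, R):
        """One linear measurement update: prior state x with covariance P,
        observation z with operator H and measurement-noise covariance R."""
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ (z - H @ x)           # corrected parameter estimate
        P_new = (np.eye(len(x)) - K @ H) @ P  # reduced uncertainty
        return x_new, P_new

    x = np.array([1.0, 0.5])    # prior estimate of two model parameters
    P = np.eye(2) * 0.2         # prior uncertainty
    z = np.array([0.9])         # head observation from a monitoring well
    H = np.array([[1.0, 0.0]])  # (linearized) observation operator
    R = np.array([[0.05]])      # measurement-error variance
    print(kf_update(x, P, z, H, R))
    ```

    BME filtering generalizes this step by letting an observation enter as a pdf (soft data) rather than a single Gaussian measurement, which is what allows hard and soft data to be assimilated at the same time.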

  11. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.

  12. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    Science.gov (United States)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
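
    For reference, the Jeffreys rules that C-MaxEnt recovers have the familiar form

    \[
    \pi(\theta) \propto \sqrt{\det I(\theta)}, \qquad I_{ij}(\theta) = \mathbb{E}\!\left[\frac{\partial \log L}{\partial \theta_i}\, \frac{\partial \log L}{\partial \theta_j}\right],
    \]

    where \(I(\theta)\) is the Fisher information of the likelihood \(L\); this is what it means for the method to produce a prior purely from a likelihood.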

  13. Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks

    Directory of Open Access Journals (Sweden)

    Steven H. Waldrip

    2017-02-01

    Full Text Available We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
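
    Schematically (our notation, not the paper's), the two update rules being compared are

    \[
    p_{\text{Bayes}}(\mathbf{q}) \propto q(\mathbf{q})\, L(\mathbf{d} \mid \mathbf{q}), \qquad
    p_{\text{MaxEnt}}(\mathbf{q}) \propto q(\mathbf{q}) \exp\!\Big(\sum_k \lambda_k g_k(\mathbf{q})\Big),
    \]

    where q is the prior, L the likelihood of the measured data d, and the Lagrange multipliers \(\lambda_k\) enforce the soft constraints \(\int p\, g_k = c_k\). The exponential tilt introduces no cross-terms of its own, which is the structural reason, noted above, that the MaxEnt posterior avoids the second-order correlation terms.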

  14. Parametric Bayesian Estimation of Differential Entropy and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Maya Gupta

    2010-04-01

    Full Text Available Given iid samples drawn from a distribution with known parametric form, we propose the minimization of expected Bregman divergence to form Bayesian estimates of differential entropy and relative entropy, and derive such estimators for the uniform, Gaussian, Wishart, and inverse Wishart distributions. Additionally, formulas are given for a log gamma Bregman divergence and the differential entropy and relative entropy for the Wishart and inverse Wishart. The results, as always with Bayesian estimates, depend on the accuracy of the prior parameters, but example simulations show that the performance can be substantially improved compared to maximum likelihood or state-of-the-art nonparametric estimators.

  15. Bayesian Maximum Entropy prediction of soil categories using a traditional soil map as soft information.

    NARCIS (Netherlands)

    Brus, D.J.; Bogaert, P.; Heuvelink, G.B.M.

    2008-01-01

    Bayesian Maximum Entropy was used to estimate the probabilities of occurrence of soil categories in the Netherlands, and to simulate realizations from the associated multi-point pdf. Besides the hard observations (H) of the categories at 8369 locations, the soil map of the Netherlands 1:50 000 was ...

  16. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
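
    Schematically (a sketch of the construction; the paper's definitions differ in detail), the field-theory posterior over candidate densities \(Q_\phi\) is

    \[
    p[\phi \mid \text{data}] \propto \exp\!\Big(-\frac{\ell^{2\alpha}}{2} \int (\partial^{\alpha} \phi)^2\, dx\Big) \prod_{n=1}^{N} Q_{\phi}(x_n), \qquad Q_{\phi}(x) = \frac{e^{-\phi(x)}}{\int e^{-\phi(x')}\, dx'},
    \]

    with smoothness length scale \(\ell\) and derivative order \(\alpha\); the unification result is that the \(\ell \to \infty\) limit of the maximum a posteriori density reproduces the corresponding maximum entropy estimate.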

  17. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were ...

  18. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  19. Nonadditive entropy maximization is inconsistent with Bayesian updating

    Science.gov (United States)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  1. Bayesian Maximum Entropy Based Algorithm for Digital X-ray Mammogram Processing

    Directory of Open Access Journals (Sweden)

    Radu Mutihac

    2009-06-01

    Full Text Available Basics of Bayesian statistics in inverse problems using the maximum entropy principle are summarized in connection with the restoration of positive, additive images from various types of data like X-ray digital mammograms. An efficient iterative algorithm for image restoration from large data sets based on the conjugate gradient method and Lagrange multipliers in nonlinear optimization of a specific potential function was developed. The point spread function of the imaging system was determined by numerical simulations of inhomogeneous breast-like tissue with microcalcification inclusions of various opacities. The processed digital and digitized mammograms proved superior to their raw counterparts in terms of contrast, resolution, noise, and visibility of details.

  2. Parametric Bayesian Estimation of Differential Entropy and Relative Entropy

    OpenAIRE

    Gupta; Srivastava

    2010-01-01

    Given iid samples drawn from a distribution with known parametric form, we propose the minimization of expected Bregman divergence to form Bayesian estimates of differential entropy and relative entropy, and derive such estimators for the uniform, Gaussian, Wishart, and inverse Wishart distributions. Additionally, formulas are given for a log gamma Bregman divergence and the differential entropy and relative entropy for the Wishart and inverse Wishart. The results, as always with Bayesian est...

  3. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples; no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier.
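
    A minimal sketch of the holdout-test credibility interval the study starts from, using the standard Beta-Binomial conjugacy; the counts and the informative prior parameters below are hypothetical, and a real ME empirical prior would be derived from previous designs and tests as the abstract describes:

    ```python
    from scipy.stats import beta

    def error_rate_ci(k, n, a=1.0, b=1.0, level=0.95):
        """Equal-tailed Bayesian CI for an error rate: k errors in n holdout
        tests with a Beta(a, b) prior give a Beta(a + k, b + n - k) posterior;
        a = b = 1 is the uninformative uniform prior criticized above."""
        post = beta(a + k, b + n - k)
        return post.ppf((1 - level) / 2), post.ppf(1 - (1 - level) / 2)

    print(error_rate_ci(k=7, n=50))                  # uniform prior: wide CI
    print(error_rate_ci(k=7, n=50, a=2.0, b=18.0))   # hypothetical informative prior
    ```

    Replacing the uniform Beta(1, 1) with an empirically justified non-uniform prior tightens the interval at small n, which is precisely the effect the ME-based priors are shown to deliver.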

  4. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

  5. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and computationally fast.

  6. A Bayes-Maximum Entropy method for multi-sensor data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  7. Analysis of neutron reflectivity data: maximum entropy, Bayesian spectral analysis and speckle holography

    International Nuclear Information System (INIS)

    Sivia, D.S.; Hamilton, W.A.; Smith, G.S.

    1991-01-01

    The analysis of neutron reflectivity data to obtain nuclear scattering length density profiles is akin to the notorious phaseless Fourier problem, well known in many fields such as crystallography. Current methods of analysis culminate in the refinement of a few parameters of a functional model, and are often preceded by a long and laborious process of trial and error. We start by discussing the use of maximum entropy for obtaining 'free-form' solutions of the density profile, as an alternative to the trial and error phase when a functional model is not available. Next we consider a Bayesian spectral analysis approach, which is appropriate for optimising the parameters of a simple (but adequate) type of model when the number of parameters is not known. Finally, we suggest a novel experimental procedure, the analogue of astronomical speckle holography, designed to alleviate the ambiguity problems inherent in traditional reflectivity measurements. (orig.)

  8. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  9. An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior

    Directory of Open Access Journals (Sweden)

    Yong-Hoon Kim

    2008-05-01

    Full Text Available The development of an efficient adaptively accelerated iterative deblurring algorithm based on Bayesian statistical concepts is reported. The entropy of the image is used as a “prior” distribution and, instead of the additive form used in conventional acceleration methods, an exponent form of the relaxation constant is used for acceleration. The proposed method is hereafter called adaptively accelerated maximum a posteriori with entropy prior (AAMAPE). Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in the early stages and ensures stability at later stages of iteration. In the AAMAPE method, we also consider nonnegativity and flux-conservation constraints. The paper discusses the fundamental idea of Bayesian image deblurring with the use of entropy as a prior, along with an analytical analysis of the superresolution and noise-amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations as compared with the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than state-of-the-art methods.

  10. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  11. Bayesian and maximum entropy methods for fusion diagnostic measurements with compact neutron spectrometers

    International Nuclear Information System (INIS)

    Reginatto, Marcel; Zimbal, Andreas

    2008-01-01

    In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements

  12. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always produces a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
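
    One common way to write the combination described here (our rendering, not necessarily the authors' exact functional): with observed counts \(n_i\), response matrix \(R\) and spectrum \(\boldsymbol{\phi}\), one maximizes

    \[
    Q(\boldsymbol{\phi}) = \alpha\, S(\boldsymbol{\phi}) + \sum_i \big( n_i \ln \nu_i - \nu_i \big), \qquad \nu_i = \sum_j R_{ij}\, \phi_j,
    \]

    where \(S\) is the entropy of the spectrum and the second term is the Poisson log-likelihood up to constants. Positivity over the whole energy range follows because the entropy gradient diverges as any \(\phi_j \to 0\).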

  13. Entropy, Information Theory, Information Geometry and Bayesian Inference in Data, Signal and Image Processing and Inverse Problems

    Directory of Open Access Journals (Sweden)

    Ali Mohammad-Djafari

    2015-06-01

    Full Text Available The main content of this review article is first to review the main inference tools using Bayes rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper is focused on the ways these tools have been used in data, signal and image processing and in the inverse problems, which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent components analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, the Bayesian inference for general inverse problems. Some original materials concerning the approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods are also presented. VBA is used for proposing an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP), as well as the different expectation-maximization (EM) algorithms, as particular cases.

  14. Maximum entropy reconstruction of spin densities involving non uniform prior

    International Nuclear Information System (INIS)

    Schweizer, J.; Ressouche, E.; Papoular, R.J.; Zheludev, A.I.

    1997-01-01

    Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared with those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution under investigation. It can range from the simple information of the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the Maximum Entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
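
    The new definition of entropy referred to, for a density \(\rho\) relative to a model \(m\), is usually written

    \[
    S(\rho, m) = \int \Big[\, \rho(\mathbf{r}) - m(\mathbf{r}) - \rho(\mathbf{r}) \ln \frac{\rho(\mathbf{r})}{m(\mathbf{r})} \,\Big]\, d\mathbf{r},
    \]

    which attains its maximum, S = 0, exactly at \(\rho = m\); in the absence of data the reconstruction therefore returns the model, and any departure from m seen in the final map "costs entropy" and so must be demanded by the data.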

  15. Maximum entropy reconstruction of the configurational density of states from microcanonical simulations

    International Nuclear Information System (INIS)

    Davis, Sergio

    2013-01-01

    In this work we develop a method for inferring the underlying configurational density of states of a molecular system by combining information from several microcanonical molecular dynamics or Monte Carlo simulations at different energies. This method is based on Jaynes' Maximum Entropy formalism (MaxEnt) for Bayesian statistical inference under known expectation values. We present results of its application to measure thermodynamic entropy and free energy differences in embedded-atom models of metals.

  16. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  17. Bayesian or Laplacien inference, entropy and information theory and information geometry in data and signal processing

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.

  18. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
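
    To make the hard/soft distinction concrete, here is a minimal sketch of the Gaussian special case of BME blending, in which a soft datum (such as a climatology value) enters as a Gaussian pdf whose variance inflates its diagonal covariance entry; the locations, values and covariance model are illustrative assumptions, not those of the paper:

    ```python
    import numpy as np

    def cov(h, sill=1.0, rng_km=200.0):
        # assumed exponential spatial covariance model
        return sill * np.exp(-np.abs(h) / rng_km)

    pts  = np.array([0.0, 120.0, 400.0])  # data locations along a transect (km)
    vals = np.array([26.1, 25.4, 24.8])   # SST data (deg C)
    svar = np.array([0.0, 0.0, 0.5])      # 0 = hard datum, >0 = soft-datum variance
    mean = 25.0                           # assumed known background mean

    x0 = 250.0                            # estimation point
    C  = cov(pts[:, None] - pts[None, :]) + np.diag(svar)
    c0 = cov(pts - x0)
    w  = np.linalg.solve(C, c0)           # simple-kriging-type weights
    est = mean + w @ (vals - mean)
    err = np.sqrt(cov(0.0) - w @ c0)
    print(f"blended SST at x0: {est:.2f} +/- {err:.2f} deg C")
    ```

    In the full BME framework the soft data need not be Gaussian at all; arbitrary pdfs, such as the climatology-derived knowledge used here, are integrated at the estimation step.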

  19. On Supra-Bayesian weighted combination of available data determined by Kerridge inaccuracy and entropy

    Czech Academy of Sciences Publication Activity Database

    Sečkárová, Vladimíra

    2013-01-01

    Vol. 22, No. 1 (2013), pp. 159-168. ISSN 0204-9805. R&D Projects: GA ČR GA13-13502S. Grant - others: GA MŠk(CZ) SVV-265315. Institutional support: RVO:67985556. Keywords: Kerridge inaccuracy * maximum entropy principle * parameter estimation. Subject RIV: BD - Theory of Information. http://library.utia.cas.cz/separaty/2013/AS/seckarova-on supra- bayesian weighted combination of available data determined by kerridge inaccuracy ane entropy .pdf

  20. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  1. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over ...
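
    The link between coding and entropy on which the Code Length Game rests can be stated in one line: over idealized (non-integer) code lengths satisfying Kraft's inequality, the minimal expected code length under P is exactly the entropy, attained at \(l_i = -\log_2 p_i\):

    \[
    \min_{\,l\,:\ \sum_i 2^{-l_i} \le 1} \; \sum_i p_i\, l_i = -\sum_i p_i \log_2 p_i = H(P).
    \]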

  2. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
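
    A minimal sketch of the Toeplitz/Levinson step described above, using SciPy's Levinson-recursion solver; the synthetic trace and the filter order are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def prediction_error_filter(trace, order):
        """Solve the Yule-Walker (Toeplitz) normal equations for the
        forward-prediction coefficients and return the error filter."""
        r = np.correlate(trace, trace, mode="full")
        r = r[len(trace) - 1 : len(trace) + order]   # autocorrelation lags 0..order
        a = solve_toeplitz((r[:-1], r[:-1]), r[1:])  # Levinson recursion inside
        return np.concatenate(([1.0], -a))           # [1, -a1, ..., -a_order]

    rng = np.random.default_rng(0)
    trace = np.convolve(rng.standard_normal(500), [1.0, 0.6, 0.2], mode="same")
    print(prediction_error_filter(trace, order=3))
    ```

    The stability property quoted in the abstract corresponds to the reflection coefficients produced inside the Levinson recursion staying below 1 in magnitude.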

  3. The determination of nuclear charge distributions using a Bayesian maximum entropy method

    International Nuclear Information System (INIS)

    Macaulay, V.A.; Buck, B.

    1995-01-01

    We treat the inference of nuclear charge densities from measurements of elastic electron scattering cross sections. In order to get the most reliable information from expensively acquired, incomplete and noisy measurements, we use Bayesian probability theory. Very little prior information about the charge densities is assumed. We derive a prior probability distribution which is a generalization of a form used widely in image restoration based on the entropy of a physical density. From the posterior distribution of possible densities, we select the most probable one, and show how error bars can be evaluated. These have very reasonable properties, such as increasing without bound as hypotheses about finer scale structures are included in the hypothesis space. The methods are demonstrated by using data on the nuclei 4He and 12C. (orig.)

  4. A Note of Caution on Maximizing Entropy

    Directory of Open Access Journals (Sweden)

    Richard E. Neapolitan

    2014-07-01

    Full Text Available The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.

  5. Identification of Watershed-scale Critical Source Areas Using Bayesian Maximum Entropy Spatiotemporal Analysis

    Science.gov (United States)

    Roostaee, M.; Deng, Z.

    2017-12-01

    The states' environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairments. Spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, due to limitations in monetary resources and the large number of waterbodies, available monitoring stations are typically sparse, with intermittent periods of data collection. Hence, scarcity of water quality data is a major obstacle in addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators, such as Dissolved Oxygen (DO) concentration, for the Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been listed for DO impairment on the 303(d) list in Louisiana Water Quality Inventory Reports since 2014, due to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis methods by incorporating the space/time distribution and uncertainty of the available measured soft and hard data. This model is used to estimate DO concentration at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers of the watershed were then used to determine those practices/changes that led to low DO concentration in the identified CSAs. Preliminary results revealed that the cultivation of corn and soybean, as well as urban runoff, are the main contributing sources of low dissolved oxygen in the Turkey Creek Watershed.

  6. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun- Gui-Guang region, Mainland China

    Science.gov (United States)

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi

    2016-11-01

    Precipitation interpolation has been a hot area of research for many years and is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous region), Mainland China, was taken into consideration for spatial interpolation. A multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method was precise when using hard and soft data in probability density function form. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.

  7. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    Science.gov (United States)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects are mostly understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of the distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks, as the meteorological measures most significantly associated with variation in dengue cases under conditions of uncertainty. Subsequently, the combination of the nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatiotemporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  8. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. The optimal enzyme rate constants computed in this way for a steady state also yield the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of stability analysis, it is also demonstrated that maximal density of entropy production in this enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  9. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Full Text Available Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007.

  10. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.
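
    To make the hybrid idea concrete, the sketch below runs the two conceptual stages in their simplest form: an ordinary least-squares land-use regression, followed by simple kriging of the residuals with a fixed exponential covariance. The real framework folds the regression trend into BME estimation; here every site, covariate, and covariance parameter is a hypothetical placeholder.

```python
# Two-stage LUR + residual-kriging sketch (synthetic sites and covariates).
import numpy as np

rng = np.random.default_rng(2)
n = 60
sites = rng.uniform(0, 30, (n, 2))             # monitoring sites (km)
landuse = rng.uniform(0, 1, (n, 3))            # e.g. road density, industry, green space
pm25 = 20 + landuse @ np.array([8.0, -3.0, 5.0]) + rng.normal(0, 2, n)

# Stage 1: land-use regression (ordinary least squares)
A = np.column_stack([np.ones(n), landuse])
beta, *_ = np.linalg.lstsq(A, pm25, rcond=None)
resid = pm25 - A @ beta

# Stage 2: simple kriging of the residuals at an unmonitored location
def exp_cov(h, sill=4.0, range_km=10.0):
    return sill * np.exp(-h / range_km)

target = np.array([15.0, 15.0])
C = exp_cov(np.linalg.norm(sites[:, None] - sites[None, :], axis=-1))
c0 = exp_cov(np.linalg.norm(sites - target, axis=1))
w = np.linalg.solve(C + 0.5 * np.eye(n), c0)   # 0.5 = hypothetical nugget

target_lu = np.array([0.4, 0.6, 0.5])          # covariates at the target location
estimate = np.r_[1.0, target_lu] @ beta + w @ resid
print("predicted PM2.5 at target: %.1f ug/m3" % estimate)
```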

  11. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.

  12. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach.
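
    The MEM fit described here maximizes the functional Q = αS − χ²/2, where S is the Shannon-Jaynes entropy of the spectral density relative to a default model and χ² measures the misfit to the correlator data. The toy sketch below implements that objective for a Laplace-like kernel; the kernel, default model, noise level, and the use of a bounded quasi-Newton optimizer in place of the paper's simulated annealing are all illustrative assumptions.

```python
# Toy maximum-entropy reconstruction: maximize Q = alpha*S - 0.5*chi^2 for a
# positive spectral density rho, given noisy data D = K @ rho.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
omega = np.linspace(0.1, 4.0, 40)                        # frequency grid
tau = np.arange(1, 21)                                   # Euclidean time slices
K = np.exp(-np.outer(tau, omega)) * (omega[1] - omega[0])

true_rho = np.exp(-((omega - 1.5) ** 2) / 0.1)           # a single sharp peak
sigma = 1e-3
D = K @ true_rho + rng.normal(0, sigma, len(tau))

m = np.full_like(omega, 0.5)                             # flat default model
alpha = 1.0

def neg_Q(u):
    rho = np.exp(u)                                      # enforce rho > 0
    S = np.sum(rho - m - rho * np.log(rho / m))          # Shannon-Jaynes entropy
    chi2 = np.sum(((D - K @ rho) / sigma) ** 2)
    return -(alpha * S - 0.5 * chi2)

res = minimize(neg_Q, np.zeros_like(omega), method="L-BFGS-B",
               bounds=[(-10.0, 5.0)] * len(omega))
print("reconstructed peak near omega = %.2f" % omega[np.argmax(np.exp(res.x))])
```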

  13. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  14. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  15. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern, and a new method is proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is also applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  16. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  17. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    Science.gov (United States)

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter ≤2.5 μm (PM2.5) are a major air quality concern in China. Hybrid models combining land use regression (LUR) with Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals were developed to estimate the PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and a root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is better than that of OK/BME. The LUR/BME model is the most accurate fine-spatial-scale PM2.5 model developed to date for China.

  18. LensEnt2: Maximum-entropy weak lens reconstruction

    Science.gov (United States)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

  19. The Kalman Filter Revisited Using Maximum Relative Entropy

    Directory of Open Access Journals (Sweden)

    Adom Giffin

    2014-02-01

    Full Text Available In 1960, Rudolf E. Kalman created what is known as the Kalman filter, which is a way to estimate unknown variables from noisy measurements. The algorithm follows the logic that if the previous state of the system is known, it can be used as the best guess for the current state. This information is first applied a priori to any measurement by using it in the underlying dynamics of the system. Second, measurements of the unknown variables are taken. These two pieces of information are combined to determine the current state of the system. Bayesian inference is specifically designed to accommodate the problem of updating what we think of the world based on partial or uncertain information. In this paper, we present a derivation of the general Bayesian filter, then adapt it for Markov systems. A simple example is shown for pedagogical purposes. We also show that by using the Kalman assumptions or “constraints”, we can arrive at the Kalman filter using the method of maximum (relative) entropy (MrE), which goes beyond Bayesian methods. Finally, we derive a generalized, nonlinear filter using MrE, where the original Kalman filter is a special case. We further show that the variable relationship can be any function, and thus approximations, such as the extended Kalman filter, the unscented Kalman filter and other Kalman variants, are special cases as well.
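
    For readers who want the predict/update logic in executable form, here is a minimal one-dimensional constant-velocity Kalman filter; the matrices, noise levels, and horizon are arbitrary illustrative choices, not taken from the paper.

```python
# Minimal 1-D Kalman filter sketch: constant-velocity state, noisy position
# measurements. All noise levels and matrices are illustrative.
import numpy as np

rng = np.random.default_rng(4)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 1e-3 * np.eye(2)                       # process noise covariance
R = np.array([[0.25]])                     # measurement noise covariance

x = np.array([0.0, 1.0])                   # initial state estimate
P = np.eye(2)                              # initial estimate covariance
truth = np.array([0.0, 1.0])

for _ in range(20):
    truth = F @ truth + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ truth + rng.normal(0, np.sqrt(R[0, 0]), 1)

    # Predict: propagate the prior estimate through the dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (Kg @ (z - H @ x)).ravel()
    P = (np.eye(2) - Kg @ H) @ P

print("final position estimate %.2f (truth %.2f)" % (x[0], truth[0]))
```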

  20. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    Science.gov (United States)

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.

  1. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation.

  2. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    Science.gov (United States)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
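
    As a sketch of the entropy-based selection idea, the code below greedily keeps the monitoring stations that maximize joint entropy under a multivariate normal model, using the fact that a station subset S has entropy ½ log det(2πe Σ_S). The distance-decay covariance is hypothetical, not the Neuse River Estuary model, and the single total-entropy criterion stands in for the paper's three criteria.

```python
# Greedy entropy-based network reduction sketch (hypothetical covariance).
import numpy as np

rng = np.random.default_rng(5)
coords = rng.uniform(0, 20, (15, 2))                    # 15 candidate stations
d = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
Sigma = np.exp(-d / 5.0) + 1e-6 * np.eye(15)            # spatial covariance model

def joint_entropy(idx):
    """Entropy of a multivariate normal restricted to stations in idx."""
    sub = Sigma[np.ix_(idx, idx)]
    return 0.5 * np.linalg.slogdet(2 * np.pi * np.e * sub)[1]

chosen, remaining = [], list(range(15))
for _ in range(8):                                      # retain 8 stations
    best = max(remaining, key=lambda j: joint_entropy(chosen + [j]))
    chosen.append(best)
    remaining.remove(best)
print("retained stations:", sorted(chosen))
```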

  3. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    Science.gov (United States)

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution for the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating probability distributions based on given information. The proposed method gives an alternative way to assess the input function from the existing data, allows a good fit of the data, and therefore yields a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI.

  4. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations, because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  5. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Full Text Available Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data-constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrange multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

  6. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    Full Text Available This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN, which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.

  7. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    Energy Technology Data Exchange (ETDEWEB)

    Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com [ARC Centre of Excellence for Particle Physics at the Tera-scale, Monash University, Melbourne, Victoria 3800 (Australia)

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  8. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Directory of Open Access Journals (Sweden)

    Rodrigo Cofré

    2018-01-01

    Full Text Available The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notions of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics.
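
    For an irreducible Markov chain with transition matrix P and stationary distribution π, a standard explicit formula for the entropy production rate is e_p = Σ_ij π_i P_ij log(π_i P_ij / (π_j P_ji)), which vanishes exactly when detailed balance (time reversibility) holds. Whether this matches the paper's exact expression is an assumption; the sketch below computes it for an arbitrary 3-state chain.

```python
# Entropy production rate of a Markov chain (arbitrary 3-state example).
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5],
              [0.5, 0.2, 0.3]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

flux = pi[:, None] * P                      # pi_i * P_ij
ep = np.sum(flux * np.log(flux / flux.T))   # vanishes under detailed balance
print("entropy production rate: %.4f nats per step" % ep)
```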

  9. Reconstruction of calmodulin single-molecule FRET states, dye interactions, and CaMKII peptide binding by MultiNest and classic maximum entropy

    Science.gov (United States)

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2013-08-01

    We analyzed single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data.

  10. Reconstruction of Calmodulin Single-Molecule FRET States, Dye-Interactions, and CaMKII Peptide Binding by MultiNest and Classic Maximum Entropy.

    Science.gov (United States)

    Devore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2013-08-30

    We analyze single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data.

  11. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  12. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
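
    A minimal sketch of the soft-clustering idea: memberships are Gibbs weights exp(−βd²) normalized per point, so that β → ∞ recovers hard C-means. The two-cluster data, fixed β, and iteration count are arbitrary, and the published algorithm's exact update schedule may differ.

```python
# Maximum-entropy ("soft") clustering sketch on synthetic 2-D data.
import numpy as np

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
centers = X[rng.choice(len(X), 2, replace=False)].copy()
beta = 2.0                                                # inverse "temperature"

for _ in range(50):
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)      # squared distances
    w = np.exp(-beta * d2)
    w /= w.sum(axis=1, keepdims=True)                     # soft memberships
    centers = (w.T @ X) / w.T.sum(axis=1, keepdims=True)  # weighted means

print("cluster centers:\n", centers.round(2))
```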

  13. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in this field has grown steadily in recent years. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.

  14. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    Science.gov (United States)

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  15. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    Science.gov (United States)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  16. Analysis of positron lifetime spectra using quantified maximum entropy and a general linear filter

    International Nuclear Information System (INIS)

    Shukla, A.; Peter, M.; Hoffmann, L.

    1993-01-01

    Two new approaches are used to analyze positron annihilation lifetime spectra. A general linear filter is designed to filter the noise from lifetime data. The quantified maximum entropy method is used to solve the inverse problem of finding the lifetimes and intensities present in data. We determine optimal values of parameters needed for fitting using Bayesian methods. Estimates of errors are provided. We present results on simulated and experimental data with extensive tests to show the utility of this method and compare it with other existing methods. (orig.)

  17. Spatial Estimation of Losses Attributable to Meteorological Disasters in a Specific Area (105.0°E–115.0°E, 25°N–35°N) Using Bayesian Maximum Entropy and Partial Least Squares Regression

    Directory of Open Access Journals (Sweden)

    F. S. Zhang

    2016-01-01

    Full Text Available The spatial mapping of losses attributable to meteorological disasters is now well established as a means of describing the spatial patterns of disaster risk, and it has been shown to be suitable for many types of major meteorological disasters. However, few studies have developed regression models to estimate the effects of the spatial distribution of meteorological factors on the losses associated with meteorological disasters. In this study, the proposed approach is capable of the following: (a) estimating the spatial distributions of seven meteorological factors using Bayesian maximum entropy, (b) identifying, among the four mapping methods used in this research, the one with the best performance based on cross-validation, and (c) establishing a fitted model between the PLS components and disaster loss information using partial least squares regression within a specific research area. The results showed the following: (a) the best mapping results were produced by multivariate Bayesian maximum entropy with probabilistic soft data; (b) the regression model using three PLS components, extracted from the seven meteorological factors by the PLS method, was the most predictive according to the PRESS/SS test; (c) northern Hunan Province sustained the most damage, while southeastern Gansu Province and western Guizhou Province sustained the least.

  18. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  19. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    Science.gov (United States)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate: H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.

  20. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  1. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  2. Maximum entropy reconstructions for crystallographic imaging; Cristallographie et reconstruction d'images par maximum d'entropie

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of tridimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or else). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.

  3. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    Science.gov (United States)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  4. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  5. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
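
    As a small illustration of the simplest member of this class, the sketch below simulates an Ornstein-Uhlenbeck position process using its exact one-step update; the relaxation time, spatial scale, and step size are arbitrary choices, not estimates from animal data.

```python
# Ornstein-Uhlenbeck movement sketch: mean reversion toward a home-range
# center at the origin, simulated with the exact discrete-time update.
import numpy as np

rng = np.random.default_rng(7)
tau, sigma, dt, n = 5.0, 1.0, 0.1, 2000   # relaxation time, scale, step, steps
x = np.zeros((n, 2))
for t in range(1, n):
    a = np.exp(-dt / tau)                 # exact OU decay factor over dt
    x[t] = a * x[t - 1] + sigma * np.sqrt(1 - a**2) * rng.normal(size=2)

print("long-run positional variance ~ %.2f (theory: sigma^2 = %.2f)"
      % (x[n // 2:].var(axis=0).mean(), sigma**2))
```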

  6. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
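
    To spell out the single-constraint argument in the abstract, here is the standard Jaynes-style calculation (a worked sketch, not text from the paper): maximize the Shannon entropy subject to normalization and a fixed average of the logarithm of the observable.

```latex
% Lagrangian for maximizing H[p] = -\sum_x p(x)\ln p(x) subject to
% \sum_x p(x) = 1 and \sum_x p(x)\ln x = \langle \ln x \rangle:
\mathcal{L} = -\sum_x p(x)\ln p(x)
            - \mu\Big(\sum_x p(x) - 1\Big)
            - \lambda\Big(\sum_x p(x)\ln x - \langle \ln x \rangle\Big)

% Stationarity, \partial\mathcal{L}/\partial p(x) = 0, gives
-\ln p(x) - 1 - \mu - \lambda \ln x = 0
\quad\Longrightarrow\quad
p(x) = e^{-1-\mu}\, e^{-\lambda \ln x} = \frac{x^{-\lambda}}{Z(\lambda)},
```

    i.e. a pure power law, with the exponent λ fixed by the prescribed value of ⟨ln x⟩.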

  7. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). By comparing the merged AOD to the Aerosol Robotic Network AOD records, the results show that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those (the correlation coefficient (0.82), root-mean-square error (0.19), and mean bias (0.059)) of the MODIS AOD. In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is also close to the accuracy of the regions where both MODIS and SeaWiFS have valid observations.
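
    A crude sketch of the uncertainty-weighted part of the idea: where both products report, combine them with inverse-variance weights; where only one reports, fall back to it. The full BME framework additionally models spatiotemporal autocorrelation, which is omitted here, and all grids and error levels below are synthetic.

```python
# Inverse-variance fusion sketch for two sparsely observed AOD grids.
import numpy as np

rng = np.random.default_rng(8)
truth = 0.3 + 0.1 * rng.random((50, 50))
aod_a = np.where(rng.random((50, 50)) < 0.25,
                 truth + rng.normal(0, 0.05, (50, 50)), np.nan)
aod_b = np.where(rng.random((50, 50)) < 0.20,
                 truth + rng.normal(0, 0.08, (50, 50)), np.nan)
wa, wb = 1.0 / 0.05**2, 1.0 / 0.08**2      # inverse error variances

both = ~np.isnan(aod_a) & ~np.isnan(aod_b)
merged = np.where(np.isnan(aod_a), aod_b, aod_a)   # fall back to one product
merged[both] = (wa * aod_a[both] + wb * aod_b[both]) / (wa + wb)

print("coverage: %.1f%% merged vs %.1f%% / %.1f%% single"
      % (100 * np.mean(~np.isnan(merged)),
         100 * np.mean(~np.isnan(aod_a)), 100 * np.mean(~np.isnan(aod_b))))
```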

  8. Information and Entropy

    Science.gov (United States)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.

  9. Choosing between Higher Moment Maximum Entropy Models and Its Application to Homogeneous Point Processes with Random Effects

    Directory of Open Access Journals (Sweden)

    Lotfi Khribi

    2017-12-01

    Full Text Available In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.

  10. Justifying Objective Bayesianism on Predicate Languages

    Directory of Open Access Journals (Sweden)

    Jürgen Landes

    2015-04-01

    Full Text Available Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.

  11. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  12. Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    New distributions of the statistics of wave groups based on the maximum entropy principle are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.

  13. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  14. Tsallis distribution as a standard maximum entropy solution with 'tail' constraint

    International Nuclear Information System (INIS)

    Bercher, J.-F.

    2008-01-01

    We show that Tsallis' distributions can be derived from the standard (Shannon) maximum entropy setting, by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find an underlying entropy which is the Rényi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the 'maximum entropy tail distribution' is identified as a Generalized Pareto Distribution.

  15. Spatiotemporal analysis and mapping of oral cancer risk in changhua county (taiwan): an application of generalized bayesian maximum entropy method.

    Science.gov (United States)

    Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo

    2010-02-01

    The incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan during 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis of spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistemic principles framework and generates spatiotemporal estimates of oral cancer incidence rates. In a way, it accounts for the multi-sourced uncertainty of rates, including small population effects, and the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that GBME analysis alleviates the noise in the oral cancer data arising from population size effects. Compared to the raw incidence data, the maps of GBME-estimated results can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the areas. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty.

  16. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and is thus recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  17. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  18. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    Science.gov (United States)

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
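
    For reference, the ergodic baseline that the paper generalizes is the standard Boltzmann-Gibbs-Shannon maximum entropy problem; with a normalization and a single mean-value constraint it yields the familiar exponential (Gibbs) form:

        \max_p \; S[p] = -\sum_i p_i \ln p_i
        \quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = U
        \;\;\Longrightarrow\;\;
        p_i = \frac{e^{-\lambda E_i}}{Z(\lambda)}, \qquad Z(\lambda) = \sum_i e^{-\lambda E_i},

    with the Lagrange multiplier \lambda fixed by the constraint value U. In the paper's terms, the (c,d)-entropies replace S[p] when the independence assumptions behind this multiplicity argument are relaxed.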

  19. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  20. Maximum principle for subsonic flow with variable entropy

    Directory of Open Access Journals (Sweden)

    Grigory B. Sizykh

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the velocity magnitude is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered flow region the pressure is a function of density. For an ideal perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law holds), the pressure is a function of density only if the entropy is constant in the entire considered flow region. An example is given of a stationary subsonic irrotational flow in which the entropy takes different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the location of these points, the complete Euler equations (without any simplifying assumptions) were analysed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed which does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow holds for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  1. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.
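
    The record describes, but does not include, the MEM computation. The sketch below is a minimal 1-D illustration of the generic Bayesian MEM deconvolution idea: maximize an entropy functional of the restored image subject to a chi-squared data constraint, folded here into a single penalized objective. All names, sizes, and parameter values are hypothetical, and the regularization weight alpha would normally be set by the data constraint rather than fixed.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Hypothetical 1-D "image", measured point-spread function, and noisy data.
        n = 64
        truth = np.zeros(n)
        truth[20], truth[40] = 1.0, 0.6
        xk = np.arange(-8, 9)
        psf = np.exp(-0.5 * (xk / 2.0) ** 2)
        psf /= psf.sum()

        def blur(f):
            return np.convolve(f, psf, mode="same")

        sigma = 0.01
        data = blur(truth) + sigma * rng.normal(size=n)

        m = np.full(n, 1e-3)   # flat default model entering the entropy prior

        def objective(f, alpha=0.05):
            # Chi-squared misfit minus a weighted Skilling entropy S(f, m).
            chi2 = np.sum((blur(f) - data) ** 2) / sigma**2
            entropy = np.sum(f - m - f * np.log(f / m))
            return chi2 - 2.0 * alpha * entropy

        res = minimize(objective, x0=np.full(n, 0.1),
                       bounds=[(1e-9, None)] * n, method="L-BFGS-B")
        restored = res.x   # positive by construction, as MEM guarantees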

  2. Maximum and minimum entropy states yielding local continuity bounds

    Science.gov (United States)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.

  3. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Full Text Available Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.
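
    The three ME methods are only named in this record; as a generic illustration of the computation they share, namely finding the maximum entropy joint distribution subject to linear constraints, the following sketch solves a two-variable toy problem (the constraint values are hypothetical).

        import numpy as np
        from scipy.optimize import minimize

        # Outcome space of a hypothetical two-variable binary network: (A, B).
        states = [(a, b) for a in (0, 1) for b in (0, 1)]

        # Linear constraints: P(A=1) = 0.3 and P(B=1) = 0.6 (hypothetical values).
        A = np.array([[a for a, b in states],
                      [b for a, b in states]], dtype=float)
        target = np.array([0.3, 0.6])

        def neg_entropy(p):
            p = np.clip(p, 1e-12, 1.0)
            return np.sum(p * np.log(p))

        cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "eq", "fun": lambda p: A @ p - target}]

        res = minimize(neg_entropy, x0=np.full(4, 0.25),
                       bounds=[(0.0, 1.0)] * 4, constraints=cons, method="SLSQP")
        prior = res.x   # maximum entropy joint distribution consistent with the constraints

    For these two marginal constraints the optimizer recovers the independent joint distribution, illustrating that maximum entropy introduces no dependence the constraints do not demand.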

  4. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
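
    In one dimension the truncated normal is easy to work with, which makes the multivariate difficulty noted above concrete. A minimal sketch with hypothetical location and scale parameters (note these are not the mean and standard deviation of the truncated variable, which is precisely the inversion problem the abstract highlights):

        import numpy as np
        from scipy.stats import truncnorm

        loc, scale = 1.0, 0.8                 # hypothetical location/scale parameters
        a = (0.0 - loc) / scale               # truncate at zero (inherent positiveness)
        b = np.inf

        dist = truncnorm(a, b, loc=loc, scale=scale)
        samples = dist.rvs(size=10_000, random_state=0)

        # The realized mean/std differ from loc/scale because of the truncation;
        # inverting this relationship in many dimensions is the open practical issue.
        print(samples.mean(), samples.std())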

  5. Rumor Identification with Maximum Entropy in MicroNet

    Directory of Open Access Journals (Sweden)

    Suisheng Yu

    2017-01-01

    Full Text Available The widely used applications of Microblog, WeChat, and other social networking platforms (which we call MicroNet) shorten the period of information dissemination and expand its range, which allows rumors to cause greater harm and exert more influence. How to identify and block rumors is a hot topic in the information dissemination field. Based on the maximum entropy model, this paper constructs a recognition mechanism for rumor information in the micro-network environment. First, based on information entropy theory, we obtain the characteristics of rumor information using the maximum entropy model. Next, we optimize the original classifier training set and the feature function to divide information into rumors and non-rumors. Finally, experimental simulation results show that the rumor identification results of this method are better than those of the original classifier and other related classification methods.
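
    The paper's features and training data are not reproduced here; as a generic stand-in, multinomial logistic regression, the standard realization of a maximum entropy classifier, can be trained on bag-of-words features, as sketched below with invented toy messages.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        # Toy corpus (hypothetical): 1 = rumor, 0 = non-rumor.
        texts = ["shocking secret cure doctors hide",
                 "official statement released by ministry",
                 "forward this to ten friends or else",
                 "quarterly report published this morning"]
        labels = [1, 0, 1, 0]

        features = CountVectorizer().fit(texts)
        X = features.transform(texts)

        # Logistic regression is equivalent to a maximum entropy classifier.
        clf = LogisticRegression().fit(X, labels)
        print(clf.predict(features.transform(["secret cure they hide"])))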

  6. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  7. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  8. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Xiaokang Kou

    2016-01-01

    Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs retrieved from MODIS and AMSR-E data, respectively. The results showed that the missing LSTs in cloudy pixels were filled completely, and the availability of the merged LSTs reached 100%. Because the depths of the LST and soil temperature measurements differ, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0~5 cm soil temperatures. The results showed that the accuracy of the merged LSTs increased with the quantity of utilized data: as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further study.

  9. Maximum Entropy: Clearing up Mysteries

    Directory of Open Access Journals (Sweden)

    Marian Grendár

    2001-04-01

    Full Text Available There are several mystifications and a couple of mysteries pertinent to MaxEnt. The mystifications, pitfalls and traps are set up mainly by an unfortunate formulation of Jaynes' die problem, the cause célèbre of MaxEnt. After discussing the mystifications, a new formulation of the problem is proposed. Then we turn to the mysteries. An answer to the recurring question 'Just what are we accomplishing when we maximize entropy?' [8], based on the MaxProb rationale of MaxEnt [6], is recalled. A brief look at the other mystery, 'What is the relation between MaxEnt and the Bayesian method?' [9], in light of the MaxProb rationale of MaxEnt, suggests that there is not and cannot be a conflict between MaxEnt and Bayes' theorem.

  10. Maximum Entropy and Theory Construction: A Reply to Favretti

    Directory of Open Access Journals (Sweden)

    John Harte

    2018-04-01

    Full Text Available In the maximum entropy theory of ecology (METE, the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.

  11. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return given that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
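
    The continuous maximum entropy solver itself is not given in this record. As a baseline for the worst-case objective it addresses, note that under box uncertainty with long-only weights the inner minimization is attained at each asset's interval lower bound, so the robust problem reduces to a linear program; the sketch below uses hypothetical interval data.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical lower bounds of the return intervals for four assets.
        low = np.array([0.01, 0.03, -0.02, 0.05])

        # max_w min_{r in box} w.r  ==  max_w w.low  (long-only, fully invested),
        # which linprog solves as minimization of -w.low.
        res = linprog(c=-low,
                      A_eq=np.ones((1, 4)), b_eq=[1.0],
                      bounds=[(0.0, 1.0)] * 4)
        weights = res.x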

  12. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  13. Maximum entropy based reconstruction of soft X ray emissivity profiles in W7-AS

    International Nuclear Information System (INIS)

    Ertl, K.; Linden, W. von der; Dose, V.; Weller, A.

    1996-01-01

    The reconstruction of 2-D emissivity profiles from soft X ray tomography measurements constitutes a highly underdetermined and ill-posed inversion problem, because of the restricted viewing access, the number of chords and the increased noise level in most plasma devices. An unbiased and consistent probabilistic approach within the framework of Bayesian inference is provided by the maximum entropy method, which is independent of model assumptions, but allows any prior knowledge available to be incorporated. The formalism is applied to the reconstruction of emissivity profiles in an NBI heated plasma discharge to determine the dependence of the Shafranov shift on β, the reduction of which was a particular objective in designing the advanced W7-AS stellarator. (author). 40 refs, 7 figs

  14. The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2016-07-01

    Full Text Available A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula.

  15. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  16. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  17. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method for a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a ...

  18. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  19. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    Science.gov (United States)

    Wu, Pei-Jui; Yu, Hwa-Lung

    2016-04-01

    Heavy rainfall from typhoons is the main source of natural disasters in Taiwan, causing significant losses of human life and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path and intensity of a typhoon affect the temporal and spatial pattern of rainfall in a specific region, characterizing typhoon rainfall types is advantageous when estimating rainfall quantities. This study develops a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) method is used to classify typhoon events, decomposing the standardized rainfall patterns of all stations for each event into EOFs and principal components (PCs), so that events which vary similarly in time and space are grouped into the same typhoon type. Next, according to this classification, probability density functions (PDFs) are constructed across space and time by means of multivariate maximum entropy using the first through fourth statistical moments, giving a probability for each station at each time. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.
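
    No implementation is given in the record; one common way to realize 'maximum entropy from the first four moments' is to fit an exponential-family density p(x) proportional to exp(lam . (x, x^2, x^3, x^4)) by minimizing the maximum entropy dual, as sketched below on a discretized grid with hypothetical rainfall data.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        rain = rng.gamma(shape=2.0, scale=5.0, size=500)   # hypothetical rainfall sample

        # Standardize to keep exp() well behaved, then work on a uniform grid.
        z = (rain - rain.mean()) / rain.std()
        x = np.linspace(z.min() - 1.0, z.max() + 1.0, 2001)
        dx = x[1] - x[0]
        phi = np.vstack([x, x**2, x**3, x**4])             # moment features
        targets = np.array([np.mean(z**k) for k in (1, 2, 3, 4)])

        def dual(lam):
            # Maximum entropy dual: log-partition minus lam . targets.
            u = lam @ phi
            m = u.max()
            logZ = m + np.log(np.sum(np.exp(u - m)) * dx)
            return logZ - lam @ targets

        lam = minimize(dual, x0=np.zeros(4), method="BFGS").x
        u = lam @ phi
        pdf = np.exp(u - u.max())
        pdf /= np.sum(pdf) * dx                            # maxent density on the z scale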

  20. Bayesian probability theory and inverse problems

    International Nuclear Information System (INIS)

    Kopec, S.

    1994-01-01

    Bayesian probability theory is applied to the approximate solution of inverse problems. To solve the moment problem with noisy data, an entropic prior is used. Expressions for the solution and its error bounds are presented. When the noise level tends to zero, the Bayesian solution tends to the classic maximum entropy solution in the L2 norm. The use of a spline prior is also shown. (author)

  1. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  2. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  3. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions.

  4. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    Science.gov (United States)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

    This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. To determine these hyperparameters, we propose a new estimation scheme based on conditional maximization of entropy in the Potts prior. The algorithm is derived using loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and clarify how first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedure.

  5. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

    Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed, with Shannon's entropy used as the measure; connections with Modern Portfolio Theory are also discussed. In particular, the methodology is tested by systematic comparison with: (1) the mean-variance (Markowitz) approach and (2) the mean-VaR approach (capital allocation based on the Value at Risk concept). In principle, such comparisons show the plausibility and effectiveness of the developed method.

  6. Applications of the Maximum Entropy Method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš

    2004-01-01

    Roč. 305, - (2004), s. 57-62 ISSN 0015-0193 Grant - others:DFG and FCI(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : Maximum Entropy Method * modulated structures * charge density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.517, year: 2004

  7. Bayesian maximum posterior probability method for interpreting plutonium urinalysis data

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.

    1996-01-01

    A new internal dosimetry code for interpreting urinalysis data in terms of radionuclide intakes is described for the case of plutonium. The mathematical method is to maximise the Bayesian posterior probability using an entropy function as the prior probability distribution. A software package (MEMSYS) developed for image reconstruction is used. Some advantages of the new code are that it ensures positive calculated dose, it smooths out fluctuating data, and it provides an estimate of the propagated uncertainty in the calculated doses. (author)

  8. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    Science.gov (United States)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Inhaled 222Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater also contributes to both the inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation such that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model results in a leave-one-out cross-validation r2 of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled 222Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock 238U defined on the basis of overlying stream-sediment 238U concentrations, a widely distributed and consistently analyzed point-source dataset.

  9. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n+nn+ submicron Si structures by using different band structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: (i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the ...

  10. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  11. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.

  12. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    OpenAIRE

    Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe

    2016-01-01

    Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...

  13. Ergodicity, Maximum Entropy Production, and Steepest Entropy Ascent in the Proofs of Onsager's Reciprocal Relations

    Science.gov (United States)

    Benfenati, Francesco; Beretta, Gian Paolo

    2018-04-01

    We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).

  14. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  15. A Research on Maximum Symbolic Entropy from Intrinsic Mode Function and Its Application in Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhuofei Xu

    2017-01-01

    Full Text Available Empirical mode decomposition (EMD) is a self-adaptive analysis method for nonlinear and nonstationary signals. It has been widely applied to machinery fault diagnosis and structural damage detection. A novel feature, the maximum symbolic entropy of intrinsic mode functions based on EMD, is proposed in this paper to enhance the recognition ability of EMD. First, a signal is decomposed into a collection of intrinsic mode functions (IMFs) based on the local characteristic time scales of the signal, and the IMFs are transformed into a series of symbolic sequences with different parameters. Second, it can be found that the entropies of the symbolic IMFs are quite different; however, there is always a maximum value for a certain symbolic IMF. Third, the maximum symbolic entropy is taken as a feature to describe the IMFs of a signal. Finally, the proposed feature is applied to evaluate the effect of maximum symbolic entropy in fault diagnosis of rolling bearings, and the maximum symbolic entropy is compared with other standard time-domain features in a contrast experiment. Although maximum symbolic entropy is only a time-domain feature, it can reveal signal characteristics accurately. It can also be used in other fields related to the EMD method.
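
    The paper's symbolization parameters are not reproduced here. A minimal sketch of the idea (symbolize each IMF by equal-width amplitude binning, compute the Shannon entropy of the symbol histogram, and take the maximum over the IMFs as the signal feature) might look as follows; the binning scheme and the 'maximum over IMFs' reading are assumptions.

        import numpy as np

        def symbolic_entropy(x, n_symbols=8):
            """Shannon entropy of an equal-width amplitude symbolization of a sequence."""
            edges = np.linspace(x.min(), x.max(), n_symbols + 1)
            symbols = np.clip(np.digitize(x, edges) - 1, 0, n_symbols - 1)
            counts = np.bincount(symbols, minlength=n_symbols)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log2(p))

        # Hypothetical stand-ins for IMFs obtained from an EMD of a vibration signal.
        t = np.linspace(0, 1, 2048)
        imfs = [np.sin(2 * np.pi * f * t) * np.exp(-d * t)
                for f, d in [(120, 1.0), (50, 3.0), (12, 0.5)]]

        # The feature: maximum of the symbolic entropies over the IMFs.
        feature = max(symbolic_entropy(imf) for imf in imfs)
        print(feature)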

  16. Current opinion about maximum entropy methods in Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Szymanski, K

    2009-01-01

    Current opinion about maximum entropy methods in Moessbauer spectroscopy is presented. The most important advantage offered by the method is correct data processing under circumstances of incomplete information. A disadvantage is the sophistication of the algorithm and the need to tailor its application to specific problems.

  17. Self-consistent Bayesian analysis of space-time symmetry studies

    International Nuclear Information System (INIS)

    Davis, E.D.

    1996-01-01

    We introduce a Bayesian method for the analysis of epithermal neutron transmission data on space-time symmetries in which unique assignment of the prior is achieved by maximisation of the cross entropy and the imposition of a self-consistency criterion. Unlike the maximum likelihood method used in previous analyses of parity-violation data, our method is freed of an ad hoc cutoff parameter. Monte Carlo studies indicate that our self-consistent Bayesian analysis is superior to the maximum likelihood method when applied to the small data samples typical of symmetry studies. (orig.)

  18. Maximum entropy reconstruction of poloidal magnetic field and radial electric field profiles in tokamaks

    Science.gov (United States)

    Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang

    2017-10-01

    The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142 and National Natural Science Foundation of China under 11575014 and 11375053.

  19. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Directory of Open Access Journals (Sweden)

    Junguo Hu

    Full Text Available Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  20. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Science.gov (United States)

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  1. Hydrodynamic Relaxation of an Electron Plasma to a Near-Maximum Entropy State

    International Nuclear Information System (INIS)

    Rodgers, D. J.; Servidio, S.; Matthaeus, W. H.; Mitchell, T. B.; Aziz, T.; Montgomery, D. C.

    2009-01-01

    Dynamical relaxation of a pure electron plasma in a Malmberg-Penning trap is studied, comparing experiments, numerical simulations and statistical theories of weakly dissipative two-dimensional (2D) turbulence. Simulations confirm that the dynamics are approximated well by a 2D hydrodynamic model. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state with constrained energy, circulation, and angular momentum. This provides evidence that 2D electron fluid relaxation in a turbulent regime is governed by principles of maximum entropy.

  2. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  3. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...

  4. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...

  5. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed, a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on maximum entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.

  6. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Ge Cheng

    2016-12-01

    Full Text Available Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits the discrete statistics of NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only achieve a maximum prediction accuracy of 70.6% in the experiments that we performed.
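
    As a hedged illustration of the kind of maximum entropy classifier described above: a conditional maximum entropy model over binary outcomes coincides with logistic regression, so a minimal sketch can be built with scikit-learn. The features and labels are synthetic stand-ins for NBA box-score statistics; this does not reproduce the NBAME model or its data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))             # e.g. FG%, rebounds, turnovers, assists
        w_true = np.array([1.5, 0.8, -1.0, 0.6])  # hidden rule generating the labels
        y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(int)

        # Maximizing the conditional likelihood of this exponential-family model is
        # equivalent to maximizing entropy under feature-expectation constraints.
        clf = LogisticRegression().fit(X[:400], y[:400])
        print("held-out accuracy:", clf.score(X[400:], y[400:]))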

  7. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  8. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  9. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    OpenAIRE

    Colas , Francis; Bessière , Pierre; Girard , Benoît

    2010-01-01

    In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps...

  10. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to the uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed.
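
    As a worked illustration of the maximum entropy construction of prior distributions mentioned above (a standard textbook case, not taken from the paper): if all that is known about a positive parameter is its mean μ, the prior follows from

        \max_{p}\; -\int_{0}^{\infty} p(x)\ln p(x)\,dx
        \quad\text{s.t.}\quad \int_{0}^{\infty} p(x)\,dx = 1,\qquad \int_{0}^{\infty} x\,p(x)\,dx = \mu,

    whose solution is the exponential prior p(x) = (1/μ) e^{-x/μ}.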

  11. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximal, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximal at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximal around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  12. The prior-derived F constraints in the maximum-entropy method

    Czech Academy of Sciences Publication Activity Database

    Palatinus, Lukáš; van Smaalen, S.

    2005-01-01

    Vol. 61 (2005), pp. 363-372, ISSN 0108-7673 Institutional research plan: CEZ:AV0Z10100521 Keywords: charge density * maximum-entropy method * sodium nitrite Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.791, year: 2005

  13. Maximum non-extensive entropy block bootstrap for non-stationary processes

    Czech Academy of Sciences Publication Activity Database

    Bergamelli, M.; Novotný, Jan; Urga, G.

    2015-01-01

    Vol. 91, No. 1/2 (2015), pp. 115-139, ISSN 0001-771X R&D Projects: GA ČR(CZ) GA14-27047S Institutional support: RVO:67985998 Keywords: maximum entropy * bootstrap * Monte Carlo simulations Subject RIV: AH - Economics

  14. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy-LUR approaches.

    Science.gov (United States)

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey

    2014-09-01

    Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R² = 0.653), with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R² = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R² = 0.414, RMSE = 9.164 ppb). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
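
    A minimal sketch of the leave-one-station-out cross-validation scoring used above; `stations` and `predict_fn` are hypothetical stand-ins for the monitoring data and for any of the three models (LUR, BME-LUR, BME kriging), not the authors' code.

        import numpy as np

        def loso_scores(stations, predict_fn):
            # hold out one station at a time, predict its series from the rest
            obs, pred = [], []
            for held_out in stations:
                training = [s for s in stations if s is not held_out]
                pred.append(np.asarray(predict_fn(training, held_out)))
                obs.append(np.asarray(held_out["o3"]))
            obs, pred = np.concatenate(obs), np.concatenate(pred)
            rmse = np.sqrt(np.mean((obs - pred) ** 2))
            r2 = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return r2, rmse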

  15. Spatially-Explicit Bayesian Information Entropy Metrics for Calibrating Landscape Transformation Models

    Directory of Open Access Journals (Sweden)

    Kostas Alexandridis

    2013-06-01

    Full Text Available Assessing spatial model performance often presents challenges related to the choice and suitability of traditional statistical methods in capturing the true validity and dynamics of the predicted outcomes. The stochastic nature of many of our contemporary spatial models of land use change necessitates the testing and development of new and innovative methodologies in statistical spatial assessment. In many cases, spatial model performance depends critically on the spatially-explicit prior distributions, characteristics, availability and prevalence of the variables and factors under study. This study explores the statistical spatial characteristics of statistical model assessment of modeling land use change dynamics in a seven-county study area in South-Eastern Wisconsin during the historical period of 1963–1990. The artificial neural network-based Land Transformation Model (LTM) predictions are used to compare simulated with historical land use transformations in urban/suburban landscapes. We introduce a range of Bayesian information entropy statistical spatial metrics for assessing the model performance across multiple simulation testing runs. Bayesian entropic estimates of model performance are compared against information-theoretic stochastic entropy estimates and theoretically-derived accuracy assessments. We argue for the critical role of informational uncertainty across different scales of spatial resolution in informing spatial landscape model assessment. Our analysis reveals how incorporation of spatial and landscape information asymmetry estimates can improve our stochastic assessments of spatial model predictions. Finally, our study shows how spatially-explicit entropic classification accuracy estimates can work closely with dynamic modeling methodologies in improving our scientific understanding of landscape change as a complex adaptive system and process.

  16. Spectral maximum entropy hydrodynamics of fermionic radiation: a three-moment system for one-dimensional flows

    International Nuclear Information System (INIS)

    Banach, Zbigniew; Larecki, Wieslaw

    2013-01-01

    The spectral formulation of the nine-moment radiation hydrodynamics resulting from using the Boltzmann entropy maximization procedure is considered. The analysis is restricted to the one-dimensional flows of a gas of massless fermions. The objective of the paper is to demonstrate that, for such flows, the spectral nine-moment maximum entropy hydrodynamics of fermionic radiation is not a purely formal theory. We first determine the domains of admissible values of the spectral moments and of the Lagrange multipliers corresponding to them. We then prove the existence of a solution to the constrained entropy optimization problem. Due to the strict concavity of the entropy functional defined on the space of distribution functions, there exists a one-to-one correspondence between the Lagrange multipliers and the moments. The maximum entropy closure of moment equations results in the symmetric conservative system of first-order partial differential equations for the Lagrange multipliers. However, this system can be transformed into the equivalent system of conservation equations for the moments. These two systems are consistent with the additional conservation equation interpreted as the balance of entropy. Exploiting the above facts, we arrive at the differential relations satisfied by the entropy function and the additional function required to close the system of moment equations. We refer to this additional function as the moment closure function. In general, the moment closure and entropy–entropy flux functions cannot be explicitly calculated in terms of the moments determining the state of a gas. Therefore, we develop a perturbation method of calculating these functions. Some additional analytical (and also numerical) results are obtained, assuming that the maximum entropy distribution function tends to the Maxwell–Boltzmann limit. (paper)

  17. Spectrum unfolding in X-ray spectrometry using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernandez, Jorge E.; Scot, Viviana; Di Giulio, Eugenio

    2014-01-01

    The solution of the unfolding problem is an ever-present issue in X-ray spectrometry. The maximum entropy technique solves this problem by taking advantage of some known a priori physical information and by ensuring an outcome with only positive values. This method is implemented in MAXED (MAXimum Entropy Deconvolution), a software code contained in the package UMG (Unfolding with MAXED and GRAVEL) developed at PTB and distributed by the NEA Data Bank. The package also contains the code GRAVEL (used to estimate the precision of the solution). This article introduces the new code UMESTRAT (Unfolding Maximum Entropy STRATegy), which applies a semi-automatic strategy to solve the unfolding problem by using a suitable combination of MAXED and GRAVEL for applications in X-ray spectrometry. Some examples of the use of UMESTRAT are shown, demonstrating its capability to remove detector artifacts from the measured spectrum consistently with the model used for the detector response function (DRF). - Highlights: ► A new strategy to solve the unfolding problem in X-ray spectrometry is presented. ► The presented strategy uses a suitable combination of the codes MAXED and GRAVEL. ► The applied strategy provides additional information on the Detector Response Function. ► The code UMESTRAT is developed to apply this new strategy in a semi-automatic mode

  18. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of three-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm originally suggested by Collins. A number of distributions have been simulated in order to check the performance of the MEM. It is shown that the MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  19. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space that plain Monte Carlo methods face. This approach also accounts for more of the uncertainty in parameter estimation, which in turn gives a better prediction of maximum river flow in Sabah.
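
    A minimal random-walk Metropolis-Hastings sketch for the GEV parameters (mu, sigma, xi) in the spirit of this record; the synthetic data, vague priors, and proposal scales are illustrative assumptions, and SciPy's genextreme uses the shape convention c = -xi.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)
        # synthetic "annual maxima" drawn from a known GEV (mu=30, sigma=5, xi=0.1)
        data = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=60, random_state=rng)

        def log_post(theta):
            mu, log_sigma, xi = theta
            if not -0.5 < xi < 0.5:          # flat prior on xi within a plausible range
                return -np.inf
            # log-uniform prior on sigma; flat prior on mu
            return genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()

        chain = np.empty((20000, 3))
        theta = np.array([data.mean(), np.log(data.std()), 0.0])
        lp = log_post(theta)
        for i in range(len(chain)):
            prop = theta + rng.normal(scale=[0.5, 0.1, 0.05])   # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:            # accept/reject step
                theta, lp = prop, lp_prop
            chain[i] = theta

        burn = chain[5000:]
        print("posterior means (mu, sigma, xi):",
              burn[:, 0].mean(), np.exp(burn[:, 1]).mean(), burn[:, 2].mean())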

  20. Feasible Histories, Maximum Entropy

    International Nuclear Information System (INIS)

    Pitowsky, I.

    1999-01-01

    We consider the broadest possible consistency condition for a family of histories, which extends all previous proposals. A family that satisfies this condition is called feasible. On each feasible family of histories we choose a probability measure by maximizing entropy, while keeping the probabilities of commuting histories to their quantum mechanical values. This procedure is justified by the assumption that decoherence increases entropy. Finally, a criterion for identifying the nearly classical families is proposed

  1. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

    Full Text Available In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general, as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights into the usefulness of the method for modelling, estimating and simulating loss distributions.
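
    A hedged sketch of the core construction only (the moment-constrained maximum entropy density on a bounded grid, fitted through its convex dual); the grid bounds and target moments are illustrative, and this is not the paper's AIS sampling algorithm.

        import numpy as np
        from scipy.optimize import minimize

        def maxent_density(moments, grid):
            """Fit p(x) ~ exp(sum_k lam_k x^(k+1)) so that E[x^(k+1)] = moments[k]."""
            K = len(moments)
            dx = grid[1] - grid[0]
            feats = np.vstack([grid ** (k + 1) for k in range(K)])

            def dual(lam):
                logw = lam @ feats
                m = logw.max()
                log_z = m + np.log(np.exp(logw - m).sum() * dx)   # stable log-partition
                return log_z - lam @ np.asarray(moments)          # convex dual objective

            lam = minimize(dual, np.zeros(K), method="Nelder-Mead").x
            w = np.exp(lam @ feats - (lam @ feats).max())
            return w / (w.sum() * dx), lam

        # example: maximum entropy density on [0, 10] with E[x] = 2 and E[x^2] = 6
        grid = np.linspace(0.0, 10.0, 2000)
        p, lam = maxent_density([2.0, 6.0], grid)
        dx = grid[1] - grid[0]
        print("fitted moments:", (grid * p).sum() * dx, (grid ** 2 * p).sum() * dx)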

  2. Applications of the principle of maximum entropy: from physics to ecology.

    Science.gov (United States)

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.

  3. Applications of the principle of maximum entropy: from physics to ecology

    International Nuclear Information System (INIS)

    Banavar, Jayanth R; Volkov, Igor; Maritan, Amos

    2010-01-01

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori. (topical review)

  4. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  5. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  6. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    Directory of Open Access Journals (Sweden)

    Richard R Stein

    2015-07-01

    Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
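
    As a hedged illustration of the pairwise maximum entropy models reviewed here: for binary data and small n one can fit fields h and couplings J by moment matching with exact enumeration of states. The synthetic data and learning-rate choices are illustrative; real applications use the scalable approximations discussed in the record.

        import numpy as np
        from itertools import product

        def fit_pairwise_maxent(samples, n_iter=3000, lr=0.1):
            n = samples.shape[1]
            states = np.array(list(product([0, 1], repeat=n)), dtype=float)  # all 2^n states
            mean_emp = samples.mean(axis=0)                  # empirical first moments
            corr_emp = samples.T @ samples / len(samples)    # empirical second moments
            h, J = np.zeros(n), np.zeros((n, n))
            for _ in range(n_iter):
                E = states @ h + np.einsum("si,ij,sj->s", states, J, states)
                p = np.exp(E - E.max())
                p /= p.sum()                                 # model distribution
                mean_mod = p @ states
                corr_mod = states.T @ (states * p[:, None])
                h += lr * (mean_emp - mean_mod)              # match means
                g = corr_emp - corr_mod
                np.fill_diagonal(g, 0.0)                     # diagonal is redundant with h
                J += lr * g                                  # match pairwise correlations
            return h, J

        rng = np.random.default_rng(2)
        data = (rng.random((1000, 5)) < 0.3).astype(float)   # synthetic binary "activity"
        h, J = fit_pairwise_maxent(data)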

  7. Maximum entropy tokamak configurations

    International Nuclear Information System (INIS)

    Minardi, E.

    1989-01-01

    The new entropy concept for the collective magnetic equilibria is applied to the description of the states of a tokamak subject to ohmic and auxiliary heating. The condition for the existence of steady state plasma states with vanishing entropy production implies, on one hand, the resilience of specific current density profiles and, on the other, severe restrictions on the scaling of the confinement time with power and current. These restrictions are consistent with Goldston scaling and with the existence of a heat pinch. (author)

  8. Precise charge density studies by maximum entropy method

    CERN Document Server

    Takata, M

    2003-01-01

    For production research and development of nanomaterials, structural information is indispensable. Recently, a sophisticated analytical method based on information theory, the Maximum Entropy Method (MEM), applied to synchrotron radiation powder data, has been used successfully to determine precise charge densities of metallofullerenes and nanochannel microporous compounds. The results revealed various endohedral natures of metallofullerenes and the one-dimensional array formation of adsorbed gas molecules in nanochannel microporous compounds. The concept of MEM analysis is also described briefly. (author)

  9. Power spectrum of the geomagnetic field by the maximum entropy method

    International Nuclear Information System (INIS)

    Kantor, I.J.; Trivedi, N.B.

    1980-01-01

    Monthly mean values of the Vassouras (state of Rio de Janeiro) geomagnetic field are analyzed using the maximum entropy method. The method is described and compared with other methods of spectral analysis, and its advantages and disadvantages are presented. (Author)
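
    A minimal sketch of Burg's algorithm, the classic maximum entropy (autoregressive) spectral estimator behind analyses of this kind; the AR order and test signal are illustrative choices, not the paper's settings.

        import numpy as np

        def burg_psd(x, order, n_freq=512):
            x = np.asarray(x, dtype=float)
            f, b = x[1:].copy(), x[:-1].copy()   # forward/backward prediction errors
            a = np.zeros(0)                      # AR coefficients
            e = np.mean(x ** 2)                  # prediction error power
            for _ in range(order):
                k = -2.0 * (f @ b) / (f @ f + b @ b)          # reflection coefficient
                f, b = f + k * b, b + k * f                   # update prediction errors
                a = np.concatenate([a + k * a[::-1], [k]])    # Levinson recursion
                e *= 1.0 - k * k
                f, b = f[1:], b[:-1]
            freqs = np.linspace(0.0, 0.5, n_freq)             # cycles per sample
            z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
            return freqs, e / np.abs(1.0 + z @ a) ** 2

        t = np.arange(1000)
        x = np.sin(2 * np.pi * 0.08 * t) + 0.5 * np.random.default_rng(3).normal(size=1000)
        freqs, psd = burg_psd(x, order=12)
        print("peak frequency:", freqs[np.argmax(psd)])       # expect about 0.08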

  10. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    Science.gov (United States)

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  11. Incommensurate modulations made visible by the Maximum Entropy Method in superspace

    Czech Academy of Sciences Publication Activity Database

    Palatinus, Lukáš; van Smaalen, S.

    2004-01-01

    Vol. 219 (2004), pp. 719-729, ISSN 0044-2968 Grant - others: DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords: Maximum Entropy Method * modulated structures * charge density Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.390, year: 2004

  12. Comparison of tomography reconstruction by maximum entropy and filtered back-projection

    International Nuclear Information System (INIS)

    Abdala, F.J.P.; Simpson, D.M.; Roberty, N.C.

    1992-01-01

    The tomographic reconstruction with few projections is studied, comparing the maximum entropy method with filtered back-projection. Simulations with and without the presence of noise, and also with the presence of a high-density object inside the skull, are shown. (C.G.C.)

  13. Can the maximum entropy principle be explained as a consistency requirement?

    NARCIS (Netherlands)

    Uffink, J.

    1997-01-01

    The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in

  14. Derivation of some new distributions in statistical mechanics using maximum entropy approach

    Directory of Open Access Journals (Sweden)

    Ray Amritansu

    2014-01-01

    Full Text Available The maximum entropy principle has earlier been used to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.) and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from another in the way in which the constraints are specified. In the present paper, we derive some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.

  15. Maximum entropy technique in the doublet structure analysis

    International Nuclear Information System (INIS)

    Belashev, B.Z.; Panebrattsev, Yu.A.; Shakhaliev, Eh.I.; Soroko, L.M.

    1998-01-01

    The Maximum Entropy Technique (MENT) for the solution of inverse problems is explained. An effective computer program for solving the system of nonlinear equations encountered in the MENT has been developed and tested. The capabilities of the MENT are demonstrated on the doublet structure analysis of noisy experimental data. A comparison of the MENT results with the results of a Fourier algorithm technique without regularization is presented. The tolerable noise level is 30% for the MENT and only 0.1% for the Fourier algorithm

  16. Application of Bayesian inference to stochastic analytic continuation

    International Nuclear Information System (INIS)

    Fuchs, S; Pruschke, T; Jarrell, M

    2010-01-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data. The algorithm is strictly based on principles of Bayesian statistical inference. It utilizes Monte Carlo simulations to calculate a weighted average of possible energy spectra. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum entropy calculation.

  17. Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices

    Directory of Open Access Journals (Sweden)

    Luis L. Bonilla

    2016-07-01

    Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments but its accuracy or range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions produced by numerically solving both types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research venues are discussed.

  18. On the maximum-entropy method for kinetic equation of radiation, particle and gas

    International Nuclear Information System (INIS)

    El-Wakil, S.A.; Madkour, M.A.; Degheidy, A.R.; Machali, H.M.

    1995-01-01

    The maximum-entropy approach is used to calculate some problems in radiative transfer and reactor physics, such as the escape probability, the emergent and transmitted intensities for a finite slab, as well as the emergent intensity for a semi-infinite medium. Also, it is employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), neutron capture probability and the albedo problem. The technique is also employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are achieved and compared with the published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones. (orig.)

  19. ON A GENERALIZATION OF THE MAXIMUM ENTROPY THEOREM OF BURG

    Directory of Open Access Journals (Sweden)

    JOSÉ MARCANO

    2017-01-01

    Full Text Available In this article we introduce some matrix manipulations that allow us to obtain a version of the original Christoffel-Darboux formula, which is of interest in many applications of linear algebra. Using these matrix developments and Jensen's inequality, we obtain the main result of this proposal, which is a generalization of the maximum entropy theorem of Burg to multivariate processes.

  20. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Science.gov (United States)

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  1. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    International Nuclear Information System (INIS)

    Xu, Bin; Zhang, Hongen; Wang, Zhijian; Zhang, Jianbo

    2012-01-01

    Using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant sum 2×2 game in a social system. We first show that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► At the game level, the constant sum game fits the principle of maximum entropy. ► At the group level, all empirical entropy values are close to the theoretical maxima. ► The results can differ for games that are not constant sum games.

  2. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  3. Maximum entropy restoration of laser fusion target x-ray photographs

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.

    1976-01-01

    Maximum entropy principles were used to analyze the microdensitometer traces of a laser-fusion target photograph. The object is a glowing laser-fusion target microsphere 0.95 cm from a pinhole of radius 2 × 10⁻⁴ cm; the image is 7.2 cm from the pinhole and the photon wavelength is likely to be 6.2 × 10⁻⁸ cm. Some computational aspects of the problem are also considered.

  4. Bayesian inference in processing experimental data: principles and basic applications

    International Nuclear Information System (INIS)

    D'Agostini, G

    2003-01-01

    This paper introduces general ideas and some basic methods of the Bayesian probability theory applied to physics measurements. Our aim is to make the reader familiar, through examples rather than rigorous formalism, with concepts such as the following: model comparison (including the automatic Ockham's Razor filter provided by the Bayesian approach); parametric inference; quantification of the uncertainty about the value of physical quantities, also taking into account systematic effects; role of marginalization; posterior characterization; predictive distributions; hierarchical modelling and hyperparameters; Gaussian approximation of the posterior and recovery of conventional methods, especially maximum likelihood and chi-square fits under well-defined conditions; conjugate priors, transformation invariance and maximum entropy motivated priors; and Monte Carlo (MC) estimates of expectation, including a short introduction to Markov Chain MC methods

  5. Robust optimum design with maximum entropy method; Saidai entropy ho mochiita robust sei saitekika sekkeiho

    Energy Technology Data Exchange (ETDEWEB)

    Kawaguchi, K; Egashira, Y; Watanabe, G [Mazda Motor Corp., Hiroshima (Japan)]

    1997-10-01

    Vehicle and unit performance changes not only with external causes, represented by the environment such as temperature or weather, but also with internal causes, such as the dispersion of component characteristics and manufacturing processes or age-related deterioration. We developed a design method that estimates these performance distributions with the maximum entropy method and calculates specifications with high performance robustness using fuzzy theory. This paper describes the details of these methods and examples applied to a power window system. 3 refs., 7 figs., 4 tabs.

  6. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase-sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/.

  7. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    Full Text Available In this paper, the continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. In this paper, to find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The results obtained show the efficiency of the proposed method.

  8. Maximum entropy networks are more controllable than preferential attachment networks

    International Nuclear Information System (INIS)

    Hou, Lvlin; Small, Michael; Lao, Songyang

    2014-01-01

    A maximum entropy (ME) method to generate typical scale-free networks has been recently introduced. We investigate the controllability of ME networks and Barabási–Albert preferential attachment networks. Our experimental results show that ME networks are significantly more easily controlled than BA networks of the same size and the same degree distribution. Moreover, the control profiles are used to provide insight into the control properties of both classes of network. We identify and classify the driver nodes and analyze the connectivity of their neighbors. We find that driver nodes in ME networks have fewer mutual neighbors and that their neighbors have lower average degree. We conclude that the properties of the neighbors of driver nodes sensitively affect the network controllability. Hence, subtle and important structural differences exist between BA networks and typical scale-free networks of the same degree distribution. - Highlights: • The controllability of maximum entropy (ME) and Barabási–Albert (BA) networks is investigated. • ME networks are significantly more easily controlled than BA networks of the same degree distribution. • The properties of the neighbors of driver nodes sensitively affect the network controllability. • Subtle and important structural differences exist between BA networks and typical scale-free networks

  9. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin, E-mail: xubin211@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Public Administration College, Zhejiang Gongshang University, Hangzhou, 310018 (China); Zhang, Hongen, E-mail: hongen777@163.com [Department of Physics, Zhejiang University, Hangzhou, 310027 (China); Wang, Zhijian, E-mail: wangzj@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Zhang, Jianbo, E-mail: jbzhang08@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou, 310027 (China)

    2012-03-19

    Using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant sum 2×2 game in a social system. We first show that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► At the game level, the constant sum game fits the principle of maximum entropy. ► At the group level, all empirical entropy values are close to the theoretical maxima. ► The results can differ for games that are not constant sum games.

  10. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is nonuniqueness. This problem is overcome by the application of the Maximum Entropy Principle. The way the noise enters in the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)
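
    A minimal sketch of entropy-regularized deconvolution in the spirit of this record: minimize the chi-square misfit minus a scaled entropy relative to a flat default model, with positivity enforced by construction. The blur kernel, noise level, and regularization weight are illustrative assumptions, not the paper's formulation.

        import numpy as np
        from scipy.optimize import minimize

        def mem_deconvolve(A, d, sigma, alpha, default=1.0):
            def objective(u):
                f = np.exp(u)                        # positivity via f = exp(u)
                r = (A @ f - d) / sigma
                chi2 = 0.5 * r @ r
                # Skilling-style entropy relative to the default model
                s = np.sum(f - default - f * np.log(f / default))
                return chi2 - alpha * s
            res = minimize(objective, np.zeros(A.shape[1]), method="L-BFGS-B")
            return np.exp(res.x)

        # usage: Gaussian blur of a two-peak signal, then restore
        n = 100
        x = np.arange(n)
        truth = np.exp(-0.5 * ((x - 35) / 2.0) ** 2) + 0.7 * np.exp(-0.5 * ((x - 60) / 2.0) ** 2)
        A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
        A /= A.sum(axis=1, keepdims=True)
        d = A @ truth + np.random.default_rng(4).normal(scale=0.01, size=n)
        f_hat = mem_deconvolve(A, d, sigma=0.01, alpha=0.005)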

  11. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    Science.gov (United States)

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases, using both a discrete random variable and a probability density function (PDF). We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and the asymptotic convergence of the PDF in both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
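
    For reference, the Rényi entropy of order q used here has the standard form

        S_q(p) = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q}, \qquad q > 0,\; q \neq 1,

    (with the integral analogue for a PDF); it recovers the Shannon entropy -\sum_i p_i \ln p_i in the limit q \to 1.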

  12. Application of the maximum entropy production principle to electrical systems

    International Nuclear Information System (INIS)

    Christen, Thomas

    2006-01-01

    For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e²/h, of a one-dimensional ballistic conductor can be estimated.

  13. Mixed memory, (non) Hurst effect, and maximum entropy of rainfall in the tropical Andes

    Science.gov (United States)

    Poveda, Germán

    2011-02-01

    Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time and the kind of temporal memory are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high-resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ~ T^β with β = 0.51, up to a timescale, T_MaxEnt (70-202 h), at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, such that S_q(T) ~ T^{β(q)}, with β(q) ≤ 0 for q ≤ 0 and β(q) ≈ 0.5 for q ≥ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.

  14. Use of the maximum entropy method in X-ray astronomy

    International Nuclear Information System (INIS)

    Willingale, R.

    1981-01-01

    An algorithm used to apply the maximum entropy method in X-ray astronomy is described. It is easy to programme on a digital computer and fast enough to allow processing of two-dimensional images. The method gives good noise suppression without loss of instrumental resolution and has been successfully applied to several data analysis problems in X-ray astronomy. The restoration of a high-resolution image from the Einstein Observatory demonstrates the use of the algorithm. (author)

  15. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than the baseline and the highest F-score for the fine-grained English All-Words subtask.

  16. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as the maximum path entropy, also known as the Maximum Caliber principle, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamic settings such as social, financial and ecological systems.
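
    For reference, the Jarzynski equality discussed in this record has the standard form

        \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},

    where W is the work performed along a single nonequilibrium realization, \Delta F is the free energy difference between the initial and final equilibrium states, \beta = 1/(k_B T), and the angle brackets denote an average over the path ensemble.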

  17. Physical entropy, information entropy and their evolution equations

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Inspired by the evolution equation of nonequilibrium statistical physics entropy and the concise statistical formula of the entropy production rate, we develop a theory of the dynamic information entropy and build a nonlinear evolution equation of the information entropy density changing in time and state variable space. Its mathematical form and physical meaning are similar to those of the evolution equation of the physical entropy: the time rate of change of the information entropy density originates jointly from drift, diffusion and production. The concise statistical formula for the information entropy production rate is likewise similar to that for physical entropy. Furthermore, we study the similarities and differences between physical entropy and information entropy and the possible unification of the two statistical entropies, and discuss the relationships among the principle of entropy increase, the principle of equilibrium maximum entropy and the principle of maximum information entropy, as well as their connection to the entropy evolution equation.

  18. Spectral analysis of the IntCal98 calibration curve: a Bayesian view

    International Nuclear Information System (INIS)

    Palonen, V.; Tikkanen, P.

    2004-01-01

    Preliminary results from a Bayesian approach to find periodicities in the IntCal98 calibration curve are given. It has been shown in the literature that the discrete Fourier transform (Schuster periodogram) corresponds to the use of an approximate Bayesian model of one harmonic frequency and Gaussian noise. Advantages of the Bayesian approach include the possibility to use models for variable, attenuated and multiple frequencies, the capability to analyze unevenly spaced data and the possibility to assess the significance and uncertainties of spectral estimates. In this work, a new Bayesian model using random walk noise to take care of the trend in the data is developed. Both Bayesian models are described and the first results of the new model are reported and compared with results from straightforward discrete-Fourier-transform and maximum-entropy-method spectral analyses

  19. Venus atmosphere profile from a maximum entropy principle

    Directory of Open Access Journals (Sweden)

    L. N. Epele

    2007-10-01

    Full Text Available The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases but with temperature-dependent specific heats. In so doing, an extended and non-standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one, up to about 35 km, is adiabatic; a transition layer located at the height of the cloud deck; and finally a third region which is practically isothermal.

  20. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey’s updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
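
    For context, Jeffrey's updating principle (JUP) referred to here has the standard form: if experience fixes new probabilities q_i on a partition {E_i}, then

        P_{\mathrm{new}}(A) = \sum_i P(A \mid E_i)\, q_i,

    which reduces to ordinary Bayesian conditionalisation when some q_i = 1.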

  1. Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories

    Science.gov (United States)

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally, we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.

  2. Short-time maximum entropy method analysis of molecular dynamics simulation: Unimolecular decomposition of formic acid

    Science.gov (United States)

    Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi

    2008-07-01

    We performed spectral analysis by using the maximum entropy method instead of the traditional Fourier transform technique to investigate short-time behavior in molecular systems, such as the energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations of the decomposition of formic acid. More reactive trajectories for dehydration than for decarboxylation were obtained for Z-formic acid, which is consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.

  3. Effective updating process of seismic fragilities using Bayesian method and information entropy

    International Nuclear Information System (INIS)

    Kato, Masaaki; Takata, Takashi; Yamaguchi, Akira

    2008-01-01

    Seismic probabilistic safety assessment (SPSA) is an effective method for evaluating the overall seismic safety performance of a plant. Seismic fragilities are estimated to quantify the seismically induced accident sequences. It is a great concern that the SPSA results involve uncertainties, a part of which comes from the uncertainty in the seismic fragility of equipment and systems. A straightforward approach to reduce the uncertainty is to perform a seismic qualification test and to reflect the results in the seismic fragility estimate. In this paper, we propose a figure-of-merit to find the most cost-effective condition of the seismic qualification tests in terms of the acceleration level and the number of components tested. Then a mathematical method to reflect the test results in the fragility update is developed. A Bayesian method is used for the fragility update procedure. Since the lognormal distribution used for the fragility model does not have a conjugate prior, a parameterization method is proposed so that the posterior distribution expresses the characteristics of the fragility. The information entropy is used as the figure-of-merit to express the importance of the obtained evidence. It is found that the information entropy is strongly associated with the uncertainty of the fragility. (author)
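
    A minimal sketch of this kind of update loop, assuming a lognormal fragility P(failure | a) = Φ(ln(a/Am)/β), a discretized prior over (Am, β), and hypothetical pass/fail test outcomes, with Shannon entropy as the figure-of-merit; the specific parameterization the authors propose for preserving lognormality is not reproduced here.

        import numpy as np
        from scipy.stats import norm

        Am_grid = np.linspace(0.5, 3.0, 120)          # median capacity [g] (assumed range)
        beta_grid = np.linspace(0.2, 0.8, 80)         # logarithmic std (assumed range)
        AM, BETA = np.meshgrid(Am_grid, beta_grid, indexing="ij")
        prior = np.ones_like(AM) / AM.size            # flat prior on the grid

        tests = [(1.2, False), (1.8, True)]           # hypothetical (level [g], failed?)
        post = prior.copy()
        for a, failed in tests:
            p_fail = norm.cdf(np.log(a / AM) / BETA)  # lognormal fragility likelihood
            post *= p_fail if failed else (1.0 - p_fail)
        post /= post.sum()

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        print("entropy prior -> posterior:", entropy(prior), "->", entropy(post))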

  4. Maximum Entropy Method in Moessbauer Spectroscopy - a Problem of Magnetic Texture

    International Nuclear Information System (INIS)

    Satula, D.; Szymanski, K.; Dobrzynski, L.

    2011-01-01

    A reconstruction of the three-dimensional distribution of the hyperfine magnetic field, isomer shift and texture parameter z from Moessbauer spectra by the maximum entropy method is presented. The method was tested on a simulated spectrum consisting of two Gaussian hyperfine field distributions with different values of the texture parameters. It is shown that a proper prior has to be chosen in order to arrive at physically meaningful results. (authors)

  5. Bayesian approach to inverse statistical mechanics

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  6. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on 252Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the 238U neutron cross sections in the unresolved resonance region. (orig.)
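
    For the elementary example, a plausible reconstruction of the derivation (not necessarily the author's exact notation) maximizes the entropy of the neutron spectrum N(E) relative to the phase-space measure √E, subject to normalization and a fixed mean energy:

        \max_{N}\; -\int_0^\infty N(E)\,\ln\!\frac{N(E)}{\sqrt{E}}\,\mathrm{d}E
        \quad\text{subject to}\quad
        \int_0^\infty N\,\mathrm{d}E = 1, \qquad
        \int_0^\infty E\,N\,\mathrm{d}E = \langle E\rangle .

    The stationarity condition yields the Maxwellian spectrum

        N(E) = \frac{2}{\sqrt{\pi}\,T^{3/2}}\,\sqrt{E}\;e^{-E/T},
        \qquad \langle E\rangle = \tfrac{3}{2}\,T ,

    with the temperature parameter T fitted to the measured 252Cf spectrum (values around T ≈ 1.42 MeV are commonly quoted for 252Cf).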

  7. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    Science.gov (United States)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of Chinese listed firms’ mergers and acquisitions is provided to demonstrate the feasibility and practicality of the method.

  8. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.

  9. Critical Analysis of Non-Nuclear Electron-Density Maxima and the Maximum Entropy Method

    NARCIS (Netherlands)

    de Vries, R.Y.; Briels, Willem J.; Feil, D.; Feil, D.

    1996-01-01

    Experimental evidence for the existence of non-nuclear maxima in charge densities is questioned. It is shown that the non-nuclear maxima reported for silicon are artifacts of the maximum entropy method that was used to analyze the x-ray diffraction data. This method can be improved by the use of appropriate prior information.

  10. Bayesian energy landscape tilting: towards concordant models of molecular ensembles.

    Science.gov (United States)

    Beauchamp, Kyle A; Pande, Vijay S; Das, Rhiju

    2014-03-18

    Predicting biological structure has remained challenging for systems such as disordered proteins that take on myriad conformations. Hybrid simulation/experiment strategies have been undermined by difficulties in evaluating errors from computational model inaccuracies and data uncertainties. Building on recent proposals from maximum entropy theory and nonequilibrium thermodynamics, we address these issues through a Bayesian energy landscape tilting (BELT) scheme for computing Bayesian hyperensembles over conformational ensembles. BELT uses Markov chain Monte Carlo to directly sample maximum-entropy conformational ensembles consistent with a set of input experimental observables. To test this framework, we apply BELT to model trialanine, starting from disagreeing simulations with the force fields ff96, ff99, ff99sbnmr-ildn, CHARMM27, and OPLS-AA. BELT incorporation of limited chemical shift and ³J measurements gives convergent values of the peptide's α, β, and PPII conformational populations in all cases. As a test of predictive power, all five BELT hyperensembles recover set-aside measurements not used in the fitting and report accurate errors, even when starting from highly inaccurate simulations. BELT's principled framework thus enables practical predictions for complex biomolecular systems from discordant simulations and sparse data. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
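
    The tilting step itself is compact. The following schematic (not the published BELT code) matches ensemble-averaged observables by exponential reweighting of simulation frames, trading a chi-squared misfit against the KL divergence from the unbiased weights; the observable matrix and targets are synthetic stand-ins.

        import numpy as np
        from scipy.optimize import minimize

        # o[i, j]: observable j on simulation frame i (synthetic stand-ins here);
        # y, sigma: "experimental" target values and uncertainties (assumed).
        rng = np.random.default_rng(0)
        o = rng.normal(size=(5000, 3))
        y = np.array([0.3, -0.1, 0.2])
        sigma = np.array([0.05, 0.05, 0.05])

        def tilted_weights(lam):
            logw = o @ lam
            logw -= logw.max()                     # stabilize the exponential
            w = np.exp(logw)
            return w / w.sum()

        def objective(lam):
            w = tilted_weights(lam)
            chi2 = np.sum(((w @ o - y) / sigma) ** 2)
            kl = np.sum(w * np.log(np.maximum(w * len(w), 1e-300)))
            return 0.5 * chi2 + kl                 # misfit plus entropy penalty

        res = minimize(objective, np.zeros(3), method="BFGS")
        w = tilted_weights(res.x)
        print("reweighted averages:", np.round(w @ o, 3), "targets:", y)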

  11. Maximum-entropy data restoration using both real- and Fourier-space analysis

    International Nuclear Information System (INIS)

    Anderson, D.M.; Martin, D.C.; Thomas, E.L.

    1989-01-01

    An extension of the maximum-entropy (ME) data-restoration method is presented that is sensitive to periodic correlations in data. The method takes advantage of the higher signal-to-noise ratio for periodic information in Fourier space, thus enhancing statistically significant frequencies in a manner which avoids the user bias inherent in conventional Fourier filtering. This procedure incorporates concepts underlying new approaches in quantum mechanics that consider entropies in both position and momentum spaces, although the emphasis here is on data restoration rather than quantum physics. After a fast Fourier transform of the image, the phases are saved and the array of Fourier moduli are restored using the maximum-entropy criterion. A first-order continuation method is introduced that speeds convergence of the ME computation. The restored moduli together with the original phases are then Fourier inverted to yield a new image; traditional real-space ME restoration is applied to this new image completing one stage in the restoration process. In test cases improvement can be obtained from two to four stages of iteration. It is shown that in traditional Fourier filtering spurious features can be induced by selection or elimination of Fourier components without regard to their statistical significance. With the present approach there is no such freedom for the user to exert personal bias, so that features present in the final image and power spectrum are those which have survived the tests of statistical significance in both real and Fourier space. However, it is still possible for periodicities to 'bleed' across sharp boundaries. An 'uncertainty' relation is derived describing the inverse relationship between the resolution of these boundaries and the level of noise that can be eliminated. (orig./BHO)

  12. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    International Nuclear Information System (INIS)

    Prix, Reinhard; Krishnan, Badri

    2009-01-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.

  13. Structure of incommensurate ammonium tetrafluoroberyllate studied by structure refinements and the maximum entropy method

    Czech Academy of Sciences Publication Activity Database

    Palatinus, Lukáš; Amami, M.; van Smaalen, S.

    2004-01-01

    Vol. 60 (2004), pp. 127-137. ISSN 0108-7681. Grant - others: DFG (DE) XX. Institutional research plan: CEZ:AV0Z1010914. Keywords: incommensurate modulation * superspace * maximum entropy method. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 5.418, year: 2004

  14. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2011-01-01

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  15. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    Energy Technology Data Exchange (ETDEWEB)

    Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  16. Lattice Field Theory with the Sign Problem and the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Masahiro Imachi

    2007-02-01

    Full Text Available Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.

  17. The generalized F constraint in the maximum-entropy method - a study on simulated data

    Czech Academy of Sciences Publication Activity Database

    Palatinus, Lukáš; van Smaalen, S.

    2002-01-01

    Vol. 58 (2002), pp. 559-567. ISSN 0108-7673. Grant - others: DFG (DE) XX. Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method * electron density * oxalic acid. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.417, year: 2002

  18. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays

    International Nuclear Information System (INIS)

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contaminations on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method to build a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements, in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. - Highlights: ► Ecotoxicological assays show significant benefits for detecting on-site contaminations. ► MaxEnt rebuilds the qualitative link between concentrations and ecotoxicological assays. ► MaxEnt shows a pattern similar to the concentration map from groundwater samples. ► MaxEnt is a valuable method, especially when a quantitative relation is not at hand.

  19. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    Science.gov (United States)

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175

  20. Efficient Bayesian experimental design for contaminant source identification

    Science.gov (United States)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
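
    In the linear-Gaussian special case the expected relative entropy of a candidate design is available in closed form, which makes the selection logic easy to see. In the sketch below the sensitivity kernel g(x), prior scale and noise scale are invented for illustration; the paper's actual transport model requires the surrogate-based MCMC machinery it describes.

        import numpy as np

        # Toy design problem: unknown source strength s ~ N(0, s0^2); a well at
        # location x measures y = g(x) * s + noise, noise ~ N(0, sn^2). The
        # expected information gain is then 0.5 * ln(1 + (s0 * g / sn)^2).
        s0, sn = 2.0, 0.5                              # prior and noise std (assumed)
        g = lambda x: np.exp(-0.5 * (x - 3.0) ** 2)    # sensitivity kernel (assumed)

        candidates = np.linspace(0.0, 10.0, 101)
        eig = 0.5 * np.log1p((s0 * g(candidates) / sn) ** 2)
        best = candidates[np.argmax(eig)]
        print(f"optimal sampling location: x = {best:.2f}, EIG = {eig.max():.3f} nats")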

  1. Quality assurance of nuclear analytical techniques based on Bayesian characteristic limits

    International Nuclear Information System (INIS)

    Michel, R.

    2000-01-01

    Based on Bayesian statistics, characteristic limits such as decision threshold, detection limit and confidence limits can be calculated taking into account all sources of experimental uncertainties. This approach separates the complete evaluation of a measurement according to the ISO Guide to the Expression of Uncertainty in Measurement from the determination of the characteristic limits. Using the principle of maximum entropy the characteristic limits are determined from the complete standard uncertainty of the measurand. (author)
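
    The resulting characteristic limits reduce to a short fixed-point computation once the standard uncertainty u(y) of the measurand is available from the GUM evaluation. The following is a sketch in the spirit of this approach (later standardized in ISO 11929); the numbers and the simple u(y) model are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        alpha = beta = 0.05                            # error probabilities of 1st/2nd kind
        k_a, k_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
        u0 = 1.3                                       # u(y=0) from the uncertainty budget (assumed)
        u = lambda y: np.sqrt(u0 ** 2 + (0.05 * y) ** 2)   # assumed uncertainty model

        y_star = k_a * u0                              # decision threshold
        y_sharp = y_star                               # detection limit by fixed point:
        for _ in range(50):                            #   y# = y* + k_b * u(y#)
            y_sharp = y_star + k_b * u(y_sharp)
        print(f"decision threshold {y_star:.2f}, detection limit {y_sharp:.2f}")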

  2. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  3. Bayesian ensemble refinement by replica simulations and reweighting

    Science.gov (United States)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  4. Comments on a derivation and application of the 'maximum entropy production' principle

    International Nuclear Information System (INIS)

    Grinstein, G; Linsker, R

    2007-01-01

    We show that (1) an error invalidates the derivation (Dewar 2005 J. Phys. A: Math. Gen. 38 L371) of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim (Dewar 2003 J. Phys. A: Math. Gen. 36 631) that the phenomenon of 'self-organized criticality' is a consequence of MaxEP for slowly driven systems is unjustified. (comment)

  5. Bayesian probability theory applications in the physical sciences

    CERN Document Server

    Linden, Wolfgang von der; Toussaint, Udo von

    2014-01-01

    From the basics to the forefront of modern research, this book presents all aspects of probability theory, statistics and data analysis from a Bayesian perspective for physicists and engineers. The book presents the roots, applications and numerical implementation of probability theory, and covers advanced topics such as maximum entropy distributions, stochastic processes, parameter estimation, model selection, hypothesis testing and experimental design. In addition, it explores state-of-the art numerical techniques required to solve demanding real-world problems. The book is ideal for students and researchers in physical sciences and engineering.

  6. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  7. Using Tranformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions

    Science.gov (United States)

    Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.

    2014-12-01

    One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.

  8. Application of Bayesian Decision Theory Based on Prior Information in the Multi-Objective Optimization Problem

    Directory of Open Access Journals (Sweden)

    Xia Lei

    2010-12-01

    Full Text Available It is hard for general multi-objective optimization methods to obtain prior information, and how to utilize such information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper demonstrates the effectiveness of the proposed method in a real application, multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results show that Bayesian decision-making based on prior information has better global searching capability when sampling data are deficient.

  9. Reconstruction of the electron momentum density distribution by the maximum entropy method

    International Nuclear Information System (INIS)

    Dobrzynski, L.

    1996-01-01

    The application of the Maximum Entropy Algorithm to the analysis of the Compton profiles is discussed. It is shown that the reconstruction of electron momentum density may be reliably carried out. However, there are a number of technical problems which have to be overcome in order to produce trustworthy results. In particular one needs the experimental Compton profiles measured for many directions, and to have efficient computational resources. The use of various cross-checks is recommended. (orig.)

  10. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
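
    For contrast, the standard constrained (entropy-maximizing) formulation mentioned above can be solved by iterative proportional fitting of the balancing factors, as in Wilson-style trip distribution: T_ij = A_i O_i B_j D_j f_ij. A minimal sketch with invented origin/destination totals and deterrence parameter; the paper's dependence-coefficient formulation replaces these explicit constraints.

        import numpy as np

        O = np.array([400., 300., 300.])          # trips produced at origins (assumed)
        D = np.array([500., 200., 300.])          # trips attracted to destinations (assumed)
        cost = np.array([[1., 2., 3.],
                         [2., 1., 2.],
                         [3., 2., 1.]])
        f = np.exp(-0.8 * cost)                   # deterrence function, assumed beta = 0.8

        A, B = np.ones(3), np.ones(3)
        for _ in range(200):
            A = 1.0 / (f @ (B * D))               # enforce row (origin) totals
            B = 1.0 / (f.T @ (A * O))             # enforce column (destination) totals
        T = (A * O)[:, None] * (B * D)[None, :] * f
        print(np.round(T, 1), "total trips:", round(T.sum()))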

  11. A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model

    Directory of Open Access Journals (Sweden)

    Jerry Davis

    2015-06-01

    Full Text Available The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model “Stability INdex MAPping” (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including 3 inventories of 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however in 1983 SI alone creates an Area Under the receiver operating characteristic Curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion.

  12. The simplest maximum entropy model for collective behavior in a neural network

    International Nuclear Information System (INIS)

    Tkačik, Gašper; Marre, Olivier; Mora, Thierry; Amodei, Dario; Bialek, William; Berry II, Michael J

    2013-01-01

    Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural ‘thermodynamics’ for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy. (paper)
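
    The model class is simple enough to write down directly: the maximum entropy distribution consistent with P(K) assigns equal probability to every pattern with the same population count, so P(pattern) = P(K)/C(N, K), and the corresponding energies and entropies follow immediately. A sketch with a stand-in binomial P(K); the paper works from measured retinal data.

        import numpy as np
        from scipy.special import comb
        from scipy.stats import binom

        N = 40
        K = np.arange(N + 1)
        pK = binom.pmf(K, N, 0.08)              # stand-in for a measured P(K)
        pK /= pK.sum()

        E = -np.log(pK / comb(N, K))            # effective energy of a K-count pattern
        S = np.log(comb(N, K))                  # log number of patterns with count K
        # Near the critical point the paper reports S(E) close to E; inspect the ratio:
        print(np.round((S / E)[1:10], 2))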

  13. The NIFTY way of Bayesian signal inference

    International Nuclear Information System (INIS)

    Selig, Marco

    2014-01-01

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy

  14. Chaos control of ferroresonance system based on RBF-maximum entropy clustering algorithm

    International Nuclear Information System (INIS)

    Liu Fan; Sun Caixin; Sima Wenxia; Liao Ruijin; Guo Fei

    2006-01-01

    With regard to the ferroresonance overvoltage of neutral grounded power systems, a maximum-entropy learning algorithm based on radial basis function neural networks is used to control the chaotic system. The algorithm optimizes the object function to derive the learning rule of central vectors, and uses the clustering function of the network hidden layers. It improves the regression and learning ability of the neural networks. A numerical experiment on the ferroresonance system verifies the effectiveness and feasibility of using the algorithm to control chaos in neutral grounded systems.

  15. How fast can we learn maximum entropy models of neural populations?

    Energy Technology Data Exchange (ETDEWEB)

    Ganmor, Elad; Schneidman, Elad [Department of Neuroscience, Weizmann Institute of Science, Rehovot 76100 (Israel); Segev, Ronen, E-mail: elad.ganmor@weizmann.ac.i, E-mail: elad.schneidman@weizmann.ac.i [Department of Life Sciences and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)

    2009-12-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  16. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  17. Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo

    2015-01-01

    In every X-ray spectroscopy measurement the influence of the detection system causes loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the energy resolution, and, in solid-state detectors (SSD), the charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of the known a priori information and preserving the positive-defined character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. In the past UMESTRAT proved capable of resolving characteristic peaks that appeared overlapped in Si SSD measurements, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons of the spectrum, which can be easily determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding from three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered as a software improvement of the detector resolution. - Highlights: • Radiation detection introduces distortions in X- and Gamma-ray spectrum measurements. • UMESTRAT is a graphical tool to unfold X- and Gamma-ray spectra. • UMESTRAT uses the maximum entropy method. • UMESTRAT’s new version produces unfolded spectra with quantitative meaning. • UMESTRAT is a software tool to improve the detector resolution.

  18. Using maximum entropy modeling to identify and prioritize red spruce forest habitat in West Virginia

    Science.gov (United States)

    Nathan R. Beane; James S. Rentch; Thomas M. Schuler

    2013-01-01

    Red spruce forests in West Virginia are found in island-like distributions at high elevations and provide essential habitat for the endangered Cheat Mountain salamander and the recently delisted Virginia northern flying squirrel. Therefore, it is important to identify restoration priorities of red spruce forests. Maximum entropy modeling was used to identify areas of...

  1. On the equivalence between the minimum entropy generation rate and the maximum conversion rate for a reactive system

    International Nuclear Information System (INIS)

    Bispo, Heleno; Silva, Nilton; Brito, Romildo; Manzi, João

    2013-01-01

    Highlights: • The minimum entropy generation (MEG) principle improved the reaction performance. • The equivalence between the MEG rate and the maximum conversion rate has been analyzed. • Temperature and residence time are used to establish the validity domain of MEG. • Satisfying the temperature and residence time relationship results in optimal performance. - Abstract: The analysis of the equivalence between the minimum entropy generation (MEG) rate and the maximum conversion rate for a reactive system is the main purpose of this paper. While being used as a strategy of optimization, minimum entropy production was applied to the production of propylene glycol in a Continuous Stirred-Tank Reactor (CSTR) with a view to determining the best operating conditions, and under such conditions, a high conversion rate was found. The effects of the key variables and restrictions on the validity domain of MEG were investigated, which raises issues that are included within a broad discussion. The results from simulations indicate that from the chemical reaction standpoint a maximum conversion rate can be considered as equivalent to MEG. Such a result can be clearly explained by examining the classical Maxwell–Boltzmann distribution, where the molecules of the reactive system under the condition of the MEG rate present an energy distribution with reduced dispersion, resulting in better-quality collisions between molecules and a higher conversion rate.

  2. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  3. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes the flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)

  4. On generalized entropies, Bayesian decisions and statistical diversity

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; Zvárová, Jana

    2007-01-01

    Vol. 43, No. 5 (2007), pp. 675-696. ISSN 0023-5954. R&D Projects: GA ČR (CZ) GA102/07/1131; GA MŠk (CZ) 1M06014. Institutional research plan: CEZ:AV0Z10750506; CEZ:AV0Z10300504. Keywords: Generalized information * Generalized entropy * Power entropy * Bayes error * Simpson diversity * Emlen diversity. Subject RIV: BD - Theory of Information. Impact factor: 0.552, year: 2007

  5. Embedding the results of focussed Bayesian fusion into a global context

    Science.gov (United States)

    Sander, Jennifer; Heizmann, Michael

    2014-05-01

    Bayesian statistics offers a well-founded and powerful fusion methodology also for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task-relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where criminalists likewise pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion. Here, the actual calculation of the posterior distribution gets completely restricted to a suitably chosen local context. By this, the global posterior distribution is not completely determined. Strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in different other areas. To address the special need for making further decisions subsequent to the actual fusion task, we further analyze criteria for decision making under partial information.

  6. Principle of maximum entropy for reliability analysis in the design of machine components

    Science.gov (United States)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
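
    Numerically, the PME density matched to the first four moments can be found by minimizing the convex dual of the entropy problem over the Lagrange multipliers. A sketch on a truncated support with invented target moments follows; the authors' specific failure-probability model is not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        m = np.array([0.0, 1.0, 0.3, 3.5])        # target E[x^k], k = 1..4 (assumed)
        x = np.linspace(-6, 6, 2001)
        dx = x[1] - x[0]
        powers = np.vstack([x, x**2, x**3, x**4]) # sufficient statistics

        def dual(lam):
            # Convex dual log Z(lam) - lam.m; its minimizer matches the moments.
            expo = lam @ powers
            c = expo.max()
            logZ = c + np.log(np.sum(np.exp(expo - c)) * dx)
            return logZ - lam @ m

        res = minimize(dual, np.array([0.0, -0.5, 0.0, 0.0]), method="BFGS")
        p = np.exp(res.x @ powers)
        p /= p.sum() * dx                         # normalized maximum entropy PDF
        print("recovered moments:",
              [float(np.sum(p * x**k) * dx) for k in (1, 2, 3, 4)])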

  7. Study on Droplet Size and Velocity Distributions of a Pressure Swirl Atomizer Based on the Maximum Entropy Formalism

    Directory of Open Access Journals (Sweden)

    Kai Yan

    2015-01-01

    Full Text Available A predictive model for the droplet size and velocity distributions of a pressure swirl atomizer has been proposed based on the maximum entropy formalism (MEF). The constraint conditions of the MEF model include the conservation laws of mass, momentum, and energy. The effects of liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio on the droplet size and velocity distributions of a pressure swirl atomizer are investigated. Results show that the model based on the maximum entropy formalism works well in predicting droplet size and velocity distributions under different spray conditions. Liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio have different effects on the droplet size and velocity distributions of a pressure swirl atomizer.

  8. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Science.gov (United States)

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
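
    The canonical form makes the computation of β a one-dimensional root-finding problem: choose β so that the model average of the error function equals the specified expectation value. A sketch with synthetic error values; the paper computes the expectation from a sparse number of data samples.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(1)
        E = rng.uniform(0.5, 5.0, size=200)       # error-function values of candidate models (synthetic)
        E_target = 1.5                            # specified expectation value (assumed)

        def mean_E(beta):
            w = np.exp(-beta * (E - E.min()))     # canonical weights, shifted for stability
            return (E @ w) / w.sum()

        beta = brentq(lambda b: mean_E(b) - E_target, 1e-6, 100.0)
        p = np.exp(-beta * (E - E.min())); p /= p.sum()
        print(f"beta = {beta:.3f}; model average error = {mean_E(beta):.3f}")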

  9. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)

  10. A parametrization of two-dimensional turbulence based on a maximum entropy production principle with a local conservation of energy

    International Nuclear Information System (INIS)

    Chavanis, Pierre-Henri

    2014-01-01

    In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs, the energy, and that increases monotonically the mixing entropy (H-theorem). Furthermore, the equation for the coarse-grained vorticity dissipates monotonically all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion term and a drift term, has a nice physical interpretation in terms of a selective decay principle. This sheds new light on both the MEPP and the anticipated vorticity method. (paper)

  11. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Qing Ye

    2015-01-01

    Full Text Available This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on the paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method that is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches.
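
The grid search step for the decision threshold can be illustrated with a minimal sketch (hypothetical data, not the paper's implementation): scan candidate thresholds on a validation set containing both single and simultaneous failure modes and keep the one that maximizes the F1-measure.

```python
import numpy as np

def f1_for_threshold(probs, labels, thr):
    """Micro-averaged F1 when a failure mode is declared present if its
    classifier probability exceeds thr. probs, labels: (n_samples, n_modes)."""
    pred = probs >= thr
    tp = np.logical_and(pred, labels).sum()
    fp = np.logical_and(pred, ~labels).sum()
    fn = np.logical_and(~pred, labels).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

def grid_search_threshold(probs, labels, grid=np.linspace(0.05, 0.95, 91)):
    scores = [f1_for_threshold(probs, labels, t) for t in grid]
    return grid[int(np.argmax(scores))]

# hypothetical validation set with single and simultaneous failure modes
rng = np.random.default_rng(0)
labels = rng.random((200, 4)) < 0.3                      # true failure modes
probs = np.clip(labels + rng.normal(0, 0.35, labels.shape), 0, 1)
print(grid_search_threshold(probs, labels))
```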

  12. Polarised neutron diffraction measurements of PrBa2Cu3O6+X and Bayesian statistical analysis of such data

    International Nuclear Information System (INIS)

    Markvardsen, A.J.

    2000-01-01

    The physics of the series PryY1-yBa2Cu3O6+x, and the ability of Pr to suppress superconductivity, has been a subject of frequent discussion in the literature for more than a decade. This thesis describes a polarised neutron diffraction (PND) experiment performed on PrBa2Cu3O6.24 designed to find out something about the electron structure. This experiment pushed the limits of what can be done using the PND technique. The problem is one of a limited number of measured Fourier components that need to be inverted to form a real space image. To accomplish this inversion the maximum entropy technique has been employed. In some cases, the maximum entropy technique has the ability to increase the resolution of 'inverted' data immensely, but this ability is found to depend critically on the choice of constants used in the method. To investigate this, a Bayesian robustness analysis of the maximum entropy method is carried out, resulting in an improvement of the maximum entropy technique for analysing PND data. Some results for nickel in the literature have been re-analysed and a comparison is made with different maximum entropy algorithms. Equipped with an improved data analysis technique and carefully measured PND data for PrBa2Cu3O6.24, a number of new interesting features are observed, putting constraints on existing theoretical models of PryY1-yBa2Cu3O6+x and leaving room for more questions to be answered. (author)

  13. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    Science.gov (United States)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  14. Bayesian PET image reconstruction incorporating anato-functional joint entropy

    International Nuclear Information System (INIS)

    Tang Jing; Rahmim, Arman

    2009-01-01

    We developed a maximum a posteriori (MAP) method for positron emission tomography (PET) image reconstruction incorporating magnetic resonance (MR) image information, with the joint entropy between the PET and MR image features serving as the regularization constraint. A non-parametric method was used to estimate the joint probability density of the PET and MR images. Using realistically simulated PET and MR human brain phantoms, the quantitative performance of the proposed algorithm was investigated. Incorporation of the anatomic information via this technique, after parameter optimization, was seen to dramatically improve the noise versus bias tradeoff in every region of interest, compared to the result from using conventional MAP reconstruction. In particular, hot lesions in the FDG PET image, which had no anatomical correspondence in the MR image, also had an improved contrast versus noise tradeoff. Corrections were made to figures 3, 4 and 6, and to the second paragraph of section 3.1 on 13 November 2009. The corrected electronic version is identical to the print version.
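
A minimal sketch of the nonparametric joint entropy that drives such anato-functional priors, here estimated from a joint intensity histogram of two aligned images (the paper's exact estimator and optimization are not reproduced):

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """Nonparametric joint entropy H(A,B) of two aligned images,
    estimated from their normalized 2-D intensity histogram."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# toy aligned "PET" and "MR" images: coupled structures lower joint entropy
rng = np.random.default_rng(1)
mr = rng.random((128, 128))
pet = 0.8 * mr + 0.2 * rng.random((128, 128))   # hypothetical coupling
print(joint_entropy(pet, mr), joint_entropy(pet, rng.random((128, 128))))
```

In a MAP objective this term is added, suitably weighted, to the log-likelihood, so that PET solutions whose intensities co-cluster with the MR anatomy are favoured.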

  15. Combined analysis of steady state and transient transport by the maximum entropy method

    Energy Technology Data Exchange (ETDEWEB)

    Giannone, L.; Stroth, U; Koellermeyer, J [Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany); and others

    1996-04-01

    A new maximum entropy approach has been applied to analyse three types of transient transport experiments. For sawtooth propagation experiments in the ASDEX Upgrade and ECRH power modulation and power-switching experiments in the Wendelstein 7-AS Stellarator, either the time evolution of the temperature perturbation or the phase and amplitude of the modulated temperature perturbation are used as non-linear constraints to the {chi}{sub e} profile to be fitted. Simultaneously, the constraints given by the equilibrium temperature profile for steady-state power balance are fitted. In the maximum entropy formulation, the flattest {chi}{sub e} profile consistent with the constraints is found. It was found that {chi}{sub e} determined from sawtooth propagation was greater than the power balance value by a factor of five in the ASDEX Upgrade. From power modulation experiments, employing the measurements of four modulation frequencies simultaneously, the power deposition profile as well as the {chi}{sub e} profile could be determined. A comparison of the predictions of a time-independent {chi}{sub e} model and a power-dependent {chi}{sub e} model is made. The power-switching experiments show that the {chi}{sub e} profile must change within a millisecond to a new value consistent with the power balance value at the new input power. Neither power deposition broadening due to suprathermal electrons nor temperature or gradient dependences of {chi}{sub e} can explain this observation. (author).

  16. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    Science.gov (United States)

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for the discrimination of signals of different complexity. The first addresses the problem of setting entropy descriptors by varying the pattern size instead of the tolerance; this leads to a search for the optimal pattern size that maximizes the similarity entropy. The second paradigm is based on the n-order similarity entropy, which encompasses the 1-order similarity entropy. To improve statistical stability, n-order fuzzy similarity entropy is proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement are related to intrinsic features of the time series. Copyright © 2015 Elsevier Ltd. All rights reserved.
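
The following sketch shows one plausible reading of the n-order fuzzy similarity entropy, with an exponential membership exp(-(d/r)^n) over embedded vectors; parameter choices are illustrative, not those of the paper.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=1):
    """n-order fuzzy similarity entropy: exponential membership exp(-(d/r)^n)
    over Chebyshev distances of mean-removed embedding vectors."""
    x = np.asarray(x, dtype=float)
    r *= x.std()

    def phi(dim):
        N = len(x) - dim
        vecs = np.array([x[i:i + dim] for i in range(N)])
        vecs -= vecs.mean(axis=1, keepdims=True)        # local baseline removal
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=-1)
        sim = np.exp(-(d / r) ** n)
        return (sim.sum() - N) / (N * (N - 1))          # exclude self-matches

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(2)
print(fuzzy_entropy(rng.normal(size=500)))              # white noise: higher
print(fuzzy_entropy(np.cumsum(rng.normal(size=500))))   # Brownian path: lower
```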

  17. Small-signal analysis in high-energy physics: A Bayesian approach

    International Nuclear Information System (INIS)

    Prosper, H.B.

    1988-01-01

    The statistics of small signals masked by a background of imprecisely known magnitude is addressed from a Bayesian viewpoint using a simple statistical model which may be derived from the principle of maximum entropy. The issue of the correct assignment of prior probabilities is resolved by invoking an invariance principle proposed by Jaynes. We calculate the posterior probability and use it to calculate point estimates and upper limits for the magnitude of the signal. The results are applicable to high-energy physics experiments searching for new phenomena. We illustrate this by reanalyzing some published data from a few experiments
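
The essence of the approach can be sketched in a few lines (a simplified toy, assuming a known background and a flat signal prior rather than the Jaynes-invariance prior of the paper): the posterior for a non-negative Poisson signal is evaluated on a grid and inverted for an upper limit.

```python
import numpy as np
from scipy.stats import poisson

def signal_upper_limit(n_obs, b, cl=0.95, s_max=50.0, ns=20001):
    """Bayesian upper limit on a Poisson signal s >= 0 over a known
    background b, with a flat prior: p(s|n) ∝ Pois(n; s+b)."""
    s = np.linspace(0.0, s_max, ns)
    post = poisson.pmf(n_obs, s + b)        # likelihood * flat prior
    post /= np.trapz(post, s)               # normalize the posterior
    cdf = np.cumsum(post) * (s[1] - s[0])
    return s[np.searchsorted(cdf, cl)]

# hypothetical counts and background expectation
print(signal_upper_limit(n_obs=3, b=1.2))
```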

  18. Minimal entropy approximation for cellular automata

    International Nuclear Information System (INIS)

    Fukś, Henryk

    2014-01-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim. (paper)

  19. Entropy and equilibrium via games of complexity

    Science.gov (United States)

    Topsøe, Flemming

    2004-09-01

    It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.

  20. Bayesian Analysis of the Survival Function and Failure Rate of Weibull Distribution with Censored Data

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

    Full Text Available The survival function of the Weibull distribution determines the probability that a unit or an individual will survive beyond a certain specified time, while the failure rate is the rate at which a randomly selected individual known to be alive at time t will die in the interval (t, t + Δt). The classical approach for estimating the survival function and the failure rate is the maximum likelihood method. In this study, we strive to determine the best method, by comparing the classical maximum likelihood against the Bayesian estimators using an informative prior and a proposed data-dependent prior known as generalised noninformative prior. The Bayesian estimation is considered under three loss functions. Due to the complexity in dealing with the integrals using the Bayesian estimator, Lindley's approximation procedure is employed to evaluate the ratio of the integrals. For the purpose of comparison, the mean squared error (MSE) and the absolute bias are obtained. This study is conducted via simulation by utilising different sample sizes. We observed from the study that the generalised prior we assumed performed better than the others under the linear exponential loss function with respect to MSE and under the general entropy loss function with respect to absolute bias.
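
For orientation, a short sketch of the classical side of the comparison (simulated, uncensored data; the Bayesian estimators with Lindley's approximation are beyond a few lines): the Weibull survival and hazard functions with a maximum likelihood fit via scipy.

```python
import numpy as np
from scipy.stats import weibull_min

# Weibull survival and failure-rate (hazard) functions
def survival(t, k, lam):
    return np.exp(-(t / lam) ** k)

def hazard(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1)

# maximum likelihood fit on simulated failure times (shape k, scale lam)
data = weibull_min.rvs(1.8, scale=10.0, size=200, random_state=3)
k_hat, _, lam_hat = weibull_min.fit(data, floc=0)
print(k_hat, lam_hat, survival(12.0, k_hat, lam_hat))
```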

  1. Entropy and transverse section reconstruction

    International Nuclear Information System (INIS)

    Gullberg, G.T.

    1976-01-01

    A new approach to the reconstruction of a transverse section using projection data from multiple views incorporates the concept of maximum entropy. The principle of maximizing information entropy embodies the assurance of minimizing bias or prejudice in the reconstruction. Maximum entropy is a necessary condition for the reconstructed image. This entropy criterion is most appropriate for 3-D reconstruction of objects from projections where the system is underdetermined or the data are limited statistically. This is the case in nuclear medicine, where time limitations in patient studies do not yield sufficient projections

  2. Vertical and horizontal processes in the global atmosphere and the maximum entropy production conjecture

    Directory of Open Access Journals (Sweden)

    S. Pascale

    2012-01-01

    Full Text Available The objective of this paper is to reconsider the Maximum Entropy Production conjecture (MEP) in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due at the same time to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under conditions of fixed insolation, a MEP solution is found with reasonably realistic temperature and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and the vertical entropy production terms are independently involved in the maximisation and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution, and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large scale organisation of the climate is concerned, whereas the vertical structure appears unrealistic and presents seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided by using two different methods. In both cases we found that approximately 40 mW m−2 K−1 of material entropy production is due to vertical heat transport and 5–7 mW m−2 K−1 to horizontal heat transport.
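
The flavour of such box-model MEP calculations can be conveyed by a minimal two-box sketch (illustrative parameter values, not those of the paper): energy balance fixes the box temperatures for a given meridional heat flux F, and F is chosen to maximize the entropy production F(1/T2 - 1/T1).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# two-box climate model: absorbed solar I1 > I2, linearized OLR = A + B*T
I1, I2 = 300.0, 170.0          # W m^-2, low- and high-latitude boxes
A, B = -390.0, 2.17            # hypothetical OLR coefficients (T in kelvin)

def temperatures(F):
    T1 = (I1 - A - F) / B      # energy balance with meridional heat flux F
    T2 = (I2 - A + F) / B
    return T1, T2

def entropy_production(F):
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)

res = minimize_scalar(lambda F: -entropy_production(F),
                      bounds=(0.0, 60.0), method='bounded')
print(res.x, temperatures(res.x))   # MEP flux and the box temperatures
```

With these illustrative numbers the maximizing flux yields a material entropy production of order 10 mW m−2 K−1, the same order as the horizontal estimates quoted above.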

  3. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
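
A compact sketch of the ME/AR spectrum (standard Yule-Walker fitting with numpy; not the paper's code): the AR(p) coefficients determine the spectrum P(f) = s2 / |1 - sum_k a_k e^{-i2πfk}|^2, whose peak positions reflect the complex frequencies discussed above.

```python
import numpy as np

def yule_walker(x, order):
    """AR(p) coefficients a and innovation variance s2 via Yule-Walker."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, 'full')[len(x) - 1:] / len(x)   # autocovariance
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    s2 = r[0] - a @ r[1:order + 1]
    return a, s2

def me_spectrum(a, s2, freqs):
    """Maximum-entropy/AR spectrum P(f) = s2 / |1 - sum_k a_k e^{-i2πfk}|^2."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return s2 / denom

rng = np.random.default_rng(4)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.12 * t) + 0.5 * rng.normal(size=t.size)
a, s2 = yule_walker(x, order=8)
f = np.linspace(0, 0.5, 512)
print(f[np.argmax(me_spectrum(a, s2, f))])   # peak near 0.12, the true frequency
```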

  4. Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach

    International Nuclear Information System (INIS)

    Cabal, Gonzalo; Kluson, Jaroslav

    2008-01-01

    Full text: Bayesian inference is used to determine the Air Kerma Rate based on a set of in situ environmental gamma spectra measurements performed with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the Air Kerma Rate estimation but also for the gamma spectra, which are unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of such a detection system were calculated using a Monte Carlo code. For the calculations of the spectra as well as the Air Kerma Rate, the WinBugs program was used. WinBugs is a dedicated software for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of such calculations are shown and compared with other non-Bayesian approaches such as the Scofield-Gold iterative method and the Maximum Entropy Method

  5. Autonomous entropy-based intelligent experimental design

    Science.gov (United States)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a common goal.
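
The selection rule at the heart of the inquiry engine can be sketched simply (a toy one-parameter model, not the thesis code): for each candidate experiment, push the current posterior samples through the model and pick the experiment whose predicted-outcome distribution has maximum entropy.

```python
import numpy as np

def predictive_entropy(samples, bins=32):
    """Entropy of the predicted-outcome distribution, from posterior samples."""
    h, _ = np.histogram(samples, bins=bins)
    p = h[h > 0] / h.sum()
    return -np.sum(p * np.log(p))

def select_experiment(model, theta_samples, candidates):
    """Pick the candidate input x whose predicted outputs, over the current
    posterior samples of theta, have maximum entropy (max expected info gain)."""
    ents = [predictive_entropy(model(x, theta_samples)) for x in candidates]
    return candidates[int(np.argmax(ents))]

# toy model y = sin(a*x) with uncertain a; measure where predictions disagree most
model = lambda x, a: np.sin(a * x)
rng = np.random.default_rng(5)
theta = rng.uniform(0.5, 1.5, size=2000)     # posterior samples of parameter a
xs = np.linspace(0.0, 6.0, 61)               # candidate measurement locations
print(select_experiment(model, theta, xs))
```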

  6. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)

  7. Using Maximum Entropy to Find Patterns in Genomes

    Science.gov (United States)

    Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
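
A toy sketch of the idea (tiny illustrative codon table; the published method would solve for the multiplier that hits an exact GC target): with amino acid identity fixed, the maximum-entropy distribution over synonymous codons under a GC constraint is exponential in the codon's GC count.

```python
import numpy as np

# toy synonymous-codon table (a small subset; real tables have 61 sense codons)
CODONS = {'K': ['AAA', 'AAG'], 'F': ['TTT', 'TTC'],
          'G': ['GGT', 'GGC', 'GGA', 'GGG'], 'P': ['CCT', 'CCC', 'CCA', 'CCG']}

gc = lambda codon: sum(b in 'GC' for b in codon)

def sample_sequence(protein, lam, rng):
    """Sample codons from the MaxEnt distribution w(c) ∝ exp(lam * GC(c)),
    the least-biased choice given the amino acid sequence and a GC constraint."""
    seq = []
    for aa in protein:
        cods = CODONS[aa]
        w = np.exp(lam * np.array([gc(c) for c in cods]))
        seq.append(rng.choice(cods, p=w / w.sum()))
    return ''.join(seq)

rng = np.random.default_rng(6)
for lam in (-1.0, 0.0, 1.0):     # lam tunes GC; in practice it is solved for
    s = sample_sequence('KFGPKFGP', lam, rng)
    print(lam, s, sum(b in 'GC' for b in s) / len(s))
```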

  8. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  9. Electron density profile reconstruction by maximum entropy method with multichannel HCN laser interferometer system on SPAC VII

    International Nuclear Information System (INIS)

    Kubo, S.; Narihara, K.; Tomita, Y.; Hasegawa, M.; Tsuzuki, T.; Mohri, A.

    1988-01-01

    A multichannel HCN laser interferometer system has been developed to investigate the plasma electron confinement properties in SPAC VII device. Maximum entropy method is applied to reconstruct the electron density profile from measured line integrated data. Particle diffusion coefficient in the peripheral region of the REB ring core spherator was obtained from the evolution of the density profile. (author)

  10. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purpose, a randomly censored real data set is discussed.

  11. Exact Maximum-Entropy Estimation with Feynman Diagrams

    Science.gov (United States)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  12. Non-equilibrium thermodynamics, maximum entropy production and Earth-system evolution.

    Science.gov (United States)

    Kleidon, Axel

    2010-01-13

    The present-day atmosphere is in a unique state far from thermodynamic equilibrium. This uniqueness is for instance reflected in the high concentration of molecular oxygen and the low relative humidity in the atmosphere. Given that the concentration of atmospheric oxygen has likely increased throughout Earth-system history, we can ask whether this trend can be generalized to a trend of Earth-system evolution that is directed away from thermodynamic equilibrium, why we would expect such a trend to take place and what it would imply for Earth-system evolution as a whole. The justification for such a trend could be found in the proposed general principle of maximum entropy production (MEP), which states that non-equilibrium thermodynamic systems maintain steady states at which entropy production is maximized. Here, I justify and demonstrate this application of MEP to the Earth at the planetary scale. I first describe the non-equilibrium thermodynamic nature of Earth-system processes and distinguish processes that drive the system's state away from equilibrium from those that are directed towards equilibrium. I formulate the interactions among these processes from a thermodynamic perspective and then connect them to a holistic view of the planetary thermodynamic state of the Earth system. In conclusion, non-equilibrium thermodynamics and MEP have the potential to provide a simple and holistic theory of Earth-system functioning. This theory can be used to derive overall evolutionary trends of the Earth's past, identify the role that life plays in driving thermodynamic states far from equilibrium, identify habitability in other planetary environments and evaluate human impacts on Earth-system functioning. This journal is © 2010 The Royal Society

  13. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods ... of genotyping errors. A similar advantage of the Bayesian method was not observed for missing data. We also re-analyse a recently published set of data from the eggplant and show that the use of the MCMC-based method leads to smaller estimates of genetic distances ...

  14. Image Segmentation using a Refined Comprehensive Learning Particle Swarm Optimizer for Maximum Tsallis Entropy Thresholding

    OpenAIRE

    L. Jubair Ahmed; A. Ebenezer Jeyakumar

    2013-01-01

    Thresholding is one of the most important techniques for performing image segmentation. In this paper, to compute optimum thresholds for the Maximum Tsallis entropy thresholding (MTET) model, a new hybrid algorithm is proposed by integrating the Comprehensive Learning Particle Swarm Optimizer (CPSO) with Powell's Conjugate Gradient (PCG) method. Here the CPSO will act as the main optimizer for searching the near-optimal thresholds while the PCG method will be used to fine tune the best solution ...
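
For reference, the MTET objective itself can be evaluated exhaustively on an 8-bit histogram (a brute-force stand-in for the CPSO/PCG hybrid, with an illustrative q):

```python
import numpy as np

def tsallis_threshold(image, q=0.8, levels=256):
    """Exhaustive maximization of the Tsallis entropy criterion
    S_A + S_B + (1-q)*S_A*S_B over all gray-level thresholds (the CPSO/PCG
    hybrid in the paper searches this same objective more cheaply)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()

    def S(pp):                      # Tsallis q-entropy of a histogram segment
        w = pp.sum()
        if w == 0:
            return 0.0
        pn = pp[pp > 0] / w
        return (1.0 - np.sum(pn ** q)) / (q - 1.0)

    scores = [S(p[:t]) + S(p[t:]) + (1 - q) * S(p[:t]) * S(p[t:])
              for t in range(1, levels)]
    return 1 + int(np.argmax(scores))

rng = np.random.default_rng(7)
img = np.concatenate([rng.normal(60, 12, 3000), rng.normal(170, 15, 3000)])
img = np.clip(img, 0, 255)
print(tsallis_threshold(img))       # a threshold between the two intensity modes
```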

  15. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    International Nuclear Information System (INIS)

    Toogoshi, M; Kano, S S; Zempo, Y

    2015-01-01

    The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low-frequency part of a spectrum can be described from short time-series data. Thus, we applied MEM to analyse the spectrum from the time-dependent dipole moment obtained from real-time time-dependent density functional theory (TDDFT) calculations, which are intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data. As an improved MEM analysis, we propose using a concatenated data set made from several repetitions of the raw data. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, higher resolution can be obtained, closer to that of the Fourier transform of data actually time-evolved for the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)

  16. Fully probabilistic design of hierarchical Bayesian models

    Czech Academy of Sciences Publication Activity Database

    Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine

    2016-01-01

    Roč. 369, č. 1 (2016), s. 532-547 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Fully probabilistic design * Ideal distribution * Minimum cross-entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf

  17. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  18. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    Science.gov (United States)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  19. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) calculates the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
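
The regularized objective has a familiar closed form: with a zero-mean Gaussian prior over the weights, the MAP criterion is the cross-entropy error plus a weighted L2 penalty. A framework-free sketch (hypothetical shapes; the paper additionally learns the hyperparameter by evidence maximization):

```python
import numpy as np

def regularized_xent(logits, targets, params, alpha):
    """MAP objective for a Gaussian-prior network: cross-entropy error plus
    an L2 penalty, i.e. -log p(D|w) - log p(w) up to constants."""
    z = logits - logits.max(axis=1, keepdims=True)     # stable softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    xent = -logp[np.arange(len(targets)), targets].mean()
    l2 = 0.5 * alpha * sum(np.sum(w ** 2) for w in params)
    return xent + l2

rng = np.random.default_rng(8)
W = rng.normal(size=(16, 5))                           # hypothetical weights
x = rng.normal(size=(32, 16))                          # hypothetical features
y = rng.integers(0, 5, size=32)                        # hypothetical targets
print(regularized_xent(x @ W, y, [W], alpha=1e-2))
```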

  20. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

    Full Text Available In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and reliability functions under the General Entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the Squared Error loss function and with maximum likelihood estimators for their simulated risks (average loss over sample space).

  1. Maximum Entropy Production Is Not a Steady State Attractor for 2D Fluid Convection

    Directory of Open Access Journals (Sweden)

    Stuart Bartlett

    2016-12-01

    Full Text Available Multiple authors have claimed that the natural convection of a fluid is a process that exhibits maximum entropy production (MEP). However, almost all such investigations were limited to fixed temperature boundary conditions (BCs). It was found that under those conditions, the system tends to maximize its heat flux, and hence it was concluded that the MEP state is a dynamical attractor. However, since entropy production varies with heat flux and difference of inverse temperature, it is essential that any complete investigation of entropy production allows for variations in heat flux and temperature difference. Only then can we legitimately assess whether the MEP state is the most attractive. Our previous work made use of negative feedback BCs to explore this possibility. We found that the steady state of the system was far from the MEP state. For any system, entropy production can only be maximized subject to a finite set of physical and material constraints. In the case of our previous work, it was possible that the adopted set of fluid parameters were constraining the system in such a way that it was entirely prevented from reaching the MEP state. Hence, in the present work, we used a different set of boundary parameters, such that the steady states of the system were in the local vicinity of the MEP state. If MEP was indeed an attractor, relaxing those constraints of our previous work should have caused a discrete perturbation to the surface of steady state heat flux values near the value corresponding to MEP. We found no such perturbation, and hence no discernible attraction to the MEP state. Furthermore, systems with fixed flux BCs actually minimize their entropy production (relative to the alternative stable state, that of pure diffusive heat transport). This leads us to conclude that the principle of MEP is not an accurate indicator of which stable steady state a convective system will adopt. However, for all BCs considered, the quotient of

  2. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
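
The difference between the two decision rules comes down to one term: a quadratic Gaussian discriminant score with or without log class priors added. A small sketch (synthetic data; not the Landsat Mapping System code):

```python
import numpy as np

def gaussian_ml_classify(X, means, covs, priors=None):
    """Bayesian maximum likelihood classification: argmax_k of
    log N(x; mu_k, Sigma_k) + log prior_k (equal priors recover pure ML)."""
    n_classes = len(means)
    priors = np.full(n_classes, 1.0 / n_classes) if priors is None else priors
    scores = []
    for mu, S, pr in zip(means, covs, priors):
        Sinv = np.linalg.inv(S)
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, Sinv, d)    # Mahalanobis distances
        scores.append(-0.5 * (maha + np.log(np.linalg.det(S))) + np.log(pr))
    return np.argmax(np.array(scores), axis=0)

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
means = [np.zeros(2), np.full(2, 2.0)]
covs = [np.eye(2), np.eye(2)]
print(gaussian_ml_classify(X, means, covs, priors=[0.7, 0.3])[:10])
```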

  3. The Maximum Entropy Production Principle: Its Theoretical Foundations and Applications to the Earth System

    Directory of Open Access Journals (Sweden)

    Axel Kleidon

    2010-03-01

    Full Text Available The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the already established framework of non-equilibrium thermodynamics, with the assumption of local thermodynamic equilibrium at the appropriate scales.

  4. Bayesian, Maximum Parsimony and UPGMA Models for Inferring the Phylogenies of Antelopes Using Mitochondrial Markers

    OpenAIRE

    Khan, Haseeb A.; Arif, Ibrahim A.; Bahkali, Ali H.; Al Farhan, Ahmad H.; Al Homaidan, Ali A.

    2008-01-01

    This investigation was aimed at comparing the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models including Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to B...

  5. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    International Nuclear Information System (INIS)

    Urniezius, Renaldas

    2011-01-01

    The principle of maximum relative entropy optimization was analyzed for dead reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time-series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.

  6. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    International Nuclear Information System (INIS)

    Bastiens, K.; Lemahieu, I.

    1994-01-01

    The application of a maximum entropy reconstruction algorithm to PET images requires a lot of computing resources. A parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a non trivial task, needing specialized people. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm. This programming environment allows less experienced programmers to use the performance of multiprocessor systems. (authors)

  7. Nuclear Enhanced X-ray Maximum Entropy Method Used to Analyze Local Distortions in Simple Structures

    DEFF Research Database (Denmark)

    Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens

    We introduce a novel method for reconstructing pseudo nuclear density distributions (NDDs): the Nuclear Enhanced X-ray Maximum Entropy Method (NEXMEM). NEXMEM offers an alternative route to experimental NDDs, exploiting the superior quality of synchrotron X-ray data compared to neutron data. The method ... proposed to result from anharmonic phonon scattering or from local fluctuating dipoles on the Pb site.[1,2] No macroscopic symmetry changes are associated with these effects, rendering them invisible to conventional crystallographic techniques. For this reason PbX was until recently believed to adopt

  8. Bayesian structural inference for hidden processes

    Science.gov (United States)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  9. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    Energy Technology Data Exchange (ETDEWEB)

    Bastiens, K; Lemahieu, I [University of Ghent - ELIS Department, St. Pietersnieuwstraat 41, B-9000 Ghent (Belgium)

    1994-12-31

    The application of a maximum entropy reconstruction algorithm to PET images requires a lot of computing resources. A parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a non trivial task, needing specialized people. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm. This programming environment allows less experienced programmers to use the performance of multiprocessor systems. (authors). 8 refs, 3 figs, 1 tab.

  10. The inverse Fourier problem in the case of poor resolution in one given direction: the maximum-entropy solution

    International Nuclear Information System (INIS)

    Papoular, R.J.; Zheludev, A.; Ressouche, E.; Schweizer, J.

    1995-01-01

    When density distributions in crystals are reconstructed from 3D diffraction data, a problem sometimes occurs when the spatial resolution in one given direction is very low compared to that in the perpendicular directions. In this case, a 2D projected density is usually reconstructed. For this task, the conventional Fourier inversion method only makes use of those structure factors measured in the projection plane. All the other structure factors contribute zero to the reconstruction of a projected density. On the contrary, the maximum-entropy method uses all the 3D data, to yield 3D-enhanced 2D projected density maps. It is even possible to reconstruct a projection in the extreme case when not a single structure factor in the plane of projection is known. In the case of poor resolution along one given direction, a Fourier inversion reconstruction gives very low quality 3D densities 'smeared' in the third dimension. The application of the maximum-entropy procedure reduces the smearing significantly, and reasonably well resolved projections along most directions can now be obtained from the MaxEnt 3D density. To illustrate these two ideas, particular examples based on real polarized neutron diffraction data sets are presented. (orig.)

  11. Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods

    International Nuclear Information System (INIS)

    Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias

    2007-01-01

    We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modelling different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information

  12. Entropy concentration and the empirical coding game

    NARCIS (Netherlands)

    Grünwald, P.D.

    2008-01-01

    We give a characterization of maximum entropy/minimum relative entropy inference by providing two 'strong entropy concentration' theorems. These theorems unify and generalize Jaynes' 'concentration phenomenon' and Van Campenhout and Cover's 'conditional limit theorem'. The theorems characterize

  13. On the Five-Moment Hamburger Maximum Entropy Reconstruction

    Science.gov (United States)

    Summy, D. P.; Pullin, D. I.

    2018-05-01

    We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ_3, μ_4) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ_3, μ_4) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.

  14. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    Science.gov (United States)

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.

  15. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    Science.gov (United States)

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  16. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shuanghui Zhang

    2016-04-01

    Full Text Available This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  17. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  18. Entropy Bounds for Constrained Two-Dimensional Fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Justesen, Jørn

    1999-01-01

    The maximum entropy and thereby the capacity of 2-D fields given by certain constraints on configurations are considered. Upper and lower bounds are derived.

  19. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    Science.gov (United States)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective implementation of tracking multiple targets in a cluttered environment, we propose a multiple targets tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association that combines fuzzy c-means clustering and the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a target originates from a given measurement. The membership value is obtained through the fuzzy c-means clustering objective function optimized by the maximum entropy principle. When considering the effect of shared measurements, we use a correction factor to adjust the association probability matrix to estimate the state of the target. As this algorithm avoids confirmation matrix splitting, it can solve the high computational load problem of the joint PDA algorithm. The results of simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
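
    A minimal sketch of the clustering step the record describes, under our assumptions: entropy regularization of the fuzzy c-means objective turns the membership update into a softmax over negative squared distances, with the entropy weight lam acting as a temperature. This shows only the membership/center iteration, not the full JPDA tracker.

    import numpy as np

    def max_entropy_fcm(X, centers, lam=1.0, n_iter=20):
        # X: (n, d) measurements; centers: (c, d) initial cluster centers
        C = centers.copy()
        for _ in range(n_iter):
            d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (n, c)
            U = np.exp(-d2 / lam)                 # maximum-entropy memberships
            U /= U.sum(axis=1, keepdims=True)     # rows sum to one
            C = (U.T @ X) / U.sum(axis=0)[:, None]  # weighted center update
        return U, C

    In a tracking context, row i of U plays the role of the association probabilities of measurement i to the candidate targets.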

  20. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production

    Science.gov (United States)

    Kleidon, A.

    2010-01-01

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion. PMID:20368248

  1. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production.

    Science.gov (United States)

    Kleidon, A

    2010-05-12

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.

  2. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
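
    As a hedged illustration of the principle MAXED implements (not the published code itself): the entropy-maximizing spectrum takes the exponential form f = f0 * exp(-R^T lam), with default spectrum f0 and multipliers lam obtained by optimizing a dual function of the measured counts. The dual below is a simplified Gaussian-constraint variant, and all names are ours; the exact MAXED potential differs (see the user's guide).

    import numpy as np
    from scipy.optimize import minimize

    def maxent_unfold(R, d, f0, sigma):
        # R: (m, n) response matrix; d: (m,) measured counts;
        # f0: (n,) default spectrum; sigma: (m,) count uncertainties
        def dual(lam):
            f = f0 * np.exp(-R.T @ lam)           # entropy-maximizing form
            # simplified dual: fit the counts to within their uncertainties
            return f.sum() + lam @ d + 0.5 * ((lam * sigma) ** 2).sum()
        lam = minimize(dual, np.zeros(len(d)), method="BFGS").x
        return f0 * np.exp(-R.T @ lam)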

  3. A Bayesian analysis of the nucleon QCD sum rules

    International Nuclear Information System (INIS)

    Ohtani, Keisuke; Gubler, Philipp; Oka, Makoto

    2011-01-01

    QCD sum rules of the nucleon channel are reanalyzed, using the maximum-entropy method (MEM). This new approach, based on the Bayesian probability theory, does not restrict the spectral function to the usual ''pole + continuum'' form, allowing a more flexible investigation of the nucleon spectral function. Making use of this flexibility, we are able to investigate the spectral functions of various interpolating fields, finding that the nucleon ground state mainly couples to an operator containing a scalar diquark. Moreover, we formulate the Gaussian sum rule for the nucleon channel and find that it is more suitable for the MEM analysis to extract the nucleon pole in the region of its experimental value, while the Borel sum rule does not contain enough information to clearly separate the nucleon pole from the continuum. (orig.)

  4. A Maximum Entropy Approach to Assess Debonding in Honeycomb aluminum Plates

    Directory of Open Access Journals (Sweden)

    Viviana Meruane

    2014-05-01

    Full Text Available Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data are processed in a time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios.

  5. LQG and maximum entropy control design for the Hubble Space Telescope

    Science.gov (United States)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    Solar array vibrations are responsible for serious pointing control problems on the Hubble Space Telescope (HST). The original HST control law was not designed to attenuate these disturbances because they were not perceived to be a problem prior to launch. However, significant solar array vibrations do occur due to large changes in the thermal environment as the HST orbits the earth. Using classical techniques, Marshall Space Flight Center in conjunction with Lockheed Missiles and Space Company developed modified HST controllers that were able to suppress the influence of the vibrations of the solar arrays on the line-of-sight (LOS) performance. Substantial LOS improvement was observed when two of these controllers were implemented on orbit. This paper describes the development of modified HST controllers by using modern control techniques, particularly linear-quadratic-Gaussian (LQG) design and Maximum Entropy robust control design, a generalization of LQG that incorporates robustness constraints with respect to modal errors. The fundamental issues are discussed candidly and controllers designed using these modern techniques are described.

  6. Lattice NRQCD study on in-medium bottomonium spectra using a novel Bayesian reconstruction approach

    Science.gov (United States)

    Kim, Seyong; Petreczky, Peter; Rothkopf, Alexander

    2016-01-01

    We present recent results on the in-medium modification of S- and P-wave bottomonium states around the deconfinement transition. Our study uses lattice QCD with Nf = 2 + 1 light quark flavors to describe the non-perturbative thermal QCD medium between 140 MeV < T < 249 MeV. The bottomonium spectral functions are reconstructed with a novel Bayesian prescription, which provides higher accuracy than the Maximum Entropy Method. Based on a systematic comparison of interacting and free spectral functions we conclude that the ground states of both the S-wave (ϒ) and P-wave (χb1) channel survive up to T = 249 MeV. Stringent upper limits on the size of the in-medium modification of bottomonium masses and widths are provided.

  7. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

    Full Text Available The Weibull distribution has been observed as one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Studies have been carried out extensively in the literature to determine the best method of estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach for parameter estimation, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely, the linear exponential (LINEX) loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared using the mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β for the given values of the extension of Jeffreys' prior.
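
    For orientation (a standard result, not taken from this record): the Bayes estimators under the three losses above have closed forms in terms of posterior expectations, with a and k the loss-shape parameters in our notation,

    \hat{\theta}_{SE} = E[\theta \mid x], \qquad
    \hat{\theta}_{LINEX} = -\frac{1}{a} \ln E\left[ e^{-a\theta} \mid x \right], \qquad
    \hat{\theta}_{GE} = \left( E[\theta^{-k} \mid x] \right)^{-1/k}.

    Comparisons of the kind described in the record amount to evaluating these expectations under the posterior induced by the extended Jeffreys prior.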

  8. Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed

    Directory of Open Access Journals (Sweden)

    J. Pitchford

    2015-03-01

    Full Text Available We used maximum entropy to model streambank erosion potential (SEP in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs, and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.

  9. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    Science.gov (United States)

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.

  10. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    Science.gov (United States)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing the smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must include an exponential cutoff, a point that may have been ignored in previous studies.
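
    The reported distribution form follows from a short maximum-entropy calculation; a sketch assuming the constraints are the mean and the mean logarithm of the entry time interval t (our reading of the record):

    \max_{p} \; -\int_0^\infty p(t) \ln p(t)\, dt
    \quad \text{s.t.} \quad \int_0^\infty p\, dt = 1, \;\; \langle t \rangle = c_1, \;\; \langle \ln t \rangle = c_2,

    whose Lagrangian solution is

    p(t) = \frac{1}{Z} e^{-\lambda_1 t - \lambda_2 \ln t} = \frac{1}{Z}\, t^{-\lambda_2} e^{-\lambda_1 t},

    i.e. a power law with exponential cutoff, matching the reported form.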

  11. Imaging VLBI polarimetry data from Active Galactic Nuclei using the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Coughlan Colm P.

    2013-12-01

    Full Text Available Mapping the relativistic jets emanating from AGN requires the use of a deconvolution algorithm to account for the effects of missing baseline spacings. The CLEAN algorithm is the most commonly used algorithm in VLBI imaging today and is suitable for imaging polarisation data. The Maximum Entropy Method (MEM) is presented as an alternative with some advantages over the CLEAN algorithm, including better spatial resolution and a more rigorous and unbiased approach to deconvolution. We have developed a MEM code suitable for deconvolving VLBI polarisation data. Monte Carlo simulations investigating the performance of CLEAN and the MEM code on a variety of source types are being carried out. Real polarisation (VLBA) data taken at multiple wavelengths have also been deconvolved using MEM, and several of the resulting polarisation and Faraday rotation maps are presented and discussed.

  12. Application of the maximum entropy method to profile analysis

    International Nuclear Information System (INIS)

    Armstrong, N.; Kalceff, W.; Cline, J.P.

    1999-01-01

    Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc

  13. Deconstructing Cross-Entropy for Probabilistic Binary Classifiers

    Directory of Open Access Journals (Sweden)

    Daniel Ramos

    2018-03-01

    Full Text Available In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze its motivation, meaning and interpretation from an information-theoretical point of view. In this sense, this article presents several contributions: First, we explicitly analyze the contribution to cross-entropy of (i) prior knowledge and (ii) the value of the features in the form of a likelihood ratio. Second, we introduce a decomposition of cross-entropy into two components: discrimination and calibration. This decomposition enables the measurement of different performance aspects of a classifier in a more precise way, and justifies previously reported strategies to obtain reliable probabilities by means of the calibration of the output of a discriminating classifier. Third, we give different information-theoretical interpretations of cross-entropy, which can be useful in different application scenarios, and which are related to the concept of reference probabilities. Fourth, we present an analysis tool, the Empirical Cross-Entropy (ECE) plot, a compact representation of cross-entropy and its aforementioned decomposition. We show the power of ECE plots, as compared to other classical performance representations, in two diverse experimental examples: a speaker verification system, and a forensic case where some glass findings are present.
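
    A minimal sketch of the record's first point, under our assumptions: for a binary classifier the posterior log-odds decompose into prior log-odds plus the log-likelihood ratio, and cross-entropy evaluates the resulting posteriors. (The discrimination/calibration decomposition in the paper requires an additional calibration step, omitted here.)

    import numpy as np

    def cross_entropy(llr, labels, prior=0.5):
        # llr: (n,) log-likelihood ratios; labels: (n,) 0/1; prior: P(class 1)
        log_odds = llr + np.log(prior / (1 - prior))   # posterior log-odds
        p = 1.0 / (1.0 + np.exp(-log_odds))            # posterior P(class 1)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

    Sweeping the prior and plotting the resulting cross-entropy is, in essence, what an ECE plot displays.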

  14. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    Science.gov (United States)

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast observed on the extrasolar planet HD 189733b.
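
    A toy version of the two-box MEP calculation is easy to reproduce; in this Python sketch the absorbed-shortwave values are illustrative placeholders, not the planetary numbers used in the paper.

    import numpy as np

    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    A1, A2 = 300.0, 150.0    # absorbed shortwave per box, W m^-2 (assumed)

    def temperatures(F):
        # steady state: emitted longwave balances absorbed plus transported heat
        T1 = ((A1 - F) / SIGMA) ** 0.25    # warm (equator) box, cooled by F
        T2 = ((A2 + F) / SIGMA) ** 0.25    # cold (pole) box, warmed by F
        return T1, T2

    def entropy_production(F):
        T1, T2 = temperatures(F)
        return F * (1.0 / T2 - 1.0 / T1)   # entropy produced by the transport F

    # MEP selects the interior maximum of entropy production over F
    F_grid = np.linspace(0.0, 140.0, 2000)
    F_mep = F_grid[np.argmax([entropy_production(F) for F in F_grid])]
    print(F_mep, temperatures(F_mep))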

  15. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Science.gov (United States)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ˜ 21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were

  16. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    Science.gov (United States)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  17. Phylogenetic systematics and biogeography of hummingbirds: Bayesian and maximum likelihood analyses of partitioned data and selection of an appropriate partitioning strategy.

    Science.gov (United States)

    McGuire, Jimmy A; Witt, Christopher C; Altshuler, Douglas L; Remsen, J V

    2007-10-01

    Hummingbirds are an important model system in avian biology, but to date the group has been the subject of remarkably few phylogenetic investigations. Here we present partitioned Bayesian and maximum likelihood phylogenetic analyses for 151 of approximately 330 species of hummingbirds and 12 outgroup taxa based on two protein-coding mitochondrial genes (ND2 and ND4), flanking tRNAs, and two nuclear introns (AK1 and BFib). We analyzed these data under several partitioning strategies ranging between unpartitioned and a maximum of nine partitions. In order to select a statistically justified partitioning strategy following partitioned Bayesian analysis, we considered four alternative criteria including Bayes factors, modified versions of the Akaike information criterion for small sample sizes (AICc), the Bayesian information criterion (BIC), and a decision-theoretic methodology (DT). Following partitioned maximum likelihood analyses, we selected a best-fitting strategy using hierarchical likelihood ratio tests (hLRTs), the conventional AICc, BIC, and DT, concluding that the most stringent criterion, the performance-based DT, was the most appropriate methodology for selecting amongst partitioning strategies. In the context of our well-resolved and well-supported phylogenetic estimate, we consider the historical biogeography of hummingbirds using ancestral state reconstructions of (1) primary geographic region of occurrence (i.e., South America, Central America, North America, Greater Antilles, Lesser Antilles), (2) Andean or non-Andean geographic distribution, and (3) minimum elevational occurrence. These analyses indicate that the basal hummingbird assemblages originated in the lowlands of South America, that most of the principal clades of hummingbirds (all but Mountain Gems and possibly Bees) originated on this continent, and that there have been many (at least 30) independent invasions of other primary landmasses, especially Central America.

  18. Applications of Entropy in Finance: A Review

    Directory of Open Access Journals (Sweden)

    Guanqun Tong

    2013-11-01

    Full Text Available Although the concept of entropy originated in thermodynamics, its concepts and relevant principles, especially the principles of maximum entropy and minimum cross-entropy, have been extensively applied in finance. In this paper, we review the concepts and principles of entropy, as well as their applications in the field of finance, especially in portfolio selection and asset pricing. Furthermore, we review the effects of the applications of entropy and compare them with other traditional and new methods.

  19. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    Directory of Open Access Journals (Sweden)

    Lorenzo Asti

    2016-04-01

    Full Text Available The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value < 10^-6), outperforming other sequence- and structure-based models.

  20. Entropic Inference

    Science.gov (United States)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
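
    As a sketch of the unification described here, the update from a prior q to a posterior p maximizes the logarithmic relative entropy

    S[p \mid q] = -\int dx\; p(x) \ln \frac{p(x)}{q(x)},

    subject to the constraints at hand: a moment constraint reproduces MaxEnt, while constraining a joint distribution p(x, \theta) to agree with observed data x' reproduces Bayesian conditionalisation.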

  1. Applicability of the minimum entropy generation method for optimizing thermodynamic cycles

    Institute of Scientific and Technical Information of China (English)

    Cheng Xue-Tao; Liang Xin-Gang

    2013-01-01

    Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.

  2. Applicability of the minimum entropy generation method for optimizing thermodynamic cycles

    International Nuclear Information System (INIS)

    Cheng Xue-Tao; Liang Xin-Gang

    2013-01-01

    Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed. (general)

  3. Network Inference and Maximum Entropy Estimation on Information Diagrams

    Czech Academy of Sciences Publication Activity Database

    Martin, E.A.; Hlinka, J.; Meinke, A.; Děchtěrenko, Filip; Tintěra, J.; Oliver, I.; Davidsen, J.

    2017-01-01

    Vol. 7, No. 1 (2017), pp. 1-15, article number 7062. ISSN 2045-2322. R&D Projects: GA ČR GA13-23940S. Institutional support: RVO:68081740. Keywords: complex networks * mutual information * entropy maximization * fMRI. Subject RIV: AN - Psychology. OECD field: Cognitive sciences. Impact factor: 4.259, year: 2016

  4. Studies of the pressure dependence of the charge density distribution in cerium phosphide by the maximum-entropy method

    CERN Document Server

    Ishimatsu, N; Takata, M; Nishibori, E; Sakata, M; Hayashi, J; Shirotani, I; Shimomura, O

    2002-01-01

    The physical properties relating to 4f electrons in cerium phosphide, especially the temperature dependence and the isomorphous transition that occurs at around 10 GPa, were studied by means of x-ray powder diffraction and charge density distribution maps derived by the maximum-entropy method. The compressibility of CeP was exactly determined using a helium pressure medium and the anomaly that indicated the isomorphous transition was observed in the compressibility. We also discuss the anisotropic charge density distribution of Ce ions and its temperature dependence.

  5. Bayesian, maximum parsimony and UPGMA models for inferring the phylogenies of antelopes using mitochondrial markers.

    Science.gov (United States)

    Khan, Haseeb A; Arif, Ibrahim A; Bahkali, Ali H; Al Farhan, Ahmad H; Al Homaidan, Ali A

    2008-10-06

    This investigation was aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models including Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to BA, MP and UPGMA models for comparing the topologies of the respective phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%) followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA, as compared to cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions), when comparing the four taxa. All the three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical (Oryx dammah grouped with Oryx leucoryx) to Bayesian trees except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with a low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.

  6. Application of the maximum entropy method to dynamical fermion simulations

    Science.gov (United States)

    Clowser, Jonathan

    This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are that (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric phase resonances are presented. UKQCD Nf = 2 dynamical QCD data are also studied with the MEM. Results are compared to those found from the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial spatial and scalar channels as well as the pseudoscalar, vector and axial temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given, which is in agreement with the experimental value of M_a0 = 985 MeV.

  7. Forest Tree Species Distribution Mapping Using Landsat Satellite Imagery and Topographic Variables with the Maximum Entropy Method in Mongolia

    Science.gov (United States)

    Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn

    2016-06-01

    Forest is a very important ecosystem and natural resource for living things. Based on forest inventories, government is able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially surveying in remote mountainous regions. A reliable forest inventory can give us more accurate and timely information to develop new and efficient approaches of forest management. Remote sensing technology has recently been used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species, are unavoidably required to be considered. In this study the aim is to classify forest tree species in Erdenebulgan County, Huwsgul province in Mongolia, using the Maximum Entropy method. The study area is covered by a dense forest which is almost 70% of the total territorial extension of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth to perform the accuracy assessment of the tree species classification. Landsat images and DEM were processed for maximum entropy modeling, and this study applied the model with two experiments. The first one is to use Landsat surface reflectance for tree species classification; and the second experiment incorporates terrain variables in addition to the Landsat surface reflectance to perform the tree species classification. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second one which uses Landsat surface reflectance coupled

  8. FOREST TREE SPECIES DISTRIBUTION MAPPING USING LANDSAT SATELLITE IMAGERY AND TOPOGRAPHIC VARIABLES WITH THE MAXIMUM ENTROPY METHOD IN MONGOLIA

    Directory of Open Access Journals (Sweden)

    S. H. Chiang

    2016-06-01

    Full Text Available Forest is a very important ecosystem and natural resource for living things. Based on forest inventories, government is able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially surveying in remote mountainous regions. A reliable forest inventory can give us more accurate and timely information to develop new and efficient approaches of forest management. Remote sensing technology has recently been used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species, are unavoidably required to be considered. In this study the aim is to classify forest tree species in Erdenebulgan County, Huwsgul province in Mongolia, using the Maximum Entropy method. The study area is covered by a dense forest which is almost 70% of the total territorial extension of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth to perform the accuracy assessment of the tree species classification. Landsat images and DEM were processed for maximum entropy modeling, and this study applied the model with two experiments. The first one is to use Landsat surface reflectance for tree species classification; and the second experiment incorporates terrain variables in addition to the Landsat surface reflectance to perform the tree species classification. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second one which uses Landsat surface

  9. Least squares autoregressive (maximum entropy) spectral estimation for Fourier spectroscopy and its application to the electron cyclotron emission from plasma

    International Nuclear Information System (INIS)

    Iwama, N.; Inoue, A.; Tsukishima, T.; Sato, M.; Kawahata, K.

    1981-07-01

    A new procedure for maximum entropy spectral estimation is studied for the purpose of data processing in Fourier transform spectroscopy. The autoregressive model fitting is examined under a least squares criterion based on the Yule-Walker equations. An AIC-like criterion is suggested for selecting the model order. The principal advantage of the new procedure lies in the enhanced frequency resolution, particularly for small values of the maximum optical path difference of the interferogram. The usefulness of the procedure is ascertained by some numerical simulations and further by experiments with respect to a highly coherent submillimeter wave and the electron cyclotron emission from a stellarator plasma. (author)
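
    A compact Python sketch of the core of such a procedure, under our assumptions (biased autocorrelation estimate, fixed model order; the paper's specific least-squares variant and its AIC-like order selection are omitted):

    import numpy as np

    def autocorr(x, maxlag):
        x = x - x.mean()
        r = np.correlate(x, x, mode="full")[len(x) - 1:]
        return r[:maxlag + 1] / len(x)          # lags 0..maxlag

    def mem_spectrum(x, order, nfreq=512):
        r = autocorr(x, order)
        # Yule-Walker system R a = r[1:], solved in a least-squares sense
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a, *_ = np.linalg.lstsq(R, r[1:order + 1], rcond=None)
        sigma2 = r[0] - a @ r[1:order + 1]      # residual (prediction) power
        f = np.linspace(0, 0.5, nfreq)          # normalized frequency
        z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
        return f, sigma2 / np.abs(1 - z @ a) ** 2   # all-pole (MEM) spectrum

    The all-pole form of the last line is what gives the maximum entropy method its enhanced frequency resolution relative to the windowed periodogram.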

  10. Entropy: From Thermodynamics to Hydrology

    Directory of Open Access Journals (Sweden)

    Demetris Koutsoyiannis

    2014-02-01

    Full Text Available Some known results from statistical thermophysics as well as from hydrology are revisited from a different perspective trying: (a to unify the notion of entropy in thermodynamic and statistical/stochastic approaches of complex hydrological systems and (b to show the power of entropy and the principle of maximum entropy in inference, both deductive and inductive. The capability for deductive reasoning is illustrated by deriving the law of phase change transition of water (Clausius-Clapeyron from scratch by maximizing entropy in a formal probabilistic frame. However, such deductive reasoning cannot work in more complex hydrological systems with diverse elements, yet the entropy maximization framework can help in inductive inference, necessarily based on data. Several examples of this type are provided in an attempt to link statistical thermophysics with hydrology with a unifying view of entropy.

  11. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    International Nuclear Information System (INIS)

    Li, Dong; Svensson, J.; Thomsen, H.; Werner, A.; Wolf, R.; Medina, F.

    2013-01-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form, which enhances the capability of uncertainty analysis; as a consequence, scientists who are concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods

  12. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    Science.gov (United States)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form, which enhances the capability of uncertainty analysis; as a consequence, scientists who are concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
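
    Both records rest on the same analytic step; a minimal sketch assuming a linear forward model d = G e + n with noise covariance N and a (possibly non-stationary) GP prior covariance K over the emissivity e:

    import numpy as np

    def gp_posterior(G, K, N, d):
        # Posterior mean/covariance of emissivity e given line integrals d,
        # for the linear-Gaussian model d = G e + n with zero-mean GP prior.
        S = G @ K @ G.T + N                    # marginal covariance of the data
        W = np.linalg.solve(S, G @ K).T        # = K G^T S^{-1} (S, K symmetric)
        mean = W @ d
        cov = K - W @ (G @ K)
        return mean, cov

    The non-stationarity enters only through how K is built; the inversion itself stays a closed-form Gaussian update, which is why these reconstructions are fast.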

  13. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  14. A Bayesian approach to extracting meaning from system behavior

    Energy Technology Data Exchange (ETDEWEB)

    Dress, W.B.

    1998-08-01

    The modeling relation and its reformulation to include the semiotic hierarchy is essential for the understanding, control, and successful re-creation of natural systems. This presentation will argue for a careful application of Rosen's modeling relationship to the problems of intelligence and autonomy in natural and artificial systems. To this end, the authors discuss the essential need for a correct theory of induction, learning, and probability; and suggest that modern Bayesian probability theory, developed by Cox, Jaynes, and others, can adequately meet such demands, especially on the operational level of extracting meaning from observations. The methods of Bayesian and maximum-entropy parameter estimation have been applied to measurements of system observables to directly infer the underlying differential equations generating system behavior. This approach by-passes the usual method of parameter estimation based on assuming a functional form for the observable and then estimating the parameters that would lead to the particular observed behavior. The computational savings are great since only location parameters enter into the maximum-entropy calculations; this innovation finesses the need for nonlinear parameters altogether. Such an approach more directly extracts the semantics inherent in a given system by going to the root of system meaning as expressed by abstract form or shape, rather than in syntactic particulars, such as signal amplitude and phase. Examples will be shown how the form of a system can be followed while ignoring unnecessary details. In this sense, the authors are observing the meaning of the words rather than being concerned with their particular expression or language. For the present discussion, empirical models are embodied by the differential equations underlying, producing, or describing the behavior of a process as measured or tracked by a particular variable set--the observables. The a priori models are probability structures that

  15. Identification of a Threshold Value for the DEMATEL Method: Using the Maximum Mean De-Entropy Algorithm

    Science.gov (United States)

    Chung-Wei, Li; Gwo-Hshiung, Tzeng

    To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases to find the interrelationships between the criteria for evaluating effects in E-learning programs as examples, we will compare the results obtained from the respondents and from our method, and discuss the different impact-relations maps produced by these two approaches.

  16. Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states

    CERN Document Server

    Dewar, R

    2003-01-01

    Jaynes' information theory formalism of statistical mechanics is applied to the stationary states of open, non-equilibrium systems. First, it is shown that the probability distribution p_Γ of the underlying microscopic phase space trajectories Γ over a time interval of length τ satisfies p_Γ ∝ exp(τσ_Γ/2k_B), where σ_Γ is the time-averaged rate of entropy production of Γ. Three consequences of this result are then derived: (1) the fluctuation theorem, which describes the exponentially declining probability of deviations from the second law of thermodynamics as τ → ∞; (2) the selection principle of maximum entropy production for non-equilibrium stationary states, empirical support for which has been found in studies of phenomena as diverse as the Earth's climate and crystal growth morphology; and (3) the emergence of self-organized criticality for flux-driven systems in the slowly-driven limit. The explanation of these results on general inf...

  17. Bayesian estimation and entropy for economic dynamic stochastic models: An exploration of overconsumption

    International Nuclear Information System (INIS)

    Argentiero, Amedeo; Bovi, Maurizio; Cerqueti, Roy

    2016-01-01

    This paper examines psycho-induced overconsumption in a dynamic stochastic context. As emphasized by well-established psychological results, these psycho-distortions derive from decision making based on simple rules-of-thumb, not on analytically sound optimizations. To this end, we compare two New Keynesian models. The first is populated by optimizing Muth-rational agents and acts as the normative benchmark. The other is a "psycho-perturbed" version of the benchmark that allows for the potential presence of overoptimism and, hence, of overconsumption. The parameters of these models are estimated through a Bayesian-type procedure, and performances are evaluated by employing an entropy measure. Such methodologies are particularly appropriate here since they take into full consideration the complexity generated by the randomness of the considered systems. In particular, they make it possible to derive non-negligible information on the size and the cyclical properties of the biases. In line with cognitive psychology suggestions, our evidence shows that the overoptimism/overconsumption is: widespread, being detected in nation-wide data; persistent, emerging in full-sample estimations; and moving according to the expected cyclical behavior, larger in booms and disappearing in crises. Moreover, by taking into account the effect of these psycho-biases, the model fits actual data better than the benchmark. All considered, then, our findings enhance the existing literature in two ways: i) they sustain the importance of inserting psychological distortions in macroeconomic models and ii) they underline that system dynamics and psycho-biases have statistically significant and economically important connections.

  18. Analysis of QCD sum rule based on the maximum entropy method

    International Nuclear Information System (INIS)

    Gubler, Philipp

    2012-01-01

    The QCD sum rule was developed about thirty years ago and has been used up to the present to calculate various physical quantities of hadrons. Conventional analyses, however, have had to assume a 'pole + continuum' form for the spectral function. Application of the method therefore ran into difficulties when this assumption is not satisfied. In order to avoid this difficulty, an analysis that makes use of the maximum entropy method (MEM) has been developed by the present author. It is reported here how far this new method can be successfully applied. In the first section, the general features of the QCD sum rule are introduced. In section 2, it is discussed why the analysis of the QCD sum rule based on the MEM is so effective. In section 3, the MEM analysis process is described: in subsection 3.1 the likelihood function and prior probability are considered, and in subsection 3.2 numerical analyses are discussed. In section 4, some applications are described, starting with ρ mesons, then charmonium at finite temperature and finally recent developments. Some figures of the spectral functions are shown. In section 5, a summary of the present analysis method and an outlook are given. (S. Funahashi)

  19. A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition.

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    Full Text Available Research in video-based facial expression recognition (FER) systems has exploded in the past decade. However, most of the previous methods work well when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and physical characteristics of the people differ from one dataset to another. Considering a single dataset keeps the variance, which results from these differences, to a minimum. Having a robust FER system, which can work across several datasets, is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made at the recognition module, which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, by considering the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on such observations. Lastly, an algorithm is designed which predicts the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets.
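
    A hedged sketch of MEMM decoding as the record describes it: unlike an HMM, the MEMM models P(next state | current state, observation) directly. Here that conditional is a softmax over per-state weight vectors (our simplifying assumption, not the paper's feature design), and a Viterbi pass returns the most probable state sequence.

    import numpy as np

    def memm_viterbi(obs, W):
        # obs: (T, d) observation feature vectors; W: (S, S, d) weights,
        # where W[s, s_next] scores the transition s -> s_next given obs.
        S = W.shape[0]
        T = len(obs)
        logp = np.zeros((T, S))
        back = np.zeros((T, S), dtype=int)
        trans0 = W[0] @ obs[0]                   # treat state 0 as the start
        logp[0] = trans0 - np.logaddexp.reduce(trans0)
        for t in range(1, T):
            scores = W @ obs[t]                  # (S, S) transition scores
            scores -= np.logaddexp.reduce(scores, axis=1, keepdims=True)
            total = logp[t - 1][:, None] + scores
            back[t] = total.argmax(axis=0)       # best predecessor per state
            logp[t] = total.max(axis=0)
        path = [int(logp[-1].argmax())]          # backtrack the best sequence
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]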

  20. Electronic structure of beta-FeSi2 obtained by maximum entropy method and photoemission spectroscopy

    CERN Document Server

    Kakemoto, H; Makita, Y; Kino, Y; Tsukamoto, T; Shin, S; Wada, S; Tsurumi, T

    2003-01-01

    The electronic structure of beta-FeSi2 was investigated by the maximum entropy method (MEM) and photoemission spectroscopy. The electronic structure obtained by MEM using X-ray diffraction data at room temperature (RT) showed covalent bonds of Fe-Si and Si-Si electrons. The photoemission spectra of beta-FeSi2 at RT changed with the incident photon energy. For photon energies between 50 and 100 eV, resonant photoemission spectra caused by a super Coster-Kronig transition were observed. In order to isolate the resonant effect of Fe(3d) in the obtained photoemission spectra, the difference spectrum between 53 and 57 eV was calculated and compared with an ab initio band calculation and the spectral function.

  1. Toward efficient computation of the expected relative entropy for nonlinear experimental design

    International Nuclear Information System (INIS)

    Coles, Darrell; Prange, Michael

    2012-01-01

    The expected relative entropy between prior and posterior model-parameter distributions is a Bayesian objective function in experimental design theory that quantifies the expected gain in information of an experiment relative to a previous state of knowledge. The expected relative entropy is a preferred measure of experimental quality because it can handle nonlinear data-model relationships, an important fact due to the ubiquity of nonlinearity in science and engineering and its effects on post-inversion parameter uncertainty. This objective function does not necessarily yield experiments that mediate well-determined systems, but, being a Bayesian quality measure, it rigorously accounts for prior information which constrains model parameters that may be only weakly constrained by the optimized dataset. Historically, use of the expected relative entropy has been limited by the computing and storage requirements associated with high-dimensional numerical integration. Herein, a bifocal algorithm is developed that makes these computations more efficient. The algorithm is demonstrated on a medium-sized problem of sampling relaxation phenomena and on a large problem of source–receiver selection for a 2D vertical seismic profile. The method is memory intensive but workarounds are discussed. (paper)
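
    As a schematic illustration of the quantity being optimized (added here; this is not the paper's bifocal algorithm), the expected relative entropy of a design equals the mutual information between data and parameters, which can be estimated by nested Monte Carlo. The toy nonlinear model y = exp(-θd) + noise below is assumed purely for demonstration:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def expected_information_gain(design, n_outer=1000, n_inner=1000, sigma=0.1):
    """Nested Monte Carlo estimate of the expected relative entropy
    (mutual information between data and parameters) for one design:
    EIG(d) = E[ log p(y|theta,d) - log E_theta'[ p(y|theta',d) ] ]."""
    theta = rng.normal(1.0, 0.5, size=n_outer)           # prior draws
    y = np.exp(-theta * design) + rng.normal(0.0, sigma, size=n_outer)
    theta_in = rng.normal(1.0, 0.5, size=n_inner)        # fresh prior draws
    resid = y[:, None] - np.exp(-theta_in[None, :] * design)
    log_lik = -0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    log_marginal = logsumexp(log_lik, axis=1) - np.log(n_inner)
    r_self = y - np.exp(-theta * design)
    log_self = -0.5 * (r_self / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return float(np.mean(log_self - log_marginal))

# Compare two candidate designs (e.g. measurement times):
print(expected_information_gain(0.5), expected_information_gain(2.0))
```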

  2. A Note on Burg’s Modified Entropy in Statistical Mechanics

    Directory of Open Access Journals (Sweden)

    Amritansu Ray

    2016-02-01

    Full Text Available Burg’s entropy plays an important role in this age of information euphoria, particularly in understanding the emergent behavior of a complex system such as statistical mechanics. For discrete or continuous variables, maximization of Burg’s entropy subject to its natural constraint and a mean constraint always provides a positive density function, even though the entropy itself is always negative. On the other hand, Burg’s modified entropy is a better measure than the standard Burg’s entropy, since it is always positive and presents no computational problem for small probability values. Moreover, the maximum value of Burg’s modified entropy increases with the number of possible outcomes. In this paper, a premium has been put on the fact that if Burg’s modified entropy is used instead of conventional Burg’s entropy in a maximum entropy probability density (MEPD) function, the result yields a better approximation of the probability distribution. An important lemma in basic algebra and a suitable example with tables and graphs in statistical mechanics are given to illustrate the whole idea appropriately.
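
    For orientation (an addition, using the standard definitions rather than the paper's notation): the discrete Burg entropy and its maximization under the natural and mean constraints read

    $$ H_B(p) = \sum_{i=1}^{n} \ln p_i, \qquad \max_p H_B \quad \text{s.t.} \quad \sum_i p_i = 1, \ \sum_i x_i p_i = m, $$

    and stationarity of the Lagrangian gives $1/p_i = \lambda_0 + \lambda_1 x_i$, i.e. $p_i = (\lambda_0 + \lambda_1 x_i)^{-1} > 0$: the density is automatically positive, as noted in the abstract, even though $H_B$ itself is negative whenever all $p_i < 1$.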

  3. A subjective supply–demand model: the maximum Boltzmann/Shannon entropy solution

    International Nuclear Information System (INIS)

    Piotrowski, Edward W; Sładkowski, Jan

    2009-01-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  4. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    Science.gov (United States)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  5. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination with the least number of pumping and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed owing to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as of uncertainty sources on potential pumping and observation locations.
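
    The max-min selection logic can be illustrated in a few lines (a toy sketch added here; the EED values and probabilities below are made-up placeholders, not results from the study):

```python
import numpy as np

# Hypothetical expected entropy decreases: rows are candidate designs,
# columns are uncertainty scenarios (values are placeholders).
eed = np.array([[1.2, 0.8, 1.0],
                [0.9, 0.9, 1.1],
                [1.5, 0.4, 1.3]])
best_posterior = np.array([0.65, 0.72, 0.58])     # per-design, placeholder
threshold = 0.60                                  # desired probability threshold

worst_case = eed.min(axis=1)                      # minimal Box-Hill EED per design
worst_case[best_posterior < threshold] = -np.inf  # rule out infeasible designs
print("robust design index:", int(np.argmax(worst_case)))
```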

  6. Multifield stochastic particle production: beyond a maximum entropy ansatz

    Energy Technology Data Exchange (ETDEWEB)

    Amin, Mustafa A.; Garcia, Marcos A.G.; Xie, Hong-Yi; Wen, Osmond, E-mail: mustafa.a.amin@gmail.com, E-mail: marcos.garcia@rice.edu, E-mail: hxie39@wisc.edu, E-mail: ow4@rice.edu [Physics and Astronomy Department, Rice University, 6100 Main Street, Houston, TX 77005 (United States)

    2017-09-01

    We explore non-adiabatic particle production for N_f coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross-couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed form results (up to quadratures) for the asymptotic particle production rates for the N_f = 1 and N_f = 2 cases. We also present results for the general N_f > 2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin and Baumann (2015) and the underlying statistical distribution of the self- and cross-couplings. We provide and justify a simple-to-use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when N_f >> 1.

  7. Bayesian view of single-qubit clocks, and an energy versus accuracy tradeoff

    Science.gov (United States)

    Gopalkrishnan, Manoj; Kandula, Varshith; Sriram, Praveen; Deshpande, Abhishek; Muralidharan, Bhaskaran

    2017-09-01

    We bring a Bayesian approach to the analysis of clocks. Using exponential distributions as priors for clocks, we analyze how well one can keep time with a single qubit freely precessing under a magnetic field. We find that, at least with a single qubit, quantum mechanics does not allow exact timekeeping, in contrast to classical mechanics, which does. We find the design of the single-qubit clock that leads to maximum accuracy. Further, we find an energy versus accuracy tradeoff—the energy cost is at least k_B T times the improvement in accuracy as measured by the entropy reduction in going from the prior distribution to the posterior distribution. We propose a physical realization of the single-qubit clock using charge transport across a capacitively coupled quantum dot.

  8. Maximum entropy models of ecosystem functioning

    International Nuclear Information System (INIS)

    Bertram, Jason

    2014-01-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example

  9. Maximum entropy models of ecosystem functioning

    Energy Technology Data Exchange (ETDEWEB)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  10. A Bayesian perspective on Markovian dynamics and the fluctuation theorem

    Science.gov (United States)

    Virgo, Nathaniel

    2013-08-01

    One of E. T. Jaynes' most important achievements was to derive statistical mechanics from the maximum entropy (MaxEnt) method. I re-examine a relatively new result in statistical mechanics, the Evans-Searles fluctuation theorem, from a MaxEnt perspective. This is done in the belief that interpreting such results in Bayesian terms will lead to new advances in statistical physics. The version of the fluctuation theorem that I will discuss applies to discrete, stochastic systems that begin in a non-equilibrium state and relax toward equilibrium. I will show that for such systems the fluctuation theorem can be seen as a consequence of the fact that the equilibrium distribution must obey the property of detailed balance. Although the principle of detailed balance applies only to equilibrium ensembles, it puts constraints on the form of non-equilibrium trajectories. This will be made clear by taking a novel kind of Bayesian perspective, in which the equilibrium distribution is seen as a prior over the system's set of possible trajectories. Non-equilibrium ensembles are calculated from this prior using Bayes' theorem, with the initial conditions playing the role of the data. I will also comment on the implications of this perspective for the question of how to derive the second law.
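
    For context (an addition, in standard notation rather than the paper's): detailed balance and the Evans-Searles fluctuation theorem take the forms

    $$ \pi_i W_{i \to j} = \pi_j W_{j \to i}, \qquad \frac{P(\Omega_t = A)}{P(\Omega_t = -A)} = e^{A}, $$

    where $\pi$ is the equilibrium distribution, $W$ the transition rates, and $\Omega_t$ the dissipation function accumulated over a trajectory of duration $t$.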

  11. Absorption and scattering coefficients estimation in two-dimensional participating media using the generalized maximum entropy and Levenberg-Marquardt methods

    International Nuclear Information System (INIS)

    Berrocal T, Mariella J.; Roberty, Nilson C.; Silva Neto, Antonio J.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    The solution of inverse problems in participating media, where there is emission, absorption and scattering of radiation, has several applications in engineering and medicine. The objective of this work is to estimate the absorption and scattering coefficients in two-dimensional heterogeneous participating media, using independently the Generalized Maximum Entropy and Levenberg-Marquardt methods. Both methods are based on the solution of the direct problem, which is modeled by the Boltzmann equation in Cartesian geometry. Some test cases are presented. (author)

  12. Tsallis Entropy and the Transition to Scaling in Fragmentation

    Science.gov (United States)

    Sotolongo-Costa, Oscar; Rodriguez, Arezky H.; Rodgers, G. J.

    2000-12-01

    By using the maximum entropy principle with Tsallis entropy we obtain a fragment size distribution function which undergoes a transition to scaling. This distribution function reduces to those obtained by other authors using Shannon entropy. The treatment is easily generalisable to any process of fractioning with suitable constraints.
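
    For reference (an addition): the Tsallis entropy with entropic index q, which recovers the Shannon form in the limit q → 1, is

    $$ S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad \lim_{q \to 1} S_q = -\sum_i p_i \ln p_i . $$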

  13. Scaling-Laws of Flow Entropy with Topological Metrics of Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Giovanni Francesco Santonastaso

    2018-01-01

    Full Text Available Robustness of water distribution networks is related to their connectivity and topological structure, which also affect their reliability. Flow entropy, based on Shannon’s informational entropy, has been proposed as a measure of network redundancy and adopted as a proxy of reliability in optimal network design procedures. In this paper, the scaling properties of the flow entropy of water distribution networks with their size and other topological metrics are studied. To this aim, flow entropy, maximum flow entropy, link density and average path length have been evaluated for a set of 22 networks, both real and synthetic, with different sizes and topologies. The obtained results led to the identification of suitable scaling laws of flow entropy and maximum flow entropy with water distribution network size, in the form of power laws. The obtained relationships allow the comparison of the flow entropy of water distribution networks of different sizes, and provide an easy tool to define the maximum achievable entropy of a specific water distribution network. An example of application of the obtained relationships to the design of a water distribution network is provided, showing how, with a constrained multi-objective optimization procedure, a tradeoff between network cost and robustness is easily identified.

  14. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    Science.gov (United States)

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km² hexagons), can increase the relevance of habitat models to multispecies

  15. Comparison of two views of maximum entropy in biodiversity: Frank (2011) and Pueyo et al. (2007).

    Science.gov (United States)

    Pueyo, Salvador

    2012-05-01

    An increasing number of authors agree in that the maximum entropy principle (MaxEnt) is essential for the understanding of macroecological patterns. However, there are subtle but crucial differences among the approaches by several of these authors. This poses a major obstacle for anyone interested in applying the methodology of MaxEnt in this context. In a recent publication, Frank (2011) gives some arguments why his own approach would represent an improvement as compared to the earlier paper by Pueyo et al. (2007) and also to the views by Edwin T. Jaynes, who first formulated MaxEnt in the context of statistical physics. Here I show that his criticisms are flawed and that there are fundamental reasons to prefer the original approach.

  16. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint.
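
    The entropy of such constraints can be approximated numerically with a transfer matrix (a sketch added here, independent of the paper's PRF construction): for the hard square constraint, the per-symbol entropy computed from the largest eigenvalue on strips of width w approaches roughly 0.5879 bits/symbol as w grows, an upper bound on what any PRF can attain.

```python
import numpy as np

def hard_square_entropy(width):
    """Per-symbol entropy (bits) of binary arrays of the given strip width
    with no two horizontally or vertically adjacent 1s, from the largest
    eigenvalue of the row-to-row transfer matrix."""
    rows = [r for r in range(1 << width) if r & (r >> 1) == 0]  # no '11' in a row
    T = np.array([[1.0 if a & b == 0 else 0.0 for b in rows] for a in rows])
    lam = np.max(np.abs(np.linalg.eigvals(T)))
    return np.log2(lam) / width

for w in (4, 8, 12):
    print(w, hard_square_entropy(w))   # approaches ~0.5879 bits/symbol
```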

  17. Entropy-based implied volatility and its information content

    NARCIS (Netherlands)

    X. Xiao (Xiao); C. Zhou (Chen)

    2016-01-01

    This paper investigates the maximum entropy approach to estimating implied volatility. The entropy approach also allows option-implied skewness and kurtosis to be measured nonparametrically, and confidence intervals to be constructed. Simulations show that the entropy approach outperforms

  18. Spectrum unfolding, sensitivity analysis and propagation of uncertainties with the maximum entropy deconvolution code MAXED

    CERN Document Server

    Reginatto, M; Neumann, S

    2002-01-01

    MAXED was developed to apply the maximum entropy principle to the unfolding of neutron spectrometric measurements. The approach followed in MAXED has several features that make it attractive: it permits inclusion of a priori information in a well-defined and mathematically consistent way, the algorithm used to derive the solution spectrum is not ad hoc (it can be justified on the basis of arguments that originate in information theory), and the solution spectrum is a non-negative function that can be written in closed form. This last feature permits the use of standard methods for the sensitivity analysis and propagation of uncertainties of MAXED solution spectra. We illustrate its use with unfoldings of NE 213 scintillation detector measurements of photon calibration spectra, and of multisphere neutron spectrometer measurements of cosmic-ray induced neutrons at high altitude (approx 20 km) in the atmosphere.
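
    To make the closed-form remark concrete (an addition using the generic maximum entropy unfolding structure, not MAXED's exact notation): maximizing the relative entropy of the spectrum f with respect to a default spectrum f^def, subject to the folded-data constraints, yields a solution of the form

    $$ f_i = f_i^{\mathrm{def}} \exp\!\Big( \sum_k \lambda_k R_{ki} \Big), $$

    where $R_{ki}$ is the detector response and the Lagrange multipliers $\lambda_k$ are fixed by the measured counts; non-negativity is automatic because of the exponential.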

  19. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Full Text Available Background: The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results: This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distances between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion: Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome-wide association studies.

  20. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well-understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, show how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that could not previously be studied. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper bayes estimators preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)

  1. Scaling of the magnetic entropy change of Fe3−xMnxSi

    International Nuclear Information System (INIS)

    Said, M.R.; Hamam, Y.A.; Abu-Aljarayesh, I.

    2014-01-01

    The magnetic entropy change of Fe3−xMnxSi (for x = 1.15, 1.3 and 1.5) has been extracted from isothermal magnetization measurements near the Curie temperature. We used the scaling hypotheses of the thermodynamic potentials to scale the magnetic entropy change onto a single universal curve for each sample. The effect of the exchange field and the Curie temperature on the maximum entropy change is discussed. - Highlights: • The maximum of the magnetic entropy change occurs at temperatures T > T_C. • The exchange field enhances the magnetic entropy change. • The magnetic entropy change at T_C is inversely proportional to T_C. • Scaling hypothesis is used to scale the magnetic entropy change
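
    For context (an addition): extracting the magnetic entropy change from isothermal magnetization curves conventionally relies on the Maxwell relation

    $$ \Delta S_M(T, H_{\max}) = \mu_0 \int_0^{H_{\max}} \left( \frac{\partial M}{\partial T} \right)_{H} dH, $$

    evaluated numerically from magnetization isotherms measured at closely spaced temperatures.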

  2. Thermoeconomic diagnosis and entropy generation paradox

    DEFF Research Database (Denmark)

    Sigthorsson, Oskar; Ommen, Torben Schmidt; Elmegaard, Brian

    2017-01-01

    In the entropy generation paradox, the entropy generation number, as a function of heat exchanger effectiveness, counter-intuitively approaches zero in two limits symmetrically from a single maximum. In thermoeconomic diagnosis, namely in the characteristic curve method, the exergy destruction … to the entropy generation paradox, as a decreased heat exchanger effectiveness (as in the case of an operation anomaly in the component) can counter-intuitively result in a decreased exergy destruction rate of the component. Therefore, along with an improper selection of independent variables, the heat exchanger … increases in case of an operation anomaly in a component. The normalised exergy destruction rate as the dependent variable therefore resolves the relation of the characteristic curve method with the entropy generation paradox.

  3. Network Inference and Maximum Entropy Estimation on Information Diagrams

    Czech Academy of Sciences Publication Activity Database

    Martin, E.A.; Hlinka, Jaroslav; Meinke, A.; Děchtěrenko, Filip; Tintěra, J.; Oliver, I.; Davidsen, J.

    2017-01-01

    Roč. 7, č. 1 (2017), č. článku 7062. ISSN 2045-2322 R&D Projects: GA ČR GA13-23940S; GA MZd(CZ) NV15-29835A Grant - others:GA MŠk(CZ) LO1611 Institutional support: RVO:67985807 Keywords : complex networks * mutual information * entropy maximization * fMRI Subject RIV: BD - Theory of Information OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 4.259, year: 2016

  4. Entropy of international trades

    Science.gov (United States)

    Oh, Chang-Young; Lee, D.-S.

    2017-05-01

    The organization of international trade is highly complex under the collective efforts towards economic profit of the participating countries, given inhomogeneous resources for production. Considering the trade flux as the probability of exporting a product from one country to another, we evaluate the entropy of world trade in the period 1950-2000. The trade entropy has increased with time, and we show that this is mainly due to the extension of trade partnership. For a given number of trade partners, the mean trade entropy is about 60% of the maximum possible entropy, independent of time, which can be regarded as a characteristic of the heterogeneity of the trade fluxes and is shown to derive from the scaling and functional behaviors of the universal trade-flux distribution. The correlation and time evolution of the individual countries' gross domestic products and numbers of trade partners show that most countries achieved their economic growth partly by extending their trade relationships.

  5. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC-IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen to be excellent already at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  6. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
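
    A minimal sketch of the underlying KECA ranking (an illustration of the standard method, not of the OKECA rotation itself): kernel eigenpairs are sorted by their contribution to the quadratic Renyi entropy estimate rather than by eigenvalue.

```python
import numpy as np

def keca_features(K, n_components=2):
    """Kernel entropy component analysis on a precomputed kernel matrix K:
    rank eigenpairs by their contribution lambda_i * (1^T e_i)^2 to the
    quadratic Renyi entropy estimate, instead of by eigenvalue as in KPCA."""
    lam, E = np.linalg.eigh(K)                    # ascending eigenvalues
    contrib = lam * (E.sum(axis=0) ** 2)          # entropy contribution per axis
    idx = np.argsort(contrib)[::-1][:n_components]
    return E[:, idx] * np.sqrt(np.clip(lam[idx], 0.0, None))

# Toy usage with an RBF kernel:
X = np.random.default_rng(1).normal(size=(100, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Z = keca_features(np.exp(-sq / 2.0), n_components=2)
```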

  7. Entropy jump across an inviscid shock wave

    Science.gov (United States)

    Salas, Manuel D.; Iollo, Angelo

    1995-01-01

    The shock jump conditions for the Euler equations in their primitive form are derived by using generalized functions. The shock profiles for specific volume, speed, and pressure are shown to be the same; density, however, has a different shock profile. Careful study of the equations that govern the entropy shows that the inviscid entropy profile has a local maximum within the shock layer. We demonstrate that, because of this phenomenon, the entropy propagation equation cannot be used as a conservation law.

  8. Entropy-Based Experimental Design for Optimal Model Discrimination in the Geosciences

    Directory of Open Access Journals (Sweden)

    Wolfgang Nowak

    2016-11-01

    Full Text Available Choosing between competing models lies at the heart of scientific work, and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment about the transport of contaminants in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.

  9. A measure of uncertainty regarding the interval constraint of normal mean elicited by two stages of a prior hierarchy.

    Science.gov (United States)

    Kim, Hea-Jung

    2014-01-01

    This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint that is accounted for by the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  10. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    Science.gov (United States)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
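
    The maximum entropy reconstruction step can be sketched compactly (an illustration added here; the RQ-SPM moment evaluation itself is not reproduced, and the moment orders and values below are made up): given fractional moments μ_k = E[Y^{α_k}] of a positive performance function Y, the maximum entropy density has the form p(y) ∝ exp(-Σ_k λ_k y^{α_k}), and the multipliers minimize the convex dual:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

alphas = np.array([0.5, 1.0, 1.5])   # fractional moment orders (assumed)
mus = np.array([0.92, 1.00, 1.20])   # target moments mu_k = E[Y^alpha_k] (made up)
Y_MAX = 50.0                         # truncation of the support, assumed

def dual(lam):
    """Convex dual: Gamma(lam) = log Z(lam) + sum_k lam_k * mu_k, where
    Z(lam) = int_0^Y_MAX exp(-sum_k lam_k y^alpha_k) dy; its minimizer
    gives the maximum entropy density."""
    z, _ = quad(lambda y: np.exp(-np.sum(lam * y ** alphas)), 0.0, Y_MAX)
    return np.log(z) + lam @ mus

lam = minimize(dual, x0=np.zeros(len(alphas)), method="Nelder-Mead").x

def pdf(y):
    """Maximum entropy density reconstructed from the fractional moments."""
    z, _ = quad(lambda t: np.exp(-np.sum(lam * t ** alphas)), 0.0, Y_MAX)
    return np.exp(-np.sum(lam * y ** alphas)) / z
```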

  11. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Science.gov (United States)

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping geographical distributions and potential suitable habitats is species distribution modeling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal range of bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear's potential suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population.

  12. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Directory of Open Access Journals (Sweden)

    Mona Nazeri

    Full Text Available One of the available tools for mapping geographical distributions and potential suitable habitats is species distribution modeling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal range of bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear's potential suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population.

  13. Low Streamflow Forecasting using Minimum Relative Entropy

    Science.gov (United States)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation such that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different prior estimates, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of the low streamflow series with higher resolution than the conventional method. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
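
    For readers who want to experiment (a sketch added here, not the study's code): Burg's maximum entropy spectrum coincides with that of an autoregressive model, so a minimal MESA-style forecaster can fit AR coefficients by the Yule-Walker/Levinson-Durbin recursion and extrapolate the series:

```python
import numpy as np

def mesa_ar_fit(x, order):
    """AR coefficients via the sample autocorrelation and the Levinson-Durbin
    recursion; the resulting AR spectrum is the maximum entropy spectrum
    (up to the choice of autocorrelation estimator)."""
    x = np.asarray(x, float)
    x = x - x.mean()
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])
    a, err = np.zeros(0), r[0]
    for k in range(order):
        ref = (r[k + 1] - (a @ r[k:0:-1] if k else 0.0)) / err
        a = np.append(a - ref * a[::-1], ref)   # Levinson-Durbin update
        err *= 1.0 - ref ** 2
    return a, err

def mesa_forecast(x, a, steps):
    """Extrapolate the series with the fitted AR model x_t = sum_j a_j x_{t-j}."""
    mu = float(np.mean(x))
    hist = list(np.asarray(x, float) - mu)
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(a, hist[-1: -len(a) - 1: -1]))
        hist.append(nxt)
        preds.append(nxt + mu)
    return np.array(preds)
```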

  14. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
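
    A minimal sketch of the general technique (not the paper's implementation), using a Gaussian process surrogate with an upper confidence bound acquisition to search for the maximizer of an expensive log-posterior; `log_post` below is a cheap stand-in:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
log_post = lambda x: -(x - 1.3) ** 2 / 0.1   # cheap stand-in for the target

X = list(rng.uniform(-3.0, 3.0, size=3))     # initial design points
y = [log_post(x) for x in X]
grid = np.linspace(-3.0, 3.0, 400)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=1e-6, normalize_y=True)
    gp.fit(np.asarray(X).reshape(-1, 1), np.asarray(y))
    mu, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    x_next = float(grid[np.argmax(mu + 2.0 * sd)])   # UCB acquisition
    X.append(x_next)
    y.append(log_post(x_next))

print("best maximizer found:", X[int(np.argmax(y))])
```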

  15. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data on Bayesian estimation of the area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some difficulty with imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (via the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequently estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within the target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared

  16. Applications of quantum entropy to statistics

    International Nuclear Information System (INIS)

    Silver, R.N.; Martz, H.F.

    1994-01-01

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes' rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles are proposed to act as statistical regularization and other hyperparameters, such as conservation of information and smoothness. ME provides an alternative to hierarchical Bayes methods
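
    For reference (an addition): the von Neumann entropy of a density matrix ρ and the quantum relative entropy with respect to a prior density matrix σ are

    $$ S(\rho) = -\mathrm{Tr}(\rho \ln \rho), \qquad S(\rho \| \sigma) = \mathrm{Tr}(\rho \ln \rho) - \mathrm{Tr}(\rho \ln \sigma) \ \ge\ 0, $$

    and the classical (Shannon) expressions are recovered when ρ and σ commute.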

  17. Well posedness and maximum entropy approximation for the dynamics of quantitative traits

    KAUST Repository

    Boďová , Katarí na; Haskovec, Jan; Markowich, Peter A.

    2017-01-01

    We study the Fokker–Planck equation derived in the large system limit of the Markovian process describing the dynamics of quantitative traits. The Fokker–Planck equation is posed on a bounded domain and its transport and diffusion coefficients vanish on the domain’s boundary. We first argue that, despite this degeneracy, the standard no-flux boundary condition is valid. We derive the weak formulation of the problem and prove the existence and uniqueness of its solutions by constructing the corresponding contraction semigroup on a suitable function space. Then, we prove that for the parameter regime with high enough mutation rate the problem exhibits a positive spectral gap, which implies exponential convergence to equilibrium. Next, we provide a simple derivation of the so-called Dynamic Maximum Entropy (DynMaxEnt) method for approximation of observables (moments) of the Fokker–Planck solution, which can be interpreted as a nonlinear Galerkin approximation. The limited applicability of the DynMaxEnt method inspires us to introduce its modified version that is valid for the whole range of admissible parameters. Finally, we present several numerical experiments to demonstrate the performance of both the original and modified DynMaxEnt methods. We observe that in the parameter regimes where both methods are valid, the modified one exhibits slightly better approximation properties compared to the original one.

  18. Well posedness and maximum entropy approximation for the dynamics of quantitative traits

    KAUST Repository

    Boďová, Katarína

    2017-11-06

    We study the Fokker–Planck equation derived in the large system limit of the Markovian process describing the dynamics of quantitative traits. The Fokker–Planck equation is posed on a bounded domain and its transport and diffusion coefficients vanish on the domain’s boundary. We first argue that, despite this degeneracy, the standard no-flux boundary condition is valid. We derive the weak formulation of the problem and prove the existence and uniqueness of its solutions by constructing the corresponding contraction semigroup on a suitable function space. Then, we prove that for the parameter regime with high enough mutation rate the problem exhibits a positive spectral gap, which implies exponential convergence to equilibrium. Next, we provide a simple derivation of the so-called Dynamic Maximum Entropy (DynMaxEnt) method for approximation of observables (moments) of the Fokker–Planck solution, which can be interpreted as a nonlinear Galerkin approximation. The limited applicability of the DynMaxEnt method inspires us to introduce its modified version that is valid for the whole range of admissible parameters. Finally, we present several numerical experiments to demonstrate the performance of both the original and modified DynMaxEnt methods. We observe that in the parameter regimes where both methods are valid, the modified one exhibits slightly better approximation properties compared to the original one.

  19. The mechanics of granitoid systems and maximum entropy production rates.

    Science.gov (United States)

    Hobbs, Bruce E; Ord, Alison

    2010-01-13

    A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10^4-10^7 years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and as ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy- and topography-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling plutons of finite thickness to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.

  20. Tail Risk Constraints and Maximum Entropy

    Directory of Open Access Journals (Sweden)

    Donald Geman

    2015-06-01

    Full Text Available Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and by regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real-world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.

  1. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    Science.gov (United States)

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), doi:10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.

  2. High-resolution elastic recoil detection utilizing Bayesian probability theory

    International Nuclear Information System (INIS)

    Neumaier, P.; Dollinger, G.; Bergmaier, A.; Genchev, I.; Goergens, L.; Fischer, R.; Ronning, C.; Hofsaess, H.

    2001-01-01

    Elastic recoil detection (ERD) analysis is improved in view of depth resolution and the reliability of the measured spectra. Good statistics at even low ion fluences are obtained by utilizing a large solid angle of 5 msr at the Munich Q3D magnetic spectrograph and using a 40 MeV ¹⁹⁷Au beam. In this way the elemental depth profiles are not essentially altered during analysis, even if distributions with area densities below 1×10¹⁴ atoms/cm² are measured. As the energy spread due to the angular acceptance is fully eliminated by ion-optical and numerical corrections, an accurate and reliable apparatus function is derived. It allows deconvolution of the measured spectra using the adaptive kernel method, a maximum entropy concept in the framework of Bayesian probability theory. In addition, the uncertainty of the reconstructed spectra is quantified. The concepts are demonstrated on ¹³C depth profiles measured on ultra-thin films of tetrahedral amorphous carbon (ta-C). Depth scales of those profiles are given with an accuracy of 1.4×10¹⁵ atoms/cm².

  3. Downstream-Conditioned Maximum Entropy Method for Exit Boundary Conditions in the Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Javier A. Dottori

    2015-01-01

    Full Text Available A method for modeling outflow boundary conditions in the lattice Boltzmann method (LBM), based on the maximization of the local entropy, is presented. The maximization procedure is constrained by macroscopic values and downstream components. The method is applied to fully developed boundary conditions of the Navier-Stokes equations in rectangular channels. Comparisons are made with alternative methods. In addition, the new downstream-conditioned entropy is studied, and a correlation with the velocity gradient during flow development is found.
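
    A minimal sketch of the entropy-maximization step in a lattice setting (our D1Q3 toy, not the paper's downstream-conditioned variant): stationarity of the Lagrangian gives populations of exponential form, and the multipliers are solved from the macroscopic constraints.

```python
# Toy D1Q3 illustration: maximize -sum(f_i ln f_i) subject to prescribed
# density rho and momentum rho*u; all lattice and flow values are invented.
import numpy as np
from scipy.optimize import fsolve

c = np.array([-1.0, 0.0, 1.0])    # D1Q3 lattice velocities
rho, u = 1.0, 0.1                 # prescribed macroscopic values

def populations(lam):
    # Stationarity of the Lagrangian: f_i = exp(-1 - lam0 - lam1 * c_i)
    return np.exp(-1.0 - lam[0] - lam[1] * c)

def residuals(lam):
    f = populations(lam)
    return [f.sum() - rho, (f * c).sum() - rho * u]

lam = fsolve(residuals, x0=[0.0, 0.0])
f = populations(lam)
print("f =", f.round(5), "| density:", f.sum().round(6),
      "| momentum:", (f * c).sum().round(6))
```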

  4. Horton Ratios Link Self-Similarity with Maximum Entropy of Eco-Geomorphological Properties in Stream Networks

    Directory of Open Access Journals (Sweden)

    Bruce T. Milne

    2017-05-01

    Full Text Available Stream networks are branched structures wherein water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal optimum entropy coincident with high biodiversity. Basins on either side of optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to Lagrange multipliers used in the extremum function leading to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and inter-annual variation in water balance that challenge vegetation to match expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land use changes that move ecosystems away from optimal entropy with concomitant loss of productivity and biodiversity.

  5. Bayesian network modelling of upper gastrointestinal bleeding

    Science.gov (United States)

    Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri

    2013-09-01

    Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes network (TAN) from gastrointestinal bleeding (GIB) data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into an upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.

  6. A Measure of Uncertainty regarding the Interval Constraint of Normal Mean Elicited by Two Stages of a Prior Hierarchy

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2014-01-01

    Full Text Available This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty, regarding the interval constraint, accounted for by using the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  7. The criteria for selecting a method for unfolding neutron spectra based on the information entropy theory

    International Nuclear Information System (INIS)

    Zhu, Qingjun; Song, Fengquan; Ren, Jie; Chen, Xueyong; Zhou, Bin

    2014-01-01

    To further expand the application of artificial neural networks in the field of neutron spectrometry, criteria for choosing between an artificial neural network and the maximum entropy method for the purpose of unfolding neutron spectra were presented. The counts of the Bonner spheres for IAEA neutron spectra were used as a database, and the artificial neural network and the maximum entropy method were used to unfold neutron spectra; the mean squares of the spectra were defined as the differences between the desired and unfolded spectra. After the information entropy of each spectrum was calculated using information entropy theory, the relationship between the mean squares of the spectra and the information entropy was acquired. Useful information from the information entropy guided the selection of unfolding methods. Due to the importance of the information entropy, a method for predicting the information entropy from the Bonner spheres' counts was established. The criteria based on information entropy theory can be used to choose between the artificial neural network and the maximum entropy unfolding methods. The application of artificial neural networks to unfold neutron spectra was thereby expanded. - Highlights: • Two neutron spectra unfolding methods, ANN and MEM, were compared. • The spectrum's entropy offers useful information for selecting unfolding methods. • For the spectrum with low entropy, the ANN was generally better than MEM. • The spectrum's entropy was predicted based on the Bonner spheres' counts
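
    The selection criterion hinges on the Shannon information entropy of a normalized spectrum; a minimal sketch with invented spectra (our illustration, not the paper's code) is:

```python
# Minimal sketch: information entropy H = -sum(p * ln p) of a spectrum,
# the quantity used here to choose between unfolding methods.
import numpy as np

def spectrum_entropy(phi):
    """Shannon entropy of a spectrum normalized to a probability vector."""
    p = np.asarray(phi, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                    # 0 * ln 0 = 0 by convention
    return -np.sum(p * np.log(p))

flat = np.ones(60)                                          # high entropy
peaky = np.exp(-0.5 * ((np.arange(60) - 30) / 3.0) ** 2)    # low entropy
print(f"H(flat)  = {spectrum_entropy(flat):.3f} nats")
print(f"H(peaky) = {spectrum_entropy(peaky):.3f} nats")
```

    Per the highlights above, a low-entropy (strongly peaked) spectrum would favour the ANN over MEM.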

  8. Entropy - Some Cosmological Questions Answered by Model of Expansive Nondecelerative Universe

    Directory of Open Access Journals (Sweden)

    Miroslav Sukenik

    2003-01-01

    Full Text Available Abstract: The paper summarizes the background of the Expansive Nondecelerative Universe model and its potential to offer answers to some open cosmological questions related to entropy. Three problems are faced in more detail, namely Hawking's phenomenon of black hole evaporation, the maximum entropy of the Universe during its evolution, and the time evolution of specific entropy.

  9. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    Science.gov (United States)

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use is re-examined in this study in the context of e-learning via Facebook, using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at the University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  10. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    Directory of Open Access Journals (Sweden)

    Hashem Salarzadeh Jenatabadi

    Full Text Available Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use is re-examined in this study in the context of e-learning via Facebook, using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at the University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  11. Towards operational interpretations of generalized entropies

    Science.gov (United States)

    Topsøe, Flemming

    2010-12-01

    The driving force behind our study has been to overcome the difficulties you encounter when you try to extend the clear and convincing operational interpretations of classical Boltzmann-Gibbs-Shannon entropy to other notions, especially to generalized entropies as proposed by Tsallis. Our approach is philosophical, based on speculations regarding the interplay between truth, belief and knowledge. The main result demonstrates that, accepting philosophically motivated assumptions, the only possible measures of entropy are those suggested by Tsallis - which, as we know, include classical entropy. This result constitutes, so it seems, a more transparent interpretation of entropy than previously available. However, further research to clarify the assumptions is still needed. Our study points to the thesis that one should never consider the notion of entropy in isolation - in order to enable a rich and technically smooth study, further concepts, such as divergence, score functions and descriptors or controls should be included in the discussion. This will clarify the distinction between Nature and Observer and facilitate a game theoretical discussion. The usefulness of this distinction and the subsequent exploitation of game theoretical results - such as those connected with the notion of Nash equilibrium - is demonstrated by a discussion of the Maximum Entropy Principle.

  12. Towards operational interpretations of generalized entropies

    International Nuclear Information System (INIS)

    Topsøe, Flemming

    2010-01-01

    The driving force behind our study has been to overcome the difficulties you encounter when you try to extend the clear and convincing operational interpretations of classical Boltzmann-Gibbs-Shannon entropy to other notions, especially to generalized entropies as proposed by Tsallis. Our approach is philosophical, based on speculations regarding the interplay between truth, belief and knowledge. The main result demonstrates that, accepting philosophically motivated assumptions, the only possible measures of entropy are those suggested by Tsallis - which, as we know, include classical entropy. This result constitutes, so it seems, a more transparent interpretation of entropy than previously available. However, further research to clarify the assumptions is still needed. Our study points to the thesis that one should never consider the notion of entropy in isolation - in order to enable a rich and technically smooth study, further concepts, such as divergence, score functions and descriptors or controls should be included in the discussion. This will clarify the distinction between Nature and Observer and facilitate a game theoretical discussion. The usefulness of this distinction and the subsequent exploitation of game theoretical results - such as those connected with the notion of Nash equilibrium - is demonstrated by a discussion of the Maximum Entropy Principle.

  13. Black hole entropy, curved space and monsters

    International Nuclear Information System (INIS)

    Hsu, Stephen D.H.; Reeb, David

    2008-01-01

    We investigate the microscopic origin of black hole entropy, in particular the gap between the maximum entropy of ordinary matter and that of black holes. Using curved space, we construct configurations with entropy greater than the area A of a black hole of equal mass. These configurations have pathological properties and we refer to them as monsters. When monsters are excluded we recover the entropy bound on ordinary matter, S < A^(3/4). This bound implies that essentially all of the microstates of a semiclassical black hole are associated with the growth of a slightly smaller black hole which absorbs some additional energy. Our results suggest that the area entropy of black holes is the logarithm of the number of distinct ways in which one can form the black hole from ordinary matter and smaller black holes, but only after the exclusion of monster states.

  14. 2D Tsallis Entropy for Image Segmentation Based on Modified Chaotic Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ye

    2018-03-01

    Full Text Available Image segmentation is a significant step in image analysis and computer vision. Many entropy-based approaches have been presented on this topic; among them, Tsallis entropy is one of the best performing methods. However, 1D Tsallis entropy does not make use of the spatial correlation information within the neighborhood, so its results might be ruined by noise. Therefore, 2D Tsallis entropy is proposed to solve the problem, and results are compared with 1D Fisher, 1D maximum entropy, 1D cross entropy, 1D Tsallis entropy, fuzzy entropy, 2D Fisher, 2D maximum entropy and 2D cross entropy. On the other hand, due to the huge computational costs, meta-heuristic algorithms like the genetic algorithm (GA), particle swarm optimization (PSO), the ant colony optimization algorithm (ACO) and the differential evolution algorithm (DE) are used to accelerate the 2D Tsallis entropy thresholding method. In this paper, considering 2D Tsallis entropy as a constrained optimization problem, the optimal thresholds are acquired by maximizing the objective function using a modified chaotic bat algorithm (MCBA). The proposed algorithm has been tested on some real and infrared images. The results are compared with those of PSO, GA, ACO and DE and demonstrate that the proposed method outperforms the other approaches involved in the paper, making it a feasible and effective option for image segmentation.
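
    A minimal sketch of 1D Tsallis-entropy thresholding (our illustration; exhaustive search stands in for the modified chaotic bat algorithm, and the histogram is synthetic):

```python
# Sketch of 1D Tsallis-entropy thresholding: split the grey-level
# histogram at t, compute Tsallis entropies of the two classes, and
# maximize their pseudo-additive combination.
import numpy as np

def tsallis_threshold(hist, q=0.8):
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - np.sum((p[:t] / pa) ** q)) / (q - 1)
        sb = (1 - np.sum((p[t:] / pb) ** q)) / (q - 1)
        val = sa + sb + (1 - q) * sa * sb   # pseudo-additive combination
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Bimodal synthetic grey-level histogram (two Gaussian humps)
x = np.arange(256)
hist = (np.exp(-0.5 * ((x - 70) / 12.0) ** 2)
        + np.exp(-0.5 * ((x - 180) / 20.0) ** 2))
print("Tsallis threshold:", tsallis_threshold(hist))
```

    The pseudo-additive term (1 - q)·S_A·S_B is what distinguishes Tsallis-based thresholding from its Shannon-based counterparts.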

  15. Empirical study on entropy models of cellular manufacturing systems

    Institute of Scientific and Technical Information of China (English)

    Zhifeng Zhang; Renbin Xiao

    2009-01-01

    From the theoretical point of view, the states of manufacturing resources can be monitored and assessed through the amount of information needed to describe their technological structure and operational state. The amount of information needed to describe cellular manufacturing systems is investigated by two measures: the structural entropy and the operational entropy. Based on the Shannon entropy, the models of the structural entropy and the operational entropy of cellular manufacturing systems are developed, and the cognizance of the states of manufacturing resources is also illustrated. Scheduling is introduced to measure the entropy models of cellular manufacturing systems, and the feasible concepts of maximum schedule horizon and schedule adherence are advanced to quantitatively evaluate the effectiveness of schedules. Finally, an example is used to demonstrate the validity of the proposed methodology.

  16. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can thus be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs, improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.

  17. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can thus be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs, improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
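
    A minimal per-cell sketch of this kind of Bayesian merging (our Gaussian simplification of the paper's model; heights and variances are invented): two sensor-derived heights and a prior are combined by precision weighting.

```python
# Sketch: per-cell fusion of two DSM heights plus a prior, all assumed
# Gaussian, via precision (inverse-variance) weighting.
import numpy as np

def fuse(z1, var1, z2, var2, z_prior, var_prior):
    """Posterior mean/variance of height given two DSMs and a prior."""
    w = np.array([1 / var1, 1 / var2, 1 / var_prior])   # precisions
    z = np.array([z1, z2, z_prior])
    post_var = 1.0 / w.sum()
    post_mean = post_var * (w @ z)
    return post_mean, post_var

# e.g. a WorldView-derived cell vs a Pleiades-derived cell, with a prior
# taken from a local smoothness estimate of neighbouring roof cells
mean, var = fuse(z1=31.2, var1=0.25, z2=30.6, var2=0.49,
                 z_prior=30.9, var_prior=1.0)
print(f"fused height = {mean:.2f} m, posterior sigma = {var**0.5:.2f} m")
```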

  18. Optimization between heating load and entropy-production rate for endoreversible absorption heat-transformers

    International Nuclear Information System (INIS)

    Sun Fengrui; Qin Xiaoyong; Chen Lingen; Wu Chih

    2005-01-01

    For an endoreversible four-heat-reservoir absorption heat-transformer cycle, for which a linear (Newtonian) heat-transfer law applies, an ecological optimization criterion is proposed for the best mode of operation of the cycle. This involves maximizing a function representing the compromise between the heating load and the entropy-production rate. The optimal relation between the ecological criterion and the COP (coefficient of performance), the maximum ecological criterion and the corresponding COP, heating load and entropy production rate, as well as the ecological criterion and entropy-production rate at the maximum heating load are derived using finite-time thermodynamics. Moreover, compared with the heating-load criterion, the effects of the cycle parameters on the ecological performance are studied by numerical examples. These show that achieving the maximum ecological criterion makes the entropy-production rate decrease by 77.0% and the COP increase by 55.4% with only 27.3% heating-load losses compared with the maximum heating-load objective. The results reflect that the ecological criterion has long-term significance for optimal design of absorption heat-transformers

  19. Entropy estimates for simple random fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    1995-01-01

    We consider the problem of determining the maximum entropy of a discrete random field on a lattice subject to certain local constraints on symbol configurations. The results are expected to be of interest in the analysis of digitized images and two-dimensional codes. We shall present some examples of binary and ternary fields with simple constraints. Exact results on the entropies are known only in a few cases, but we shall present close bounds and estimates that are computationally efficient.

  20. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    Science.gov (United States)

    Gengler, Sarah; Bogaert, Patrick

    2014-12-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions to combine several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which is somehow a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is somehow a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.

  1. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    International Nuclear Information System (INIS)

    Gengler, Sarah; Bogaert, Patrick

    2014-01-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions to combine several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which is somehow a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is somehow a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
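
    An illustrative sketch of conditional-independence fusion for categorical classes in the spirit of BDF (our toy numbers, not the paper's equations): with sources conditionally independent given the true class, per-source posteriors combine as p(c|y1, y2) ∝ p1(c|y1)·p2(c|y2)/p0(c).

```python
# Sketch: fuse two categorical information sources under the conditional
# independence hypothesis; all probabilities are invented.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])      # p0(c): three drainage classes
post_map = np.array([0.6, 0.3, 0.1])   # p1(c | soil map) at a location
post_obs = np.array([0.4, 0.5, 0.1])   # p2(c | point observation)

fused = post_map * post_obs / prior    # Bayes with conditional independence
fused /= fused.sum()
print("fused class probabilities:", fused.round(3))
print("predicted class:", fused.argmax())
```

    This single-line combination rule is what makes BDF so much cheaper than the full BME machinery while, as the abstract notes, giving similar results.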

  2. An entropy approach for evaluating the maximum information content achievable by an urban rainfall network

    Directory of Open Access Journals (Sweden)

    E. Ridolfi

    2011-07-01

    Full Text Available Hydrological models are the basis of operational flood-forecasting systems. The accuracy of these models is strongly dependent on the quality and quantity of the input information represented by rainfall height. Finer space-time rainfall resolution results in more accurate hazard forecasting. In this framework, an optimum raingauge network is essential in predicting flood events.

    This paper develops an entropy-based approach to evaluate the maximum information content achievable by a rainfall network for different sampling time intervals. The procedure is based on the determination of the coefficients of transferred and nontransferred information and on the relative isoinformation contours.

    The nontransferred information value achieved by the whole network is strictly dependent on the sampling time intervals considered. To assess the objective of the research, an empirical curve is defined: the nontransferred information value, plotted against the associated sampling time on a semi-log scale, shows a linear trend.

    In this paper, the methodology is applied to the high-density raingauge network of the urban area of Rome.
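
    The transferred-information coefficients rest on the mutual information between gauge records; a self-contained sketch with synthetic series (our illustration, not the paper's data) is:

```python
# Sketch: estimate the mutual information between two discretized
# rainfall series from a joint histogram; the series are synthetic.
import numpy as np

rng = np.random.default_rng(0)
common = rng.gamma(2.0, 1.0, 5000)         # shared rainfall signal
g1 = common + rng.normal(0, 0.5, 5000)     # gauge 1
g2 = common + rng.normal(0, 0.5, 5000)     # gauge 2

joint, _, _ = np.histogram2d(g1, g2, bins=16)
pxy = joint / joint.sum()
px, py = pxy.sum(axis=1), pxy.sum(axis=0)
nz = pxy > 0
mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
print(f"estimated mutual information: {mi:.3f} nats")
```

    Coarser sampling intervals blur the shared signal, which is the mechanism behind the dependence of nontransferred information on sampling time described above.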

  3. Learning to Detect Traffic Incidents from Data Based on Tree Augmented Naive Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Dawei Li

    2017-01-01

    Full Text Available This study develops a tree augmented naive Bayesian (TAN) classifier based incident detection algorithm. Compared with the Bayesian network based detection algorithms developed in previous studies, this algorithm has less dependency on experts' knowledge. The structure of the TAN classifier for incident detection is learned from data. The discretization of continuous attributes is processed automatically using an entropy-based method. A simulation dataset on a section of the Ayer Rajah Expressway (AYE) in Singapore is used to demonstrate the development of the proposed algorithm, including wavelet denoising, normalization, entropy-based discretization, and structure learning. The performance of the TAN based algorithm is evaluated against the previously developed Bayesian network (BN) based and multilayer feed forward (MLF) neural network based algorithms with the same AYE data. The experiment results show that the TAN based algorithm performs better than the BN classifiers and has a similar performance to the MLF based algorithm. However, the TAN based algorithm has a wider range of applications because the theory of TAN classifiers is much less complicated than that of MLF. The experiments also show that the TAN classifier based algorithm has a significant advantage in the speed of model training and calibration compared with MLF.
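
    A sketch of the quantity that drives TAN structure learning (our illustration with synthetic binary attributes): the conditional mutual information I(Xi; Xj | C), over which TAN keeps a maximum-weight spanning tree of attribute dependencies.

```python
# Sketch: estimate conditional mutual information I(Xi; Xj | C) from
# counts; the synthetic class C and attributes X0..X2 are invented.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
C = rng.integers(0, 2, 4000)                      # incident / no incident
X0 = (C ^ (rng.random(4000) < 0.2)).astype(int)   # noisy copy of C
X1 = (X0 ^ (rng.random(4000) < 0.3)).astype(int)  # depends on X0 beyond C
X2 = rng.integers(0, 2, 4000)                     # irrelevant attribute
X = np.stack([X0, X1, X2], axis=1)

def cmi(xi, xj, c):
    total, val = len(c), 0.0
    for a, b, k in product([0, 1], repeat=3):
        n_abk = np.sum((xi == a) & (xj == b) & (c == k))
        if n_abk == 0:
            continue
        p_abk = n_abk / total
        p_k = np.mean(c == k)
        p_ak = np.mean((xi == a) & (c == k))
        p_bk = np.mean((xj == b) & (c == k))
        val += p_abk * np.log(p_abk * p_k / (p_ak * p_bk))
    return val

for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"I(X{i}; X{j} | C) = {cmi(X[:, i], X[:, j], C):.4f}")
```

    The pair (X0, X1) should score highest, so a TAN learner would add the corresponding augmenting edge; near-zero scores, as for X2, would be pruned by the spanning-tree step.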

  4. Bayesian methodology for generic seismic fragility evaluation of components in nuclear power plants

    International Nuclear Information System (INIS)

    Yamaguchi, Akira; Campbell, R.D.; Ravindra, M.K.

    1991-01-01

    Bayesian methodology for updating the seismic fragility of components in nuclear power plants is presented. The generic fragility data which have been evaluated based on past SPSAs are combined with seismic experience data. Although the seismic experience is limited to the acceleration range below the median capacity of the components, it has been found that the evidence is effective in updating the fragility tail. In other words, the uncertainty of the fragility is reduced although the median capacity itself is not modified to a great extent. The annual frequency of failure is also reduced as a result of updating the fragility tail. The PDF of the seismic capacity is handled in discrete form, which enables the use of an arbitrary type of prior distribution. Accordingly, a Log-N prior can be used, which is consistent with the widely used fragility model. For evaluating the posterior fragility parameters (A_m and B_U), two methods have been proposed. Furthermore, it has been found that the importance of evidence used in the Bayesian methodology can be quantified by the entropy of the evidence. Only the events with high entropy need to be considered in the Bayesian updating of the fragility. The currently available seismic experience database for typical components can be utilized to develop the fragility tail, which contributes to the seismically-induced failure frequency. The combined use of generic fragility and seismic experience data, with the aid of Bayesian methodology, provides refined generic fragility curves which are useful for SPSA studies. (author)
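
    A minimal numerical sketch of discrete-prior fragility updating in this spirit (all values invented, not the paper's data): a lognormal fragility with median capacity A_m is updated by survival evidence at recorded accelerations.

```python
# Sketch: discrete prior over the median capacity A_m of a lognormal
# fragility curve, updated by "component survived shaking a" evidence.
import numpy as np
from scipy.stats import norm

beta_u = 0.4                              # logarithmic std, held fixed here
Am_grid = np.linspace(0.3, 3.0, 271)      # candidate medians [g]
prior = norm.pdf(np.log(Am_grid), loc=np.log(1.2), scale=0.5)
prior /= prior.sum()                      # discrete Log-N-like prior

def p_fail(a, Am):
    return norm.cdf(np.log(a / Am) / beta_u)

# Evidence: the component survived three recorded motions
for a_obs in [0.45, 0.60, 0.75]:
    prior = prior * (1.0 - p_fail(a_obs, Am_grid))
    prior /= prior.sum()

post_median = Am_grid[np.searchsorted(prior.cumsum(), 0.5)]
print(f"posterior median capacity ~ {post_median:.2f} g")
```

    As the abstract notes, survival evidence below the median capacity mainly sharpens the lower tail of the fragility rather than shifting the median.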

  5. SpatEntropy: Spatial Entropy Measures in R

    OpenAIRE

    Altieri, Linda; Cocchi, Daniela; Roli, Giulia

    2018-01-01

    This article illustrates how to measure the heterogeneity of spatial data presenting a finite number of categories via computation of spatial entropy. The R package SpatEntropy contains functions for the computation of entropy and spatial entropy measures. The extension to spatial entropy measures is a unique feature of SpatEntropy. In addition to the traditional version of Shannon's entropy, the package includes Batty's spatial entropy, O'Neill's entropy, Li and Reynolds' contagion index, Ka...

  6. Gravitational entropies in LTB dust models

    International Nuclear Information System (INIS)

    Sussman, Roberto A; Larena, Julien

    2014-01-01

    We consider generic Lemaître–Tolman–Bondi (LTB) dust models to probe the gravitational entropy proposals of Clifton, Ellis and Tavakol (CET) and of Hosoya and Buchert (HB). We also consider a variant of the HB proposal based on a suitable quasi-local scalar weighted average. We show that the conditions for entropy growth for all proposals are directly related to a negative correlation of similar fluctuations of the energy density and Hubble scalar. While this correlation is evaluated locally for the CET proposal, it must be evaluated in a non-local domain dependent manner for the two HB proposals. By looking at the fulfilment of these conditions at the relevant asymptotic limits we are able to provide a well-grounded qualitative description of the full time evolution and radial asymptotic scaling of the three entropies in generic models. The following rigorous analytic results are obtained for the three proposals: (i) entropy grows when the density growing mode is dominant; (ii) all ever-expanding hyperbolic models reach a stable terminal equilibrium characterized by an inhomogeneous entropy maximum in their late time evolution; (iii) regions with decaying modes and collapsing elliptic models exhibit unstable equilibria associated with an entropy minimum; (iv) near singularities the CET entropy diverges while the HB entropies converge; (v) the CET entropy converges for all models in the radial asymptotic range, whereas the HB entropies only converge for models asymptotic to a Friedmann–Lemaître–Robertson–Walker background. The fact that different independent proposals yield fairly similar conditions for entropy production, time evolution and radial scaling in generic LTB models seems to suggest that their common notion of a ‘gravitational entropy’ may be a theoretically robust concept applicable to more general spacetimes. (paper)

  7. Metainference: A Bayesian inference method for heterogeneous systems.

    Science.gov (United States)

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  8. Adjoint entropy vs topological entropy

    OpenAIRE

    Giordano Bruno, Anna

    2012-01-01

    Recently the adjoint algebraic entropy of endomorphisms of abelian groups was introduced and studied. We generalize the notion of adjoint entropy to continuous endomorphisms of topological abelian groups. Indeed, the adjoint algebraic entropy is defined using the family of all finite-index subgroups, while we take only the subfamily of all open finite-index subgroups to define the topological adjoint entropy. This allows us to compare the (topological) adjoint entropy with the known topologic...

  9. Upper entropy axioms and lower entropy axioms

    International Nuclear Information System (INIS)

    Guo, Jin-Li; Suo, Qi

    2015-01-01

    The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of Shannon–Khinchin axioms and Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case of satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics

  10. Entropy of adsorption of mixed surfactants from solutions onto the air/water interface

    Science.gov (United States)

    Chen, L.-W.; Chen, J.-H.; Zhou, N.-F.

    1995-01-01

    The partial molar entropy change for mixed surfactant molecules adsorbed from solution at the air/water interface has been investigated by surface thermodynamics based upon the experimental surface tension isotherms at various temperatures. Results for different surfactant mixtures of sodium dodecyl sulfate and sodium tetradecyl sulfate, decylpyridinium chloride and sodium alkylsulfonates have shown that the partial molar entropy changes for adsorption of the mixed surfactants were generally negative and decreased with increasing adsorption to a minimum near the maximum adsorption and then increased abruptly. The entropy decrease can be explained by the adsorption-orientation of surfactant molecules in the adsorbed monolayer, and the abrupt entropy increase at the maximum adsorption is possibly due to the strong repulsion between the adsorbed molecules.

  11. Optimal Detection under the Restricted Bayesian Criterion

    Directory of Open Access Journals (Sweden)

    Shujun Liu

    2017-07-01

    Full Text Available This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed: the maximum conditional risk cannot be greater than a predefined value. The objective of this paper therefore becomes to find the optimal decision rule minimizing the Bayes risk under this constraint. By applying Lagrange duality, the constrained optimization problem is transformed into an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, numerical results including a detection example are presented and agree with the theoretical results.
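
    A toy sketch of this construction for a binary Gaussian detection problem (our example; the mean shift, nominal prior and risk cap are invented): each modified prior yields a likelihood-ratio test, and we keep the rule that meets the conditional-risk cap with the lowest nominal Bayes risk.

```python
# Sketch: restricted Bayes rule via a modified prior. H0: x ~ N(0,1),
# H1: x ~ N(mu,1); 0-1 loss, so conditional risks are the error rates.
import numpy as np
from scipy.stats import norm

mu, prior0 = 2.0, 0.7     # H1 mean shift; nominal (coarse) prior P(H0)
cap = 0.25                # max allowed conditional risk

best = None
for w in np.linspace(0.01, 0.99, 99):        # modified prior P(H0) = w
    thr = mu / 2 + np.log(w / (1 - w)) / mu  # LRT threshold for 0-1 loss
    r0 = 1 - norm.cdf(thr)                   # false alarm: risk under H0
    r1 = norm.cdf(thr - mu)                  # miss: risk under H1
    bayes = prior0 * r0 + (1 - prior0) * r1  # risk at the nominal prior
    if max(r0, r1) <= cap and (best is None or bayes < best[0]):
        best = (bayes, w, r0, r1)

bayes, w, r0, r1 = best
print(f"modified prior w = {w:.2f}, Bayes risk = {bayes:.3f}, "
      f"max conditional risk = {max(r0, r1):.3f}")
```

    With these numbers the unconstrained Bayes rule at the nominal prior violates the cap, so the search settles on a different modified prior, mirroring the paper's characterization of the restricted rule.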

  12. Quantum Rényi relative entropies affirm universality of thermodynamics.

    Science.gov (United States)

    Misra, Avijit; Singh, Uttam; Bera, Manabendra Nath; Rajagopal, A K

    2015-10-01

    We formulate a complete theory of quantum thermodynamics in the Rényi entropic formalism exploiting the Rényi relative entropies, starting from the maximum entropy principle. In establishing the first and second laws of quantum thermodynamics, we have correctly identified accessible work and heat exchange in both equilibrium and nonequilibrium cases. The free energy (internal energy minus temperature times entropy) remains unaltered, when all the entities entering this relation are suitably defined. Exploiting Rényi relative entropies we have shown that this "form invariance" holds even beyond equilibrium and has profound operational significance in isothermal process. These results reduce to the Gibbs-von Neumann results when the Rényi entropic parameter α approaches 1. Moreover, it is shown that the universality of the Carnot statement of the second law is the consequence of the form invariance of the free energy, which is in turn the consequence of maximum entropy principle. Further, the Clausius inequality, which is the precursor to the Carnot statement, is also shown to hold based on the data processing inequalities for the traditional and sandwiched Rényi relative entropies. Thus, we find that the thermodynamics of nonequilibrium state and its deviation from equilibrium together determine the thermodynamic laws. This is another important manifestation of the concepts of information theory in thermodynamics when they are extended to the quantum realm. Our work is a substantial step towards formulating a complete theory of quantum thermodynamics and corresponding resource theory.

  13. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    Science.gov (United States)

    Liang, D.; Liu, X.

    2017-12-01

    3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process; the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle performs well in estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model. This posterior distribution represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.

  14. Entropy, neutro-entropy and anti-entropy for neutrosophic information

    OpenAIRE

    Vasile Patrascu

    2017-01-01

    This approach presents a multi-valued representation of the neutrosophic information. It highlights the link between the bifuzzy information and neutrosophic one. The constructed deca-valued structure shows the neutrosophic information complexity. This deca-valued structure led to construction of two new concepts for the neutrosophic information: neutro-entropy and anti-entropy. These two concepts are added to the two existing: entropy and non-entropy. Thus, we obtained the following triad: e...

  15. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    Science.gov (United States)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low-resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil-dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated by land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data-scarce regions.

  16. A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection

    Directory of Open Access Journals (Sweden)

    Guo-Jheng Yang

    2013-08-01

    Full Text Available The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be implanted in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold’s cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold’s cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used to not only make encryption more stringent but also to enhance the security of the digital media.

  17. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
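
    Both quantities are easy to compute for a small chain; a self-contained sketch (our example chain, not one from the paper) is:

```python
# Sketch: Kolmogorov-Sinai entropy of a Markov chain and its relaxation
# time from the spectral gap, for a two-state chain.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # row-stochastic transition matrix

# Stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

logP = np.log(P, out=np.zeros_like(P), where=P > 0)   # 0*log 0 = 0
kse = -np.sum(pi[:, None] * P * logP)                 # -sum pi_i P_ij ln P_ij
slem = sorted(np.abs(np.real(evals)))[-2]  # second-largest eigenvalue modulus
print(f"KS entropy = {kse:.4f} nats/step, "
      f"relaxation time = {1 / (1 - slem):.2f} steps")
```

    The paper's point is that tuning transition rates to raise the KSE tends, at the same time, to widen the spectral gap and hence shrink the mixing time.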

  18. Explaining the entropy concept and entropy components

    Directory of Open Access Journals (Sweden)

    Marko Popovic

    2018-04-01

    Full Text Available Total entropy of a thermodynamic system consists of two components: thermal entropy due to energy, and residual entropy due to molecular orientation. In this article, a three-step method for explaining entropy is suggested. Step one is to use a classical method to introduce thermal entropy S_TM as a function of temperature T and heat capacity at constant pressure C_p: S_TM = ∫ (C_p/T) dT. Thermal entropy is the entropy due to uncertainty in the motion of molecules and vanishes at absolute zero (the zero-point energy state). It is also the measure of useless thermal energy that cannot be converted into useful work. The next step is to introduce residual entropy S_0 as a function of the number of molecules N and the number of distinct orientations available to them in a crystal m: S_0 = N k_B ln m, where k_B is the Boltzmann constant. Residual entropy quantifies the uncertainty in molecular orientation. Residual entropy, unlike thermal entropy, is independent of temperature and remains present at absolute zero. The third step is to show that thermal entropy and residual entropy add up to the total entropy of a thermodynamic system S: S = S_0 + S_TM. This method of explanation should result in a better comprehension of residual entropy and thermal entropy, as well as of their similarities and differences. The new method was tested in teaching at the Faculty of Chemistry, University of Belgrade, Serbia. The results of the test show that the new method has the potential to improve the quality of teaching.
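
    A numeric illustration of the two components (our toy heat-capacity curve; the residual term uses m = 2 orientations, as for a crystal such as CO):

```python
# Sketch: thermal entropy from integrating Cp/T over temperature, plus
# residual entropy N*kB*ln m; the Cp curve shape is invented.
import numpy as np

kB, NA = 1.380649e-23, 6.02214076e23     # J/K, 1/mol

# Toy heat-capacity curve: Debye-like T^3 rise joined to a plateau
T = np.linspace(1.0, 300.0, 3000)                # K
Cp = np.minimum(29.1 * (T / 100.0) ** 3, 29.1)   # J/(mol K)

S_thermal = np.trapz(Cp / T, T)          # S_TM = integral of (Cp/T) dT
S_residual = NA * kB * np.log(2)         # S_0 for m = 2 orientations

print(f"thermal entropy  ~ {S_thermal:7.2f} J/(mol K)")
print(f"residual entropy ~ {S_residual:7.2f} J/(mol K)  (R ln 2)")
print(f"total            ~ {S_thermal + S_residual:7.2f} J/(mol K)")
```

    The residual term, R ln 2 ≈ 5.76 J/(mol K), is the textbook value for a two-orientation crystal and survives down to absolute zero, exactly the distinction the three-step method is designed to teach.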

  19. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    Science.gov (United States)

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
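
    A minimal sketch of the modeling idea (ours, not the paper's code): logistic regression with a weakly informative Gaussian prior, fitted at the MAP with scipy as a stand-in for full posterior inference; the data and prior scale are invented.

```python
# Sketch: Bayesian-flavoured logistic regression via MAP estimation with
# a weakly informative N(0, tau^2) prior on each weight.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, d = 500, 3
X = np.c_[np.ones(n), rng.normal(size=(n, d))]   # intercept + 3 features
w_true = np.array([-1.0, 1.5, 0.0, -0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

tau = 2.5    # prior std: weakly informative prior on each weight

def neg_log_posterior(w):
    z = X @ w
    ll = np.sum(y * z - np.log1p(np.exp(z)))   # Bernoulli-logit likelihood
    lp = -0.5 * np.sum(w ** 2) / tau ** 2      # Gaussian log-prior
    return -(ll + lp)

w_map = minimize(neg_log_posterior, np.zeros(d + 1)).x
print("MAP weights:", w_map.round(2))
```

    The prior acts like a mild ridge penalty, which is one intuition for the parameter stability the study reports for the Bayesian fits.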

  20. Entropy, neutro-entropy and anti-entropy for neutrosophic information

    OpenAIRE

    Vasile Patrascu

    2017-01-01

    This article shows a deca-valued representation of neutrosophic information in which are defined the following features: truth, falsity, weak truth, weak falsity, ignorance, contradiction, saturation, neutrality, ambiguity and hesitation. Using these features, there are constructed computing formulas for entropy, neutro-entropy and anti-entropy.

  1. Energy conservation and maximal entropy production in enzyme reactions.

    Science.gov (United States)

    Dobovišek, Andrej; Vitas, Marko; Brumen, Milan; Fajmut, Aleš

    2017-08-01

    A procedure for maximization of the density of entropy production in a single stationary two-step enzyme reaction is developed. Under the constraints of mass conservation, a fixed equilibrium constant of the reaction and fixed products of forward and backward enzyme rate constants, the existence of a maximum in the density of entropy production is demonstrated. In the state with maximal density of entropy production, the optimal enzyme rate constants, the stationary concentrations of the substrate and the product, the stationary product yield as well as the stationary reaction flux are calculated. A test of whether these calculated values of the reaction parameters are consistent with their corresponding measured values is performed for the enzyme Glucose Isomerase. It is found that calculated and measured rate constants agree within an order of magnitude, whereas the calculated reaction flux and the product yield differ from their corresponding measured values by less than 20% and 5%, respectively. This indicates that the enzyme Glucose Isomerase, considered in a non-equilibrium stationary state, as found in experiments using continuous stirred tank reactors, possibly operates close to the state with the maximum in the density of entropy production.

  2. Beyond the second law entropy production and non-equilibrium systems

    CERN Document Server

    Lineweaver, Charles; Niven, Robert; Regenauer-Lieb, Klaus

    2014-01-01

    The Second Law, a cornerstone of thermodynamics, governs the average direction of dissipative, non-equilibrium processes. But it says nothing about their actual rates or the probability of fluctuations about the average. This interdisciplinary book, written and peer-reviewed by international experts, presents recent advances in the search for new non-equilibrium principles beyond the Second Law, and their applications to a wide range of systems across physics, chemistry and biology. Beyond The Second Law brings together traditionally isolated areas of non-equilibrium research and highlights potentially fruitful connections between them, with entropy production playing the unifying role. Key theoretical concepts include the Maximum Entropy Production principle, the Fluctuation Theorem, and the Maximum Entropy method of statistical inference. Applications of these principles are illustrated in such diverse fields as climatology, cosmology, crystal growth morphology, Earth system science, environmental physics, ...

  3. Linking entropy flow with typhoon evolution: a case-study

    International Nuclear Information System (INIS)

    Liu, C; Xu, H; Liu, Y

    2007-01-01

    This paper is mainly aimed at investigating the relationship between entropy flow and an atmospheric system (a typhoon), based on observational analyses covering its whole life-cycle. The formula for calculating entropy flow is derived starting from the Gibbs relation, using data from the NCEP/NCAR reanalysis. The results show that: (i) entropy flow characteristics at different vertical layers of the system are heterogeneous, with predominantly negative entropy flow in a large portion of the troposphere and positive entropy flow at upper levels during its development; (ii) changes in the maximum surface wind velocity or the intensity of a typhoon are synchronous with the total entropy flow around the typhoon centre and its neighbourhood, suggesting that the growth of a severe atmospheric system relies greatly upon the negative entropy flow being strong enough, and that entropy flow analysis might provide a particular point of view and a powerful tool to understand the mechanism responsible for the life-cycle of an atmospheric system and associated weather events; and (iii) the horizontal pattern of negative entropy flow near the surface might contain significant information conducive to the track forecasting of typhoons.

  4. Lattice NRQCD study on in-medium bottomonium spectra using a novel Bayesian reconstruction approach

    International Nuclear Information System (INIS)

    Kim, Seyong; Petreczky, Peter; Rothkopf, Alexander

    2016-01-01

    We present recent results on the in-medium modification of S- and P-wave bottomonium states around the deconfinement transition. Our study uses lattice QCD with N_f = 2+1 light quark flavors to describe the non-perturbative thermal QCD medium between 140 MeV < T < 249 MeV and deploys lattice regularized non-relativistic QCD (NRQCD) effective field theory to capture the physics of heavy quark bound states immersed therein. The spectral functions of the ³S₁ (ϒ) and ³P₁ (χ_b1) bottomonium states are extracted from Euclidean time Monte Carlo simulations using a novel Bayesian prescription, which provides higher accuracy than the Maximum Entropy Method. Based on a systematic comparison of interacting and free spectral functions we conclude that the ground states of both the S-wave (ϒ) and P-wave (χ_b1) channels survive up to T = 249 MeV. Stringent upper limits on the size of the in-medium modification of bottomonium masses and widths are provided.

  5. Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation

    DEFF Research Database (Denmark)

    Brouwer, Thomas; Frellsen, Jes; Liò, Pietro

    2017-01-01

    In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data. In particular, we consider Bayesian nonnegative variants of matrix factorisation and tri-factorisation, and compare non-probabilistic inference, Gibbs sampling, variational Bayesian inference, and a maximum-a-posteriori approach. The variational approach is new for the Bayesian nonnegative models. We compare their convergence, and robustness to noise and sparsity of the data, on both synthetic and real...

  6. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  7. Force-Time Entropy of Isometric Impulse.

    Science.gov (United States)

    Hsieh, Tsung-Yu; Newell, Karl M

    2016-01-01

    The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or as dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single-finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that peak force variability increased either with increasing force level or with shorter time to peak force, which also reduced timing error variability. The peak force entropy and the entropy of time to peak force increased on their respective dimensions as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework, with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse.

  8. A Bayesian analysis of QCD sum rules

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

    A new technique has recently been developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. This approach has the virtue of being able to directly generate the spectral function of a given operator, without the need of making an assumption about its specific functional form. To investigate whether useful results can be extracted within this method, we have first studied the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results show a significant peak in the region of the experimentally observed ρ-meson mass, which is in agreement with earlier QCD sum rules studies and suggests that the Maximum Entropy Method is a strong tool for analyzing QCD sum rules.

  9. Entropy Inequality Violations from Ultraspinning Black Holes.

    Science.gov (United States)

    Hennigar, Robie A; Mann, Robert B; Kubizňák, David

    2015-07-17

    We construct a new class of rotating anti-de Sitter (AdS) black hole solutions with noncompact event horizons of finite area in any dimension and study their thermodynamics. In four dimensions these black holes are solutions to gauged supergravity. We find that their entropy exceeds the maximum implied from the conjectured reverse isoperimetric inequality, which states that for a given thermodynamic volume, the black hole entropy is maximized for Schwarzschild-AdS space. We use this result to suggest more stringent conditions under which this conjecture may hold.

  10. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    Science.gov (United States)

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapirs' native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species, indeed, has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.
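
    MaxEnt as used in this record is the presence/background species distribution modelling approach, which is closely related to penalized logistic regression of presence points against background points. The sketch below illustrates that style of model with scikit-learn on synthetic covariates (temperature, rainfall); it is an approximation under stated assumptions, not the authors' MaxEnt workflow.

    ```python
    # Hedged sketch: a MaxEnt-style presence/background model approximated by
    # regularized logistic regression; covariates and points are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Hypothetical environmental covariates at presence and background points.
    presence = rng.normal(loc=[25.0, 2000.0], scale=[2.0, 300.0], size=(300, 2))
    background = rng.uniform([15.0, 500.0], [35.0, 4000.0], size=(5000, 2))

    X = np.vstack([presence, background])
    y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

    model = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))
    model.fit(X, y)

    # Relative habitat suitability over the background sample.
    suitability = model.predict_proba(background)[:, 1]
    print("fraction of region predicted suitable:", (suitability > 0.5).mean())
    ```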

  11. The Maximum Entropy Limit of Small-scale Magnetic Field Fluctuations in the Quiet Sun

    Science.gov (United States)

    Gorobets, A. Y.; Berdyugina, S. V.; Riethmüller, T. L.; Blanco Rodríguez, J.; Solanki, S. K.; Barthol, P.; Gandorfer, A.; Gizon, L.; Hirzberger, J.; van Noort, M.; Del Toro Iniesta, J. C.; Orozco Suárez, D.; Schmidt, W.; Martínez Pillet, V.; Knölker, M.

    2017-11-01

    The observed magnetic field on the solar surface is characterized by a very complex spatial and temporal behavior. Although feature-tracking algorithms have allowed us to deepen our understanding of this behavior, subjectivity plays an important role in the identification and tracking of such features. In this paper, we continue studies of the temporal stochasticity of the magnetic field on the solar surface without relying either on the concept of magnetic features or on subjective assumptions about their identification and interaction. We propose a data analysis method to quantify fluctuations of the line-of-sight magnetic field by means of reducing the temporal field evolution to a regular Markov process. We build a representative model of fluctuations that converges, in the long-time limit, to the unique stationary (equilibrium) distribution with maximum entropy. We obtained different rates of convergence to equilibrium at fixed noise cutoff for the two data sets. This indicates a strong influence of the data spatial resolution and mixing-polarity fluctuations on the relaxation process. The analysis is applied to observations of magnetic fields in relatively quiet areas around an active region, carried out during the second flight of Sunrise/IMaX, and in quiet-Sun areas at disk center observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory satellite.
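
    The core computational step described here — reducing a discretized field time series to a regular Markov chain and examining its long-time (stationary) distribution — can be sketched compactly. The binning, synthetic signal, and four-state alphabet below are illustrative assumptions only.

    ```python
    # Sketch: estimate a first-order Markov transition matrix from a discretized
    # time series, then obtain its stationary distribution; data is synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    signal = np.cumsum(rng.normal(size=10_000))  # stand-in for a B_LOS time series
    states = np.digitize(signal, np.quantile(signal, [0.25, 0.5, 0.75]))  # 4 states

    K = 4
    T = np.zeros((K, K))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)            # row-stochastic transition matrix

    # Stationary distribution = left eigenvector of T with eigenvalue 1.
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()
    print("stationary distribution:", pi)
    ```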

  12. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    Science.gov (United States)

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
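
    The Rayleigh quotient referred to here is the ratio of filtered power between two classes, wᵀC₁w / wᵀC₂w for a spatial filter w. In the spirit of the techniques discussed, a minimal sketch of finding the filter with the lowest quotient via the generalized eigenproblem (the standard CSP-style computation), on synthetic covariances:

    ```python
    # Sketch: spatial filter minimizing the Rayleigh quotient between two
    # class covariance matrices; EEG "trials" here are synthetic.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(3)
    X1 = rng.normal(size=(1000, 8)) @ np.diag([3, 1, 1, 1, 1, 1, 1, 1])  # class 1
    X2 = rng.normal(size=(1000, 8))                                       # class 2
    C1, C2 = np.cov(X1.T), np.cov(X2.T)

    # eigh solves C1 w = lambda C2 w; the eigenvalues are Rayleigh quotients.
    lams, W = eigh(C1, C2)
    w_min = W[:, 0]                  # filter with the lowest Rayleigh quotient
    print("lowest Rayleigh quotient:", lams[0])
    ```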

  13. Spherical electron momentum density distribution and Bayesian analysis of the renormalization parameter in Li metal

    International Nuclear Information System (INIS)

    Dobrzynski, Ludwik

    2000-01-01

    The Bayesian analysis of the spherical part of the electron momentum density was carried out with the goal of finding the best estimate of the spherically averaged renormalization parameter, z, quantifying the discontinuity in the electron momentum density distribution in Li metal. Three models parametrizing the electron momentum density were considered and nuisance parameters integrated out. The analysis shows that the most likely value of z following from the data of Sakurai et al. is in the range 0.45-0.50, while 0.55 is obtained for the data of Schuelke et al. In the maximum entropy reconstruction of the spherical part of the electron momentum density three different algorithms were used. It is shown that all of them produce essentially the same results. The paper shows that accurate Compton scattering experiments are capable of bringing information on this very important Fermiological aspect of the electron gas in a metal. (author)

  14. Information entropies in antikaon-nucleon scattering and optimal state analysis

    International Nuclear Information System (INIS)

    Ion, D.B.; Ion, M.L.; Petrascu, C.

    1998-01-01

    It is known that Jaynes interpreted the entropy as the expected self-information of a class of mutually exclusive and exhaustive events, while the probability is considered to be the rational degree of belief we assign to events based on available experimental evidence. The axiomatic derivation of Jaynes' principle of maximum entropy, as well as of the Kullback principle of minimum cross-entropy, has been reported. Moreover, the optimal states in the Hilbert space of the scattering amplitude, which are analogous to the coherent states in the Hilbert space of wave functions, were introduced and developed. The possibility that each optimal state possesses a specific minimum entropic uncertainty relation, similar to that of the coherent states, was recently conjectured. In fact, the (angle and angular momentum) information entropies, as well as the entropic angle-angular momentum uncertainty relations, in hadron-hadron scattering have been introduced. The experimental information entropies for pion-nucleon scattering were calculated by using the available phase shift analyses. These results were compared with the information entropies of the optimal states. Optimal state dominance in pion-nucleon scattering was then systematically observed for all P_LAB = 0.02-10 GeV/c. Also, it was shown that the angle-angular momentum entropic uncertainty relations are satisfied with high accuracy by all the experimental information entropies. In this paper the (angle and angular momentum) information entropies of hadron-hadron scattering are experimentally investigated by using the antikaon-nucleon phase shift analysis. Then, it is shown that the experimental entropies are in agreement with the informational entropies of optimal states. The results obtained in this paper can be explained not only by the presence of an optimal background which accompanied the production of the elementary resonances but also by the presence of the optimal resonances. On the other hand

  15. The Role of Configurational Entropy in Amorphous Systems

    Directory of Open Access Journals (Sweden)

    Kirsten A. Graeser

    2010-05-01

    Configurational entropy is an important parameter in amorphous systems. It is involved in thermodynamic considerations, plays an important role in molecular mobility calculations through its appearance in the Adam-Gibbs equation, and provides information on the solubility increase of an amorphous form compared to its crystalline counterpart. This paper presents a calorimetric method which enables the scientist to quickly determine values for the configurational entropy at any temperature and to obtain the maximum amount of information from these measurements.

  16. Decision Aggregation in Distributed Classification by a Transductive Extension of Maximum Entropy/Improved Iterative Scaling

    Directory of Open Access Journals (Sweden)

    George Kesidis

    2008-06-01

    In many ensemble classification paradigms, the function which combines local/base classifier decisions is learned in a supervised fashion. Such methods require common labeled training examples across the classifier ensemble. However, in some scenarios where an ensemble solution is necessitated, common labeled data may not exist: (i) legacy/proprietary classifiers, and (ii) spatially distributed and/or multiple-modality sensors. In such cases, it is standard to apply fixed (untrained) decision aggregation such as voting, averaging, or naive Bayes rules. In recent work, an alternative transductive learning strategy was proposed. There, decisions on test samples were chosen aiming to satisfy constraints measured by each local classifier. This approach was shown to reliably correct for class prior mismatch and to robustly account for classifier dependencies. Significant gains in accuracy over fixed aggregation rules were demonstrated. There are two main limitations of that work. First, feasibility of the constraints was not guaranteed. Second, heuristic learning was applied. Here, we overcome these problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach and fixed rules on a number of UC Irvine datasets.

  17. Entropy resistance analyses of a two-stream parallel flow heat exchanger with viscous heating

    International Nuclear Information System (INIS)

    Cheng Xue-Tao; Liang Xin-Gang

    2013-01-01

    Heat exchangers are widely used in industry, and analyses and optimizations of heat exchanger performance are important topics. In this paper, we define the concept of entropy resistance based on the entropy generation analysis of a one-dimensional heat transfer process. With this concept, a two-stream parallel flow heat exchanger with viscous heating is analyzed and discussed. It is found that minimizing the entropy resistance always leads to the maximum heat transfer rate for the discussed two-stream parallel flow heat exchanger, while minimizing the entropy generation rate, the entropy generation numbers, and the revised entropy generation number does not always do so. (general)

  18. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  19. Maximum entropy state of the quasi-geostrophic bi-disperse point vortex system: bifurcation phenomena under periodic boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Funakoshi, Satoshi; Sato, Tomoyoshi; Miyazaki, Takeshi, E-mail: funakosi@miyazaki.mce.uec.ac.jp, E-mail: miyazaki@mce.uec.ac.jp [Department of Mechanical Engineering and Intelligent Systems, University of Electro-Communications, 1-5-1, Chofugaoka, Chofu, Tokyo 182-8585 (Japan)

    2012-06-01

    We investigate the statistical mechanics of quasi-geostrophic point vortices of mixed sign (bi-disperse system) numerically and theoretically. Direct numerical simulations under periodic boundary conditions are performed using a fast special-purpose computer for molecular dynamics (GRAPE-DR). Clustering of point vortices of like sign is observed and two-dimensional (2D) equilibrium states are formed. It is shown that they are the solutions of the 2D mean-field equation, i.e. the sinh-Poisson equation. The sinh-Poisson equation is generalized to study the 3D nature of the equilibrium states, and a new mean-field equation with the 3D Laplace operator is derived based on the maximum entropy theory. 3D solutions are obtained at very low energy level. These solution branches, however, cannot be traced up to the higher energy level at which the direct numerical simulations are performed, and transitions to 2D solution branches take place when the energy is increased. (paper)
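
    For reference, the 2D mean-field (sinh-Poisson) equation mentioned in this record is commonly written, for a stream function ψ and an inverse-temperature-like parameter λ, as

    ```latex
    \nabla^{2}\psi + \lambda \sinh\psi = 0
    ```

    The paper's contribution is the generalization of this relation to a mean-field equation with the 3D Laplace operator; the exact form of that generalization is given in the paper itself.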

  20. Solutions to the Cosmic Initial Entropy Problem without Equilibrium Initial Conditions

    Directory of Open Access Journals (Sweden)

    Vihan M. Patel

    2017-08-01

    The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem is proposed. Following Penrose, we argue that the evenly distributed matter of the early universe is equivalent to low gravitational entropy. There are two competing explanations for how this initial low gravitational entropy comes about. (1) Inflation and baryogenesis produce a virtually homogeneous distribution of matter with a low gravitational entropy. (2) Dissatisfied with explaining a low gravitational entropy as the product of a ‘special’ scalar field, some theorists argue (following Boltzmann) for a “more natural” initial condition in which the entire universe is in an initial equilibrium state of maximum entropy. In this equilibrium model, our observable universe is an unusual low entropy fluctuation embedded in a high entropy universe. The anthropic principle and the fluctuation theorem suggest that this low entropy region should be as small as possible and have as large an entropy as possible, consistent with our existence. However, our low entropy universe is much larger than needed to produce observers, and we see no evidence for an embedding in a higher entropy background. The initial conditions of inflationary models are as natural as the equilibrium background favored by many theorists.

  1. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    Science.gov (United States)

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require the use of conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.
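
    The Meboot procedure itself is published as an R package (meboot); a simplified Python sketch of its central idea — resampling from a piecewise-uniform maximum entropy density built on the order statistics, then restoring the original time ordering so each replicate retains the series' dependence structure — is given below. Tail expansion and the mean-preserving adjustment of the full algorithm are deliberately omitted, so this is an illustration, not a replacement for the published method.

    ```python
    # Simplified maximum entropy bootstrap replicate (tails and the
    # mean-preserving adjustment of the full algorithm are omitted).
    import numpy as np

    def meboot_replicate(x, rng):
        n = len(x)
        order = np.argsort(x)
        xs = x[order]
        z = (xs[:-1] + xs[1:]) / 2.0                    # intermediate points
        edges = np.concatenate([[xs[0]], z, [xs[-1]]])  # n+1 interval edges
        u = np.sort(rng.uniform(size=n))
        # Map sorted uniforms through the piecewise-linear ME quantile function.
        idx = np.minimum((u * n).astype(int), n - 1)
        frac = u * n - idx
        draws = edges[idx] + frac * (edges[idx + 1] - edges[idx])
        out = np.empty(n)
        out[order] = draws                               # restore rank ordering
        return out

    rng = np.random.default_rng(4)
    x = np.cumsum(rng.normal(size=200))                  # synthetic nonstationary series
    reps = np.array([meboot_replicate(x, rng) for _ in range(999)])
    print("sample mean:", x.mean(), "mean of replicate means:", reps.mean())
    ```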

  2. Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states

    Science.gov (United States)

    Aragone, C.; Mundarain, D.

    1993-01-01

    We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima for the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.

  3. Identification of Random Dynamic Force Using an Improved Maximum Entropy Regularization Combined with a Novel Conjugate Gradient

    Directory of Open Access Journals (Sweden)

    ChunPing Ren

    2017-01-01

    We propose a novel mathematical algorithm to offer a solution for the inverse random dynamic force identification problem in practical engineering. To deal with the random dynamic force identification problem using the proposed algorithm, an improved maximum entropy (IME) regularization technique is transformed into an unconstrained optimization problem, and a novel conjugate gradient (NCG) method is applied to solve the objective function; the combination is abbreviated as the IME-NCG algorithm. The result of the IME-NCG algorithm is compared with those of the ME, ME-CG, ME-NCG, and IME-CG algorithms; it is found that the IME-NCG algorithm is well suited to identifying the random dynamic force due to its smaller root-mean-square error (RMSE), lower restoration time, and fewer iterative steps. An engineering application example shows that the L-curve method, which performs better than the Generalized Cross Validation (GCV) method, is introduced to select the regularization parameter; thus the proposed algorithm can help to alleviate the ill-conditioning in the identification of dynamic forces and to acquire an optimal solution of the inverse problem in practical engineering.

  4. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    Directory of Open Access Journals (Sweden)

    David W Redding

    Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA) when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate method, being among the top two most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account

  5. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    Science.gov (United States)

    Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E

    2017-01-01

    Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA) when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate method, being among the top two most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account for spatial

  6. Bayesian modeling of the assimilative capacity component of nutrient total maximum daily loads

    Science.gov (United States)

    Faulkner, B. R.

    2008-08-01

    Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a total maximum daily load (TMDL) load capacity is developed and applied. The joint distribution of nutrient retention metrics from a literature review of 495 measurements was used for Monte Carlo sampling with a process transfer function for nutrient attenuation. Using the resulting histograms of nutrient retention, reference prior distributions were developed for sites in which some of the metrics contributing to the transfer function were measured. Contributing metrics for the prior include stream discharge, cross-sectional area, fraction of storage volume to free stream volume, denitrification rate constant, storage zone mass transfer rate, dispersion coefficient, and others. Confidence of compliance (CC) that any given level of nutrient retention has been achieved is also determined using this approach. The shape of the CC curve is dependent on the metrics measured and serves in part as a measure of the information provided by the metrics to predict nutrient retention. It is also a direct measurement, with a margin of safety, of the fraction of export load that can be reduced through changing retention metrics. For an impaired stream in western Oklahoma, a combination of prior information and measurement of nutrient attenuation was used to illustrate the proposed approach. This method may be considered for TMDL implementation.
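
    The paper's transfer function and literature-derived joint distribution are specific to its dataset, but the Monte Carlo logic is simple to sketch: sample the contributing metrics from assumed priors, push them through a hypothetical attenuation transfer function, and read the confidence of compliance off the resulting histogram. All distributions and the transfer function below are illustrative assumptions, not the paper's.

    ```python
    # Hedged sketch of the Monte Carlo step with a hypothetical first-order
    # nutrient attenuation transfer function and assumed lognormal priors.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    k = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=n)     # uptake rate [1/h]
    tau = rng.lognormal(mean=np.log(10.0), sigma=0.4, size=n)  # travel time [h]

    retention = 1.0 - np.exp(-k * tau)     # fraction of nutrient load retained

    target = 0.8
    cc = (retention >= target).mean()      # confidence of compliance
    print(f"P(retention >= {target:.0%}) = {cc:.2f}")
    ```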

  7. Entropy in an expanding universe

    International Nuclear Information System (INIS)

    Frautschi, S.

    1982-01-01

    The question of how the observed evolution of organized structures from initial chaos in the expanding universe can be reconciled with the laws of statistical mechanics is studied, with emphasis on effects of the expansion and gravity. Some major sources of entropy increase are listed. An expanding causal region is defined in which the entropy, though increasing, tends to fall further and further behind its maximum possible value, thus allowing for the development of order. The related questions of whether entropy will continue increasing without limit in the future, and whether such increase in the form of Hawking radiation or radiation from positronium might enable life to maintain itself permanently, are considered. Attempts to find a scheme for preserving life based on solid structures fail because events such as quantum tunneling recurrently disorganize matter on a very long but fixed time scale, whereas all energy sources slow down progressively in an expanding universe. However, there remains hope that other modes of life capable of maintaining themselves permanently can be found

  8. Entropy fluxes, endoreversibility, and solar energy conversion

    Science.gov (United States)

    de Vos, A.; Landsberg, P. T.; Baruch, P.; Parrott, J. E.

    1993-09-01

    A formalism illustrating the conversion of radiation energy into work can be obtained in terms of energy and entropy fluxes. Whereas the Landsberg equality was derived for photothermal conversion with zero bandgap, a generalized inequality for photothermal/photovoltaic conversion with a single, but arbitrary, bandgap was deduced. This result was derived from a direct energy and entropy balance. The formalism of endoreversible dynamics was adopted in order to show the correlation with the latter approach. Surprisingly, the generalized Landsberg inequality was derived by optimizing a quantity W*, which attains its maximum value under short-circuit conditions.

  9. Modeling Electric Discharges with Entropy Production Rate Principles

    Directory of Open Access Journals (Sweden)

    Thomas Christen

    2009-12-01

    Under which circumstances are variational principles based on the entropy production rate useful tools for modeling steady states of electric (gas) discharge systems far from equilibrium? It is first shown how various different approaches, such as Steenbeck’s minimum voltage and Prigogine’s minimum entropy production rate principles, are related to the maximum entropy production rate principle (MEPP). Secondly, three typical examples are discussed, which provide a certain insight into the structure of the models that are candidates for MEPP application. Thirdly, it is argued that MEPP, although not being an exact physical law, may provide reasonable model parameter estimates, provided the constraints contain the relevant (nonlinear) physical effects and the parameters to be determined are related to disregarded weak constraints that affect mainly the global entropy production. Finally, it is additionally conjectured that a further reason for the success of MEPP in certain far-from-equilibrium systems might be based on a hidden linearity of the underlying kinetic equation(s).

  10. Evidence of shallow positron traps in ion-implanted InP observed by maximum entropy reconstruction of positron lifetime distribution: a test of MELT

    International Nuclear Information System (INIS)

    Chen, Z.Q.; Wang, S.J.

    1999-01-01

    A newly developed maximum entropy method, realized in the computer program MELT introduced by Shukla et al., was used to analyze positron lifetime spectra measured in semiconductors. Several simulation studies were performed to test the performance of this algorithm. Reliable reconstructions of positron lifetime distributions can be extracted at relatively low counts, which shows the applicability and superiority of this method. Two positron lifetime spectra measured in ion-implanted p-InP(Zn) at 140 and 280 K, respectively, were analyzed by this program. The lifetime distribution differed greatly between the two temperatures, giving direct evidence of the existence of shallow positron traps at low temperature.

  11. Entropy corresponding to the interior of a Schwarzschild black hole

    Directory of Open Access Journals (Sweden)

    Bibhas Ranjan Majhi

    2017-07-01

    The interior volume within the horizon of a black hole is a non-trivial concept which turns out to be very important for explaining several issues in the context of the quantum nature of black holes. Here we show that the entropy contained in the maximum interior volume by massless modes is proportional to the Bekenstein–Hawking expression. The proportionality constant is less than unity, implying that the horizon bears more entropy than the interior. The derivation is very systematic and free of any ambiguity. To this end, the precise value of the energy of the modes living in the interior is derived by constraint analysis. Finally, the implications of the result are discussed.

  12. Entropy corresponding to the interior of a Schwarzschild black hole

    Science.gov (United States)

    Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-07-01

    The interior volume within the horizon of a black hole is a non-trivial concept which turns out to be very important for explaining several issues in the context of the quantum nature of black holes. Here we show that the entropy contained in the maximum interior volume by massless modes is proportional to the Bekenstein-Hawking expression. The proportionality constant is less than unity, implying that the horizon bears more entropy than the interior. The derivation is very systematic and free of any ambiguity. To this end, the precise value of the energy of the modes living in the interior is derived by constraint analysis. Finally, the implications of the result are discussed.

  13. Entropy Coherent and Entropy Convex Measures of Risk

    NARCIS (Netherlands)

    Laeven, R.J.A.; Stadje, M.A.

    2011-01-01

    We introduce two subclasses of convex measures of risk, referred to as entropy coherent and entropy convex measures of risk. We prove that convex, entropy convex and entropy coherent measures of risk emerge as certainty equivalents under variational, homothetic and multiple priors preferences,

  14. Entropy coherent and entropy convex measures of risk

    NARCIS (Netherlands)

    Laeven, Roger; Stadje, M.A.

    2010-01-01

    We introduce entropy coherent and entropy convex measures of risk and prove a collection of axiomatic characterization and duality results. We show in particular that entropy coherent and entropy convex measures of risk emerge as negative certainty equivalents in (the regular and a generalized

  15. Definition and measurement of entropy in high energy heavy ion collisions

    International Nuclear Information System (INIS)

    Remler, E.A.

    1986-01-01

    This talk has two parts: the first on the definition and the second on the measurement of entropy. The connection to nuclear thermodynamics can be retained without the local equilibrium assumption via two steps. The first is relatively simple and goes as follows. The authors make the reasonable assumption that in central collisions, at the moment of maximum compression, the state is similar to one or more fireballs and that the total entropy of each fireball approximates that of an equilibrated system at the same total energy and average density. This entropy, if measurable, would determine much of the thermodynamics of nuclear matter. The second step therefore concerns the measurement of this entropy. This paper develops a method by which entropy may be measured using a minimum amount of theory. In particular, it is not based on any assumption of local equilibrium.

  16. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  17. Population distribution of flexible molecules from maximum entropy analysis using different priors as background information: application to the Φ,Ψ-conformational space of the α-(1→2)-linked mannose disaccharide present in N- and O-linked glycoproteins.

    Science.gov (United States)

    Säwén, Elin; Massad, Tariq; Landersjö, Clas; Damberg, Peter; Widmalm, Göran

    2010-08-21

    The conformational space available to the flexible molecule α-D-Manp-(1→2)-α-D-Manp-OMe, a model for the α-(1→2)-linked mannose disaccharide in N- or O-linked glycoproteins, is determined using experimental data and molecular simulation combined with a maximum entropy approach that leads to a converged population distribution utilizing different input information. A database survey of the Protein Data Bank, in which structures having the constituent disaccharide were retrieved, resulted in an ensemble with >200 structures. Subsequent filtering removed erroneous structures and gave the database (DB) ensemble having three classes of mannose-containing compounds, viz., N- and O-linked structures, and ligands to proteins. A molecular dynamics (MD) simulation of the disaccharide revealed a two-state equilibrium with a major and a minor conformational state, i.e., the MD ensemble. These two different conformation ensembles of the disaccharide were compared to measured experimental spectroscopic data for the molecule in water solution. However, neither of the two populations was compatible with experimental data from optical rotation, NMR ¹H,¹H cross-relaxation rates as well as homo- and heteronuclear ³J couplings. The conformational distributions were subsequently used as background information to generate priors that were used in a maximum entropy analysis. The resulting posteriors, i.e., the population distributions after the application of the maximum entropy analysis, still showed notable deviations that were not anticipated based on the prior information. Therefore, reparameterization of homo- and heteronuclear Karplus relationships for the glycosidic torsion angles Φ and Ψ was carried out, in which the importance of electronegative substituents on the coupling pathway was deemed essential, resulting in four derived equations, two ³J(COCC) and two ³J(COCH), being different for the Φ and Ψ torsions, respectively. These Karplus relationships are denoted

  18. Entropy flow and generation in radiative transfer between surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Z.M.; Basu, S. [Georgia Institute of Technolgy, Atlanta, GA (United States). George W. Woodruff School of Mechanical Engineering

    2007-02-15

    Entropy of radiation has been used to derive the laws of blackbody radiation and determine the maximum efficiency of solar energy conversion. Along with the advancement in thermophotovoltaic technologies and nanoscale heat radiation, there is an urgent need to determine the entropy flow and generation in radiative transfer between nonideal surfaces when multiple reflections are significant. This paper investigates entropy flow and generation when incoherent multiple reflections are included, without considering the effects of interference and photon tunneling. The concept of partial equilibrium is applied to interpret the monochromatic radiation temperature of thermal radiation, T_λ(λ,Ω), which is dependent on both wavelength λ and direction Ω. The entropy flux and generation can thus be evaluated for nonideal surfaces. It is shown that several approximate expressions found in the literature can result in significant errors in entropy analysis even for diffuse-gray surfaces. The present study advances the thermodynamics of nonequilibrium thermal radiation and will have a significant impact on the future development of thermophotovoltaic and other radiative energy conversion devices. (author)

  19. Entropy coherent and entropy convex measures of risk

    NARCIS (Netherlands)

    Laeven, R.J.A.; Stadje, M.

    2013-01-01

    We introduce two subclasses of convex measures of risk, referred to as entropy coherent and entropy convex measures of risk. Entropy coherent and entropy convex measures of risk are special cases of φ-coherent and φ-convex measures of risk. Contrary to the classical use of coherent and convex

  20. Bayesian analysis of systems with random chemical composition: renormalization-group approach to Dirichlet distributions and the statistical theory of dilution.

    Science.gov (United States)

    Vlad, Marcel Ovidiu; Tsuchiya, Masa; Oefner, Peter; Ross, John

    2002-01-01

    We investigate the statistical properties of systems with random chemical composition and try to obtain a theoretical derivation of the self-similar Dirichlet distribution, which is used empirically in molecular biology, environmental chemistry, and geochemistry. We consider a system made up of many chemical species and assume that the statistical distribution of the abundance of each chemical species in the system is the result of a succession of a variable number of random dilution events, which can be described by using the renormalization-group theory. A Bayesian approach is used for evaluating the probability density of the chemical composition of the system in terms of the probability densities of the abundances of the different chemical species. We show that for large cascades of dilution events, the probability density of the composition vector of the system is given by a self-similar probability density of the Dirichlet type. We also give an alternative formal derivation for the Dirichlet law based on the maximum entropy approach, by assuming that the average values of the chemical potentials of different species, expressed in terms of molar fractions, are constant. Although the maximum entropy approach leads formally to the Dirichlet distribution, it does not clarify the physical origin of the Dirichlet statistics and has serious limitations. The random theory of dilution provides a physical picture for the emergence of Dirichlet statistics and makes it possible to investigate its validity range. We discuss the implications of our theory in molecular biology, geochemistry, and environmental science.
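
    The renormalization-group argument is analytical, but the qualitative claim — that cascades of random dilution events drive compositions toward Dirichlet-like statistics — can be illustrated numerically. The dilution model below (independent uniform survival factors per species, renormalized to a composition) is a deliberately crude stand-in for the paper's theory.

    ```python
    # Illustrative simulation: compositions produced by repeated random
    # dilution events, compared against an implied Dirichlet concentration
    # via moment matching on the first species' marginal.
    import numpy as np

    rng = np.random.default_rng(6)
    n_species, n_systems, n_dilutions = 4, 20_000, 30

    abundance = np.ones((n_systems, n_species))
    for _ in range(n_dilutions):
        abundance *= rng.uniform(size=(n_systems, n_species))  # dilution event

    composition = abundance / abundance.sum(axis=1, keepdims=True)

    # For a Dirichlet marginal Beta(a, a0 - a): var = m(1-m)/(a0+1).
    m = composition[:, 0].mean()
    v = composition[:, 0].var()
    alpha0 = m * (1 - m) / v - 1          # implied Dirichlet concentration
    print("mean fraction:", m, "implied alpha0:", alpha0)
    ```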

  1. Quantifying Trace Amounts of Aggregates in Biopharmaceuticals Using Analytical Ultracentrifugation Sedimentation Velocity: Bayesian Analyses and F Statistics.

    Science.gov (United States)

    Wafer, Lucas; Kloczewiak, Marek; Luo, Yin

    2016-07-01

    Analytical ultracentrifugation-sedimentation velocity (AUC-SV) is often used to quantify high molar mass species (HMMS) present in biopharmaceuticals. Although these species are often present in trace quantities, they have received significant attention due to their potential immunogenicity. Commonly, AUC-SV data is analyzed as a diffusion-corrected, sedimentation coefficient distribution, or c(s), using SEDFIT to numerically solve Lamm-type equations. SEDFIT also utilizes maximum entropy or Tikhonov-Phillips regularization to further allow the user to determine relevant sample information, including the number of species present, their sedimentation coefficients, and their relative abundance. However, this methodology has several, often unstated, limitations, which may impact the final analysis of protein therapeutics. These include regularization-specific effects, artificial "ripple peaks," and spurious shifts in the sedimentation coefficients. In this investigation, we experimentally verified that an explicit Bayesian approach, as implemented in SEDFIT, can largely correct for these effects. Clear guidelines on how to implement this technique and interpret the resulting data, especially for samples containing micro-heterogeneity (e.g., differential glycosylation), are also provided. In addition, we demonstrated how the Bayesian approach can be combined with F statistics to draw more accurate conclusions and rigorously exclude artifactual peaks. Numerous examples with an antibody and an antibody-drug conjugate were used to illustrate the strengths and drawbacks of each technique.

  2. Entropy-based Probabilistic Fatigue Damage Prognosis and Algorithmic Performance Comparison

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, a maximum entropy-based general framework for probabilistic fatigue damage prognosis is investigated. The proposed methodology is based on an...

  3. Entropy-based probabilistic fatigue damage prognosis and algorithmic performance comparison

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, a maximum entropy-based general framework for probabilistic fatigue damage prognosis is investigated. The proposed methodology is based on an...

  4. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    In actual applications, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single-sensor fault location. The method first uses a criterion of maximum energy-to-Shannon entropy ratio to select the appropriate wavelet base for signal analysis. Then multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can achieve accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
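
    The exact multi-level time and time-energy entropies are defined in the paper; as a hedged illustration of the general construction, the sketch below computes the Shannon entropy of normalized wavelet subband energies with PyWavelets, for a synthetic healthy signal and a noisier "faulty" one.

    ```python
    # Sketch: Shannon entropy of normalized wavelet subband energies
    # (a simplified relative of the paper's multi-level entropies).
    import numpy as np
    import pywt

    def wavelet_energy_entropy(signal, wavelet="db4", level=5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c**2) for c in coeffs])
        p = energies / energies.sum()             # energy distribution over subbands
        return -np.sum(p * np.log2(p + 1e-12))    # Shannon entropy in bits

    rng = np.random.default_rng(7)
    t = np.linspace(0, 1, 4096)
    healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
    faulty = healthy + 2.0 * (t > 0.5) * rng.normal(size=t.size)  # fault after t=0.5

    print("healthy:", wavelet_energy_entropy(healthy))
    print("faulty: ", wavelet_energy_entropy(faulty))
    ```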

  5. Solution of Inverse Problems using Bayesian Approach with Application to Estimation of Material Parameters in Darcy Flow

    Czech Academy of Sciences Publication Activity Database

    Domesová, Simona; Beres, Michal

    2017-01-01

    Roč. 15, č. 2 (2017), s. 258-266 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords: Bayesian statistics * Cross-Entropy method * Darcy flow * Gaussian random field * inverse problem Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2236

  6. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A tomography algorithm to maximize the entropy of the image, using the Lagrangian multiplier technique and the conjugate gradient method, has been designed for the measurement of the 2D spatial distribution of intense neutral beams of the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation was implemented to test the reconstruction quality for given beam profiles. The algorithm has good applicability to sparse projection data and can thus be used for neutral beam tomography. 8 refs., 3 figs. (Author)

  7. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A tomography algorithm to maximize the entropy of the image, using the Lagrangian multiplier technique and the conjugate gradient method, has been designed for the measurement of the 2D spatial distribution of intense neutral beams of the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation was implemented to test the reconstruction quality for given beam profiles. The algorithm has good applicability to sparse projection data and can thus be used for neutral beam tomography. 8 refs., 3 figs. (Author)
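
    The two records above describe the same algorithm. Its structure — maximize image entropy subject to agreement with sparse projection data — can be sketched as follows. For simplicity this toy version enforces the projections through a quadratic penalty solved with L-BFGS rather than the paper's Lagrangian-multiplier/conjugate-gradient scheme, and the 1D "beam profile" and ray matrix are synthetic.

    ```python
    # Hedged sketch: maximum entropy reconstruction from sparse projections,
    # with projection constraints enforced by a quadratic penalty.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    n = 64
    x_true = np.exp(-((np.arange(n) - 40.0) / 6.0) ** 2)  # synthetic beam profile
    A = rng.uniform(size=(12, n))                          # 12 sparse "projection" rays
    b = A @ x_true

    def neg_objective(u, mu=50.0):
        x = np.exp(u)                                      # positivity via log parametrization
        entropy = -np.sum(x * np.log(x + 1e-12))
        misfit = np.sum((A @ x - b) ** 2)
        return -entropy + mu * misfit                      # minimize = maximize entropy - penalty

    res = minimize(neg_objective, np.zeros(n), method="L-BFGS-B")
    x_rec = np.exp(res.x)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```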

  8. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
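
    The component density at the heart of the mixture is the Plackett-Luce probability of a ranking: items are chosen sequentially, each with probability proportional to its worth parameter. A minimal sketch (the worths below are hypothetical):

    ```python
    # Sketch: Plackett-Luce log-probability of a full ranking.
    import numpy as np

    def plackett_luce_logpdf(ranking, worth):
        """log P(ranking) for a best-to-worst ordering of item indices."""
        w = worth[ranking]
        denoms = np.cumsum(w[::-1])[::-1]   # worth remaining at each choice stage
        return np.sum(np.log(w) - np.log(denoms))

    worth = np.array([0.5, 0.2, 0.2, 0.1])  # illustrative worths for 4 items
    print(np.exp(plackett_luce_logpdf(np.array([0, 1, 2, 3]), worth)))
    ```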

  9. Logarithmic black hole entropy corrections and holographic Renyi entropy

    Energy Technology Data Exchange (ETDEWEB)

    Mahapatra, Subhash [The Institute of Mathematical Sciences, Chennai (India); KU Leuven - KULAK, Department of Physics, Kortrijk (Belgium)

    2018-01-15

    The entanglement and Renyi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Renyi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at the order G{sub D}{sup 0}. The entropic c-function and the inequalities of the Renyi entropy are also satisfied even with the correction terms. (orig.)

  10. Logarithmic black hole entropy corrections and holographic Renyi entropy

    International Nuclear Information System (INIS)

    Mahapatra, Subhash

    2018-01-01

    The entanglement and Renyi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Renyi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at the order G_D^0. The entropic c-function and the inequalities of the Renyi entropy are also satisfied even with the correction terms. (orig.)

  11. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  12. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    Science.gov (United States)

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  13. Maximizing Entropy of Pickard Random Fields for 2x2 Binary Constraints

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren

    2014-01-01

    This paper considers the problem of maximizing the entropy of two-dimensional (2D) Pickard Random Fields (PRF) subject to constraints. We consider binary Pickard Random Fields, which provide a 2D causal finite context model, and use it to define stationary probabilities for 2x2 squares, thus allowing us to calculate the entropy of the field. All possible binary 2x2 constraints are considered and all constraints are categorized into groups according to their properties. For constraints which can be modeled by a PRF approach and with positive entropy, we characterize and provide statistics of the maximum PRF entropy. As examples, we consider the well known hard square constraint along with a few other constraints.
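
    As a small aside on the hard square constraint mentioned above (an elementary sketch, not the paper's PRF machinery): enumerating the admissible binary 2x2 blocks gives a simple upper bound on the per-block entropy of any distribution over such blocks, a constrained field included.

      from itertools import product
      import math

      # positions: 0=(0,0), 1=(0,1), 2=(1,0), 3=(1,1); edges list the adjacent pairs
      edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
      admissible = [p for p in product((0, 1), repeat=4)
                    if all(not (p[a] and p[b]) for a, b in edges)]
      print(len(admissible))             # 7 patterns survive the hard square constraint
      print(math.log2(len(admissible)))  # ~2.807 bits: upper bound on per-block entropy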

  14. Theoretical evaluation of the detectability of random lesions in bayesian emission reconstruction

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

    Detecting cancerous lesions is an important task in positron emission tomography (PET). Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed to deal with the low signal-to-noise ratio in the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the prior parameters in Bayesian reconstruction control the resolution and noise trade-off and hence affect the detectability of lesions in reconstructed images. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Most research has been based on Monte Carlo simulations, which are very time consuming. Building on recent progress in the theoretical analysis of the image properties of statistical reconstructions and the development of numerical observers, here we develop a theoretical approach for fast computation of lesion detectability in Bayesian reconstruction. The results can be used to choose the optimum hyperparameter for maximum lesion detectability. New in this work is the use of theoretical expressions that explicitly model the statistical variation of the lesion and background without assuming that the object variation is (locally) stationary. The theoretical results are validated using Monte Carlo simulations. The comparisons show good agreement between the theoretical predictions and the Monte Carlo results.

  15. Giant onsite electronic entropy enhances the performance of ceria for water splitting.

    Science.gov (United States)

    Naghavi, S Shahab; Emery, Antoine A; Hansen, Heine A; Zhou, Fei; Ozolins, Vidvuds; Wolverton, Chris

    2017-08-18

    Previous studies have shown that a large solid-state entropy of reduction increases the thermodynamic efficiency of metal oxides, such as ceria, for two-step thermochemical water splitting cycles. In this context, the configurational entropy arising from oxygen off-stoichiometry in the oxide has been the focus of most previous work. Here we report a different source of entropy, the onsite electronic configurational entropy, arising from coupling between orbital and spin angular momenta in lanthanide f orbitals. We find that onsite electronic configurational entropy is sizable in all lanthanides, and reaches a maximum value of ≈4.7 k_B per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions. Solid-state entropy of reduction increases the thermodynamic efficiency of ceria for two-step thermochemical water splitting. Here, the authors report a large and different source of entropy, the onsite electronic configurational entropy arising from coupling between orbital and spin angular momenta in f orbitals.

  16. Bayesian image reconstruction for improving detection performance of muon tomography.

    Science.gov (United States)

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
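
    A generic sketch of the shrink-the-ML-update idea described above, on a toy quadratic model: the soft-threshold rule below corresponds to a plain Laplacian prior, whereas the paper derives inverse quadratic and inverse cubic shrinkage functions for generalized Laplacian and Gaussian priors. The forward model H, data y, and sparse targets are made up.

      import numpy as np

      rng = np.random.default_rng(2)
      H = rng.random((40, 30))                       # toy forward model
      x_true = np.zeros(30); x_true[[3, 17]] = 5.0   # sparse "high-Z" targets
      y = H @ x_true + 0.1 * rng.standard_normal(40)

      beta = 0.5                                     # regularization strength
      step = 1.0 / np.linalg.norm(H, 2) ** 2         # step from the spectral norm
      x = np.zeros(30)
      for _ in range(500):
          x_ml = x + step * H.T @ (y - H @ x)        # unregularized gradient step
          x = np.sign(x_ml) * np.maximum(np.abs(x_ml) - step * beta, 0.0)  # shrink
      print(np.round(x, 2))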

  17. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal) based on maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains described as resulting from such processes. A better understanding of the processes, the identification of the most susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modelling might help identify the areas most susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with the best predictive performance. Models whose validation is based on independent data sets present better performance, with the evaluation based on the AUC of the ROC accuracy measure.
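
    For readers unfamiliar with the technique, the following is a minimal presence-only maximum-entropy sketch in the spirit of such species distribution models (synthetic features and presences, not the paper's data): fit p(cell) proportional to exp(lam . features) so that the model's feature expectations match those of the presence records.

      import numpy as np

      rng = np.random.default_rng(3)
      n_cells, n_feat = 500, 4
      X = rng.standard_normal((n_cells, n_feat))     # background environmental features
      true_lam = np.array([1.5, -1.0, 0.0, 0.5])
      p_true = np.exp(X @ true_lam); p_true /= p_true.sum()
      presences = rng.choice(n_cells, size=200, p=p_true)

      lam = np.zeros(n_feat)
      target = X[presences].mean(axis=0)             # empirical feature means
      for _ in range(2000):
          p = np.exp(X @ lam); p /= p.sum()
          lam += 0.1 * (target - p @ X)              # gradient of the log-likelihood
      print(np.round(lam, 2))                        # recovers roughly true_lam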

  18. ENTROPY FUNCTIONAL FOR CONTINUOUS SYSTEMS OF FINITE ENTROPY

    Institute of Scientific and Technical Information of China (English)

    M. Rahimi; A. Riazi

    2012-01-01

    In this article, we introduce the concept of entropy functional for continuous systems on compact metric spaces and prove some of its properties. We also extract the Kolmogorov entropy from the entropy functional.

  19. Entropy of the Mixture of Sources and Entropy Dimension

    OpenAIRE

    Smieja, Marek; Tabor, Jacek

    2011-01-01

    We investigate the problem of the entropy of the mixture of sources. An estimate of the entropy and entropy dimension of a convex combination of measures is given. The proof is based on our alternative definition of the entropy, based on measures instead of partitions.
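
    The kind of estimate in question can be checked numerically: for discrete distributions the mixture entropy obeys the standard bounds sum_i a_i H(p_i) <= H(sum_i a_i p_i) <= sum_i a_i H(p_i) + H(a). The distributions below are arbitrary examples.

      import numpy as np

      def H(p):
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      a = np.array([0.3, 0.7])                       # mixture weights
      p1 = np.array([0.5, 0.25, 0.25, 0.0])
      p2 = np.array([0.1, 0.1, 0.4, 0.4])
      mix = a[0] * p1 + a[1] * p2

      lower = a[0] * H(p1) + a[1] * H(p2)
      print(lower, H(mix), lower + H(a))             # lower <= H(mix) <= upper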

  20. Study on spectral entropy of two-phase flow density wave instability

    International Nuclear Information System (INIS)

    Zhang Zuoyi

    1992-05-01

    By mathematical proof, spectral entropy calculations for simple examples, and a practical two-phase flow system, it has been shown that under the same stochastic input the output spectral entropy of a stable linear system is maximal, while for an unstable linear system the entropy is at a relatively lower level. Because the spectral entropy describes the output uncertainty of a system, and the second law of thermodynamics rules the direction of natural tendency, a spontaneous process can develop only in the direction of increasing uncertainty, and the opposite is impossible. The physical mechanism of the stability of a system can thus be explained as follows: any deviation from the original state of a stable system reduces the spectral entropy and violates the natural tendency, so the system returns to its original state. On the contrary, a deviation from the original state of an unstable system increases the spectral entropy, which enhances the deviation, and the system moves further away from its original state.
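
    A short sketch of the spectral entropy used in the record (a standard construction, shown on synthetic signals): estimate the power spectral density of the output, normalize it to a probability distribution, and take its Shannon entropy.

      import numpy as np
      from scipy.signal import welch

      def spectral_entropy(x, fs=1.0):
          f, psd = welch(x, fs=fs, nperseg=256)      # PSD estimate
          p = psd / psd.sum()                        # normalize to a distribution
          p = p[p > 0]
          return -(p * np.log(p)).sum()

      rng = np.random.default_rng(4)
      t = np.arange(4096)
      noise = rng.standard_normal(4096)              # broadband: high spectral entropy
      tone = np.sin(0.2 * t) + 0.05 * rng.standard_normal(4096)  # near-periodic: low
      print(spectral_entropy(noise), spectral_entropy(tone))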

  1. Use of mutual information to decrease entropy: Implications for the second law of thermodynamics

    International Nuclear Information System (INIS)

    Lloyd, S.

    1989-01-01

    Several theorems on the mechanics of gathering information are proved, and the possibility of violating the second law of thermodynamics by obtaining information is discussed in light of these theorems. Maxwell's demon can lower the entropy of his surroundings by an amount equal to the difference between the maximum entropy of his recording device and its initial entropy, without generating a compensating entropy increase. A demon with human-scale recording devices can reduce the entropy of a gas by a negligible amount only, but the proof of the demon's impracticability leaves open the possibility that systems highly correlated with their environment can reduce the environment's entropy by a substantial amount without increasing entropy elsewhere. In the event that a boundary condition for the universe requires it to be in a state of low entropy when small, the correlations induced between different particle modes during the expansion phase allow the modes to behave like Maxwell's demons during the contracting phase, reducing the entropy of the universe to a low value

  2. Entanglement entropy and differential entropy for massive flavors

    International Nuclear Information System (INIS)

    Jones, Peter A.R.; Taylor, Marika

    2015-01-01

    In this paper we compute the holographic entanglement entropy for massive flavors in the D3-D7 system, for arbitrary mass and various entangling region geometries. We show that the universal terms in the entanglement entropy exactly match those computed in the dual theory using conformal perturbation theory. We derive holographically the universal terms in the entanglement entropy for a CFT perturbed by a relevant operator, up to second order in the coupling; our results are valid for any entangling region geometry. We present a new method for computing the entanglement entropy of any top-down brane probe system using Kaluza-Klein holography and illustrate our results with massive flavors at finite density. Finally we discuss the differential entropy for brane probe systems, emphasising that the differential entropy captures only the effective lower-dimensional Einstein metric rather than the ten-dimensional geometry.

  3. QCD sum rules in a Bayesian approach

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

    A novel technique is developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. The main advantage of this approach lies in its ability to directly generate the spectral function of a given operator. This is done without the need of making an assumption about the specific functional form of the spectral function, such as the 'pole + continuum' ansatz that is frequently used in QCD sum rule studies. Therefore, with this method it should in principle be possible to distinguish narrow pole structures from continuum states. To check whether meaningful results can be extracted within this approach, we have first investigated the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results exhibit a significant peak in the region of the experimentally observed ρ-meson mass, which agrees with earlier QCD sum rule studies and shows that the Maximum Entropy Method is a useful tool for analyzing QCD sum rules.
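
    Structurally, such a Maximum Entropy Method analysis minimizes Q = chi^2/2 - alpha*S for a positive spectral function, with S the Shannon-Jaynes entropy relative to a default model m(omega). The sketch below shows that machinery on a toy Laplace kernel; the kernel, data, default model, and alpha are all illustrative placeholders, not an actual QCD sum-rule setup.

      import numpy as np
      from scipy.optimize import minimize

      omega = np.linspace(0.1, 4.0, 80); dw = omega[1] - omega[0]
      tau = np.linspace(0.5, 5.0, 30)
      K = np.exp(-np.outer(tau, omega)) * dw           # toy Laplace-transform kernel
      rho_true = np.exp(-((omega - 1.5) ** 2) / 0.05)  # a narrow "pole"
      G = K @ rho_true                                 # synthetic correlator data
      sigma, alpha = 1e-3, 1e-3
      m = np.full_like(omega, 0.1)                     # default model m(omega)

      def Q_and_grad(u):                               # rho = m*exp(u) keeps rho > 0
          rho = m * np.exp(u)
          r = K @ rho - G
          S = ((rho - m - rho * np.log(rho / m)) * dw).sum()  # Shannon-Jaynes entropy
          grad_rho = (K.T @ r) / sigma ** 2 + alpha * u * dw
          return 0.5 * (r ** 2).sum() / sigma ** 2 - alpha * S, grad_rho * rho

      res = minimize(Q_and_grad, np.zeros_like(omega), jac=True, method="L-BFGS-B")
      rho_mem = m * np.exp(res.x)
      print("reconstructed peak near", omega[np.argmax(rho_mem)])  # expect ~1.5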

  4. Entropy of localized states and black hole evaporation

    International Nuclear Information System (INIS)

    Olum, K.D.

    1997-01-01

    We call a state 'vacuum bounded' if every measurement performed outside a specified interior region gives the same result as in the vacuum. We compute the maximum entropy of a vacuum-bounded state with a given energy for a one-dimensional model, with the aid of numerical calculations on a lattice. The maximum entropy is larger than it would be for rigid wall boundary conditions by an amount δS, which for large energies is ≲ (1/6) ln(L_in T), where L_in is the length of the interior region. Assuming that the state resulting from the evaporation of a black hole is similar to a vacuum-bounded state, and that the similarity between vacuum-bounded and rigid-wall-bounded problems extends from 1 to 3 dimensions, we apply these results to the black hole information paradox. Under these assumptions we conclude that large amounts of information cannot be emitted in the final explosion of a black hole. copyright 1997 The American Physical Society

  5. Vector entropy imaging theory with application to computerized tomography

    International Nuclear Information System (INIS)

    Wang Yuanmei; Cheng Jianping; Heng, Pheng Ann

    2002-01-01

    Medical imaging theory for x-ray CT and PET is based on image reconstruction from projections. In this paper a novel vector entropy imaging theory under the framework of multiple criteria decision making is presented. We also study the most frequently used image reconstruction methods, namely, least squares, maximum entropy, and filtered back-projection, under the framework of single-criterion optimization. Finally, we introduce some of the results obtained by various reconstruction algorithms using computer-generated noisy projection data from the Hoffman phantom and real CT scanner data. Comparison of the reconstructed images indicates that the vector entropy method performs best in terms of error (the difference between the original phantom data and the reconstruction), smoothness (suppression of noise) and grey-value resolution, and is free of ghost images. (author)

  6. [Prediction of potential geographic distribution of Lyme disease in Qinghai province with Maximum Entropy model].

    Science.gov (United States)

    Zhang, Lin; Hou, Xuexia; Liu, Huixin; Liu, Wei; Wan, Kanglin; Hao, Qin

    2016-01-01

    To predict the potential geographic distribution of Lyme disease in Qinghai with the Maximum Entropy model (MaxEnt), the sero-diagnosis data of Lyme disease in 6 counties (Huzhu, Zeku, Tongde, Datong, Qilian and Xunhua) and environmental and anthropogenic data, including altitude, human footprint, normalized difference vegetation index (NDVI) and temperature, in Qinghai province since 1990 were collected. Using the data from Huzhu, Zeku and Tongde, the potential distribution of Lyme disease in Qinghai was predicted with MaxEnt. The prediction results were compared with the human sero-prevalence of Lyme disease in Datong, Qilian and Xunhua counties. Three hot spots of Lyme disease were predicted in Qinghai, all in the eastern forest areas. Furthermore, NDVI played the most important role in the model prediction, followed by human footprint. Datong, Qilian and Xunhua counties are all in eastern Qinghai: Xunhua is in hot spot area II, Datong is close to the north of hot spot area III, while Qilian, with the lowest sero-prevalence of Lyme disease, is not in the hot spot areas. The data were well modeled in MaxEnt (Area Under Curve = 0.980), and the actual distribution of Lyme disease in Qinghai was consistent with the results of the model prediction. MaxEnt could be used to predict the potential distribution patterns of Lyme disease; the distribution of vegetation and the range and intensity of human activity might be related to the Lyme disease distribution.
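
    The reported Area Under Curve is the usual ROC summary of model fit. As a hedged sketch of that validation step, with made-up stand-ins for suitability scores and sero-status labels:

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      labels = rng.integers(0, 2, size=200)            # 1 = sero-positive site
      scores = labels * 0.3 + rng.random(200) * 0.7    # predicted suitability
      print(roc_auc_score(labels, scores))             # AUC near 1 means good fit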

  7. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
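
    The priors above reduce to familiar frequentist penalties in special cases; as a point of comparison (not the BCLASSO sampler itself), the penalized graphical lasso precision estimate can be fit on toy low-sample data with scikit-learn:

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(6)
      d, n = 10, 25                          # n not much larger than d
      X = rng.standard_normal((n, d))        # toy data, true precision = identity
      model = GraphicalLasso(alpha=0.2).fit(X)
      print(np.round(model.precision_, 2))   # shrunken, positive definite estimate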

  8. Bubble Entropy: An Entropy Almost Free of Parameters.

    Science.gov (United States)

    Manis, George; Aktaruzzaman, Md; Sassi, Roberto

    2017-11-01

    Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and instead count the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
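
    A condensed sketch of the swap-counting idea (not the authors' reference code): embed the series, count bubble sort swaps per vector, and take the entropy of the swap-count distribution. The published definition additionally combines Renyi-2 entropies at dimensions m and m+1; that normalization is omitted here for brevity.

      import numpy as np

      def swap_count(v):
          v, swaps = list(v), 0
          for i in range(len(v)):                      # plain bubble sort,
              for j in range(len(v) - 1 - i):          # counting the swaps
                  if v[j] > v[j + 1]:
                      v[j], v[j + 1] = v[j + 1], v[j]
                      swaps += 1
          return swaps

      def swap_entropy(x, m=10):
          counts = np.bincount(
              [swap_count(x[i:i + m]) for i in range(len(x) - m + 1)],
              minlength=m * (m - 1) // 2 + 1)          # swaps range 0..m(m-1)/2
          p = counts / counts.sum()
          p = p[p > 0]
          return -(p * np.log(p)).sum()

      rng = np.random.default_rng(7)
      print(swap_entropy(rng.standard_normal(2000)))   # irregular series: higher value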

  9. Uncertainty measurement with belief entropy on interference effect in Quantum-Like Bayesian Networks

    OpenAIRE

    Huang, Zhiming; Yang, Lin; Jiang, Wen

    2017-01-01

    Social dilemmas have been regarded as the essence of evolution game theory, in which the prisoner's dilemma game is the most famous metaphor for the problem of cooperation. Recent findings revealed people's behavior violated the Sure Thing Principle in such games. Classic probability methodologies have difficulty explaining the underlying mechanisms of people's behavior. In this paper, a novel quantum-like Bayesian Network was proposed to accommodate the paradoxical phenomenon. The special ne...

  10. Logarithmic black hole entropy corrections and holographic Rényi entropy

    Science.gov (United States)

    Mahapatra, Subhash

    2018-01-01

    The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at the order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.

  11. A Trustworthiness Evaluation Method for Software Architectures Based on the Principle of Maximum Entropy (POME) and the Grey Decision-Making Method (GDMM)

    Directory of Open Access Journals (Sweden)

    Rong Jiang

    2014-09-01

    As the early design-time decision structure, a software architecture plays a key role in the quality of the final software product and of the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making scientific and reasonable decisions on the architecture, which are necessary for the construction of highly trustworthy software. In view of the lack of trustworthiness evaluation and measurement studies for software architecture, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes to use the Principle of Maximum Entropy (POME) and the Grey Decision-making Method (GDMM) as the trustworthiness evaluation method of a software architecture, demonstrates the soundness and rationality of this method, and verifies its feasibility through a case analysis.

  12. Symplectic entropy

    International Nuclear Information System (INIS)

    De Nicola, Sergio; Fedele, Renato; Man'ko, Margarita A; Man'ko, Vladimir I

    2007-01-01

    The tomographic-probability description of quantum states is reviewed. The symplectic tomography of quantum states with continuous variables is studied. The symplectic entropy of states with continuous variables is discussed and its relation to Shannon entropy and information is elucidated. The known entropic uncertainty relations for the probability distributions in position and momentum of a particle are extended, and new uncertainty relations for symplectic entropy are obtained. The special case of symplectic entropy, namely the optical entropy of quantum states, is considered. The entropy associated with the optical tomogram is shown to satisfy the new entropic uncertainty relation. The example of Gaussian states of the harmonic oscillator is studied, and the entropic uncertainty relations for optical tomograms of the Gaussian state are shown to minimize the uncertainty relation.

  13. Applicability of entropy, entransy and exergy analyses to the optimization of the Organic Rankine Cycle

    International Nuclear Information System (INIS)

    Zhu, Yadong; Hu, Zhe; Zhou, Yaodong; Jiang, Liang; Yu, Lijun

    2014-01-01

    Graphical abstract: Fig. 3(a) shows variations of the evaluation parameters with evaporation temperature in the case of prescribed hot and cold streams for R123. It indicates that, among the seven parameters, the minimum entropy generation rate, exergy destruction rate, entransy efficiency and revised entropy generation number and the maximum entransy loss rate all correspond to the maximum output power. However, the minimum entransy dissipation rate is not associated with the output power variation; this can be explained as follows: the entransy dissipation is only one part of the entransy loss rate besides the entransy variation (work entransy), i.e., it does not consider the influence of work output on the change of entransy. - Highlights: • Theories of entropy, exergy and entransy are applied to the optimization of the ORC. • Two commonly utilized working fluids, R123 and N-pentane, are chosen for comparison. • Variable evaporation temperature, hot stream temperature and mass flow rate are considered. • 3-D coordinates are utilized to observe the global variation of parameters. • The concept of entransy loss rate is appropriate for all the cases discussed in this paper. - Abstract: Based on the theories of entropy, entransy and exergy, the concepts of entropy generation rate, revised entropy generation number, exergy destruction rate, entransy loss rate, entransy dissipation rate and entransy efficiency are applied to the optimization of the Organic Rankine Cycle. Cycles operating on R123 and N-pentane have been compared in three common cases: variable evaporation temperature, hot stream temperature and hot stream mass flow rate. The optimization goal is to produce maximum output power. Some numerical analyses and simulations are presented, and the results show that when both the hot and cold stream conditions are fixed, all of the entropy principle, the exergy theory, the entransy loss rate and the entransy efficiency are applicable to the optimization of the

  14. Giant onsite electronic entropy enhances the performance of ceria for water splitting

    DEFF Research Database (Denmark)

    Naghavi, S. Shahab; Emery, Antoine A.; Hansen, Heine Anton

    2017-01-01

    ...lanthanides, and reaches a maximum value of ≈4.7 k_B per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions...

  15. Configurational entropy of charged AdS black holes

    Directory of Open Access Journals (Sweden)

    Chong Oh Lee

    2017-09-01

    When we consider charged AdS black holes in higher dimensional spacetime, the molecule number density along coexistence curves can be numerically extended to the higher dimensional cases. It is found that the number density difference between small and large black holes decreases as the total dimension grows. In particular, we find that the configurational entropy is a concave function of the reduced temperature and reaches a maximum value at the critical (second-order) phase transition point. Furthermore, the bigger the total dimension becomes, the more concave the configurational entropy is, while the reduced pressure becomes more convex.

  16. Simultaneous State and Parameter Estimation Using Maximum Relative Entropy with Nonhomogenous Differential Equation Constraints

    Directory of Open Access Journals (Sweden)

    Adom Giffin

    2014-09-01

    In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters and has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed-form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want not only to filter the noise from our measurements, but also to simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although many assumptions were made throughout the paper to illustrate that the EKF and exponential smoothing are special cases of MrE, we are not "constrained" by these assumptions. In other words, MrE is completely general and can be used in broader ways.
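
    For reference, the EWMA smoother that the record treats as a special case of the MrE filter is the one-line recursion below, shown on a noisy toy signal.

      import numpy as np

      def ewma(x, alpha=0.2):
          s = np.empty_like(x, dtype=float)
          s[0] = x[0]
          for t in range(1, len(x)):
              s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]  # exponential smoothing
          return s

      rng = np.random.default_rng(8)
      signal = np.sin(np.linspace(0, 6, 300)) + 0.3 * rng.standard_normal(300)
      print(np.round(ewma(signal)[:5], 3))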

  17. Entropy equilibrium equation and dynamic entropy production in environment liquid

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The entropy equilibrium equation is the basis of non-equilibrium thermodynamics. In the classical entropy equilibrium equation, however, the internal energy includes the kinetic energy of the fluid micelle relative to the mass center, which is not the mean kinetic energy of molecular motion in thermodynamics. Here a modified entropy equilibrium equation is deduced, based on the concept that the internal energy is just the mean kinetic energy of molecular motion. A dynamic entropy production term is introduced into the entropy equilibrium equation to describe the dynamic process distinctly. The modified entropy equilibrium equation can describe the entropy variation not only of irreversible processes but also of reversible processes in a thermodynamic system. It is more reasonable and suitable for wider applications.

  18. COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    Herb Kunze

    2013-11-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM whose invariant fixed point is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy and sparsity. To solve this multi-criteria program we proceed by scalarization and reduce the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.
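
    The scalarization step can be illustrated on a toy problem (mixture weights over a random dictionary rather than an actual IFSM; all names and trade-off weights are hypothetical): combine the conflicting criteria into a single weighted objective and minimize.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      D = rng.random((50, 8))                 # dictionary columns (stand-in for maps)
      f = rng.random(50)                      # target function samples
      w1, w2, w3 = 1.0, 0.05, 0.05            # trade-off weights

      def objective(c):
          fit = np.linalg.norm(D @ c - f) ** 2            # "collage error" analogue
          p = np.abs(c) / (np.abs(c).sum() + 1e-12)
          entropy = -(p[p > 0] * np.log(p[p > 0])).sum()  # spread across the maps
          sparsity = np.abs(c).sum()                      # L1 sparsity term
          return w1 * fit - w2 * entropy + w3 * sparsity  # weighted-sum scalarization

      res = minimize(objective, np.full(8, 0.1), method="Nelder-Mead")
      print(np.round(res.x, 3))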

  19. ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. II. ARBITRARY PHASE-SPACE OCCUPATION NUMBERS

    International Nuclear Information System (INIS)

    Barnes, Eric I.; Williams, Liliya L. R.

    2012-01-01

    We present an analysis of two thermodynamic techniques for determining equilibria of self-gravitating systems. One is the Lynden-Bell (LB) entropy maximization analysis that introduced violent relaxation. Since we do not use the Stirling approximation, which is invalid at small occupation numbers, our systems have finite mass, unlike LB's isothermal spheres. (Instead of Stirling, we utilize a very accurate smooth approximation for ln x!.) The second analysis extends entropy production extremization to self-gravitating systems, also without the use of the Stirling approximation. In addition to the LB statistical family characterized by the exclusion principle in phase space, and designed to treat collisionless systems, we also apply the two approaches to the Maxwell-Boltzmann (MB) families, which have no exclusion principle and hence represent collisional systems. We implicitly assume that all of the phase space is equally accessible. We derive entropy production expressions for both families and give the extremum conditions for entropy production. Surprisingly, our analysis indicates that extremizing entropy production rate results in systems that have maximum entropy, in both LB and MB statistics. In other words, both thermodynamic approaches lead to the same equilibrium structures.

  20. The improvement of Clausius entropy and its application in entropy analysis

    Institute of Scientific and Technical Information of China (English)

    WU Jing; GUO ZengYuan

    2008-01-01

    The defects of Clausius entropy, which include a premise of reversible process and a process quantity of heat in its definition, are discussed in this paper. Moreover, the heat temperature quotient under reversible conditions, i.e. (δQ/T)_rev, is essentially a process quantity although it is numerically equal to the entropy change. The sum of the internal energy temperature quotient and the work temperature quotient is defined as the improved form of Clausius entropy, and it can be further proved to be a state function. Unlike Clausius entropy, the improved definition consists of system properties without premise, just like other state functions, for example, pressure p and enthalpy h, etc. It is unnecessary to invent reversible paths when calculating entropy change for irreversible processes based on the improved form of entropy since it is independent of process. Furthermore, entropy balance equations for internally and externally irreversible processes are deduced respectively, based on the concepts of thermal reservoir entropy transfer and system entropy transfer. Finally, some examples are presented to show that the improved definition of Clausius entropy provides a clear concept as well as a convenient method for entropy change calculation.