F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
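A minimal sketch of the contrast the abstract draws (the die example, the observed mean 4.5, and the Gaussian width 0.1 are invented for illustration; this is not the authors' code): classic MaxEnt treats the constraint value as exact, while the generalized approach pushes a density over the constraint value through the MaxEnt map.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7, dtype=float)               # outcomes of a die

def maxent_probs(mean):
    """MaxEnt distribution on x with a given mean: p_i ∝ exp(-lam * x_i)."""
    def gap(lam):
        w = np.exp(-lam * x)
        return w @ x / w.sum() - mean
    lam = brentq(gap, -50.0, 50.0)             # solve for the Lagrange multiplier
    w = np.exp(-lam * x)
    return w / w.sum()

p_point = maxent_probs(4.5)                    # classic MaxEnt: constraint exact

# Generalized MaxEnt: the constraint value itself has a density (here Gaussian),
# which induces a density over the MaxEnt probabilities.
rng = np.random.default_rng(0)
p_samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.1, 1000)])
```

The spread of `p_samples` around `p_point` is the uncertainty that classic MaxEnt discards.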
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions, p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
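A sketch of the Toeplitz/Levinson machinery the abstract describes (a generic Levinson-Durbin recursion on a toy autocorrelation, not the authors' seismic code): it returns the prediction-error filter, the error power, and the reflection coefficients whose magnitudes staying below 1 is the stability condition mentioned above.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter
    from autocorrelation r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    refl = []
    for m in range(1, order + 1):
        k = -(r[1:m + 1][::-1] @ a[:m]) / err  # reflection coefficient
        refl.append(k)
        a[:m + 1] = a[:m + 1] + k * a[:m + 1][::-1]
        err *= 1.0 - k * k                     # updated prediction-error power
    return a, err, refl

r = np.array([1.0, 0.5, 0.25])  # autocorrelation of an AR(1) with coefficient 0.5
a, err, refl = levinson_durbin(r, 2)
```

For this AR(1) autocorrelation the recursion recovers the filter [1, -0.5, 0], and both reflection coefficients lie inside the unit interval, as the stability argument requires.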
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross entropy and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions. This makes it possible to apply MENT to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
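The single-constraint claim in this abstract can be checked numerically; the sketch below (support 1..10000 and the target value of ⟨ln x⟩ are illustrative choices, not from the paper) shows that maximizing Shannon entropy subject only to a fixed mean of ln(x) yields p(x) ∝ x^(-alpha), a pure power law, with the exponent alpha playing the role of the Lagrange multiplier.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 10_001, dtype=float)          # observable values 1..10000

def mean_log(alpha):
    """Mean of ln(x) under the power law p(x) ∝ x^(-alpha)."""
    p = x ** (-alpha)
    p /= p.sum()
    return p @ np.log(x)

target = 2.0                                   # the single constraint: <ln x> = 2
alpha = brentq(lambda a: mean_log(a) - target, 1.01, 10.0)
p = x ** (-alpha)
p /= p.sum()                                   # the resulting MaxEnt power law
```

Tightening or loosening the constraint value moves alpha, but the functional form stays a power law, which is the point of the argument.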
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes from this class of maximum-entropy distributions when the constraints are purely kinematic.
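A minimal sketch of the fluctuation-dissipation constraint mentioned in the abstract (illustrative parameters, not the authors' model): an Ornstein-Uhlenbeck velocity dv = -(v/tau) dt + sigma dW has stationary variance sigma^2 * tau / 2, so matching an observed variance fixes sigma = sqrt(2 * var / tau).

```python
import numpy as np

def simulate_ou(tau, var, dt=0.01, n=200_000, seed=0):
    """Euler-Maruyama simulation of an OU velocity process whose noise
    amplitude is tied to the target variance by fluctuation-dissipation."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * var / tau)           # fluctuation-dissipation relation
    kicks = sigma * np.sqrt(dt) * rng.standard_normal(n)
    v = np.zeros(n)
    for t in range(1, n):
        v[t] = v[t - 1] * (1.0 - dt / tau) + kicks[t]
    return v

v = simulate_ou(tau=1.0, var=2.0)              # stationary variance should be ~2
```

Breaking the sigma-tau relation would change the stationary variance, so the simulated process would no longer match the kinematic constraint.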
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Projective Power Entropy and Maximum Tsallis Entropy Distributions
Shinto Eguchi; Shogo Kato; Osamu Komori
2011-01-01
We discuss a one-parameter family of generalized cross entropy between two distributions with the power index, called the projective power entropy. The cross entropy is essentially reduced to the Tsallis entropy if two distributions are taken to be equal. Statistical and probabilistic properties associated with the projective power entropy are extensively investigated including a characterization problem of which conditions uniquely determine the projective power entropy up to the power index...
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel kernel-based maximum entropy clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
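A toy version of the piecewise-linear trick described in the abstract (a 6-point problem with an invented mean constraint, not the authors' restoration code): each convex term p_i ln p_i is replaced by its epigraph over tangent lines at grid points a_k, i.e. t_i >= (ln a_k + 1) p_i - a_k, and minimizing the sum of the t_i becomes a linear program.

```python
import numpy as np
from scipy.optimize import linprog

n = 6
x = np.arange(1, n + 1, dtype=float)
a_grid = np.linspace(0.01, 1.0, 20)            # tangent points for p*ln(p)

c = np.concatenate([np.zeros(n), np.ones(n)])  # variables [p_1..p_n, t_1..t_n]

A_ub, b_ub = [], []
for i in range(n):
    for a in a_grid:
        row = np.zeros(2 * n)
        row[i] = np.log(a) + 1.0               # (ln a + 1) * p_i - t_i <= a
        row[n + i] = -1.0
        A_ub.append(row)
        b_ub.append(a)

A_eq = np.vstack([np.concatenate([np.ones(n), np.zeros(n)]),   # sum p = 1
                  np.concatenate([x, np.zeros(n)])])           # mean constraint
b_eq = np.array([1.0, 4.5])

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n + [(None, None)] * n)
p = res.x[:n]                                  # approximate MaxEnt solution
```

More tangent points tighten the piecewise-linear envelope, trading LP size for accuracy, which is the parameter trade-off the abstract alludes to.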
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
On the maximum entropy principle in non-extensive thermostatistics
Naudts, Jan
2004-01-01
It is possible to derive the maximum entropy principle from thermodynamic stability requirements. Using as a starting point the equilibrium probability distribution, currently used in non-extensive thermostatistics, it turns out that the relevant entropy function is Renyi's alpha-entropy, and not Tsallis' entropy.
Maximum-entropy clustering algorithm and its global convergence analysis
[Anonymous]
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
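A sketch of the soft generalization this abstract describes (data, temperature, and cluster count are illustrative; not the authors' formulation): memberships are Boltzmann weights u_ij ∝ exp(-||x_i - c_j||^2 / T), so the temperature T controls the softness and T → 0 recovers hard C-means.

```python
import numpy as np

def maxent_cluster(X, k, T=0.5, iters=100, seed=0):
    """Maximum-entropy (soft) clustering: Boltzmann memberships, mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        u = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)  # shifted for stability
        u /= u.sum(axis=1, keepdims=True)      # soft memberships per point
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return centers, u

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),  # two well-separated blobs
               rng.normal(5.0, 0.3, (50, 2))])
centers, u = maxent_cluster(X, k=2)
```

On well-separated data the memberships are nearly hard and the centers land on the blob means; raising T smooths the assignments toward the global mean.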
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) vh, and show that it becomes maximum around vh = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by vh ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
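A sketch of the pairwise maximum-entropy (Ising-like) model in the spirit of this abstract (synthetic binary data stand in for recession/expansion flags; not the authors' G7 dataset): states s_i ∈ {-1,+1} with P(s) ∝ exp(h·s + ½ s·J·s), fitted by matching the model's first and second moments to the data via gradient ascent on the likelihood, with the partition function summed exactly for a small system.

```python
import numpy as np
from itertools import product

def fit_pairwise(data, lr=0.2, steps=1000):
    """Fit fields h and couplings J by moment matching (exact enumeration)."""
    n = data.shape[1]
    states = np.array(list(product([-1.0, 1.0], repeat=n)))
    m_data = data.mean(axis=0)                 # target magnetizations
    C_data = (data.T @ data) / len(data)       # target pair correlations
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()                           # exact model distribution
        m = p @ states
        C = np.einsum('s,si,sj->ij', p, states, states)
        h += lr * (m_data - m)                 # gradient ascent on log-likelihood
        J += lr * (C_data - C)
        np.fill_diagonal(J, 0.0)
    return h, J

rng = np.random.default_rng(2)
data = rng.choice([-1.0, 1.0], size=(400, 3))  # stand-in for recession flags
h, J = fit_pairwise(data)
```

For larger systems the exact enumeration must be replaced by sampling, and, as the abstract notes, pairwise terms alone become increasingly inadequate.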
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features collected by constructing a codebook with a set of codewords for each pixel. We also describe how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the algorithm is not new in essence, and construct two examples showing that the iterative sequence generated by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, our paper shows that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Maximum-Likelihood Estimation of the Entropy of an Attractor
Schouten, J. C.; Takens, F.; van den Bleek, C. M.
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimate.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 as N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid to represent convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate: H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes associated with power-law statistics, and finally for multinomial mixture processes.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy ... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results ... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, the most concise to date.
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum solution of the model is exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy frequency distribution is equivalent to the distribution given by the Hardy-Weinberg equilibrium law at one locus. They further assumed that the maximum entropy frequency distribution is equivalent to all genetic equilibrium distributions. This is incorrect, however: the maximum entropy distribution is equivalent to the Hardy-Weinberg equilibrium distribution only at one locus or at a limited number of loci. The case of a limited number of loci is proved in this paper. Finally, we also discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
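The single-locus equivalence discussed above can be checked numerically: maximizing the genotype entropy relative to the multiplicity prior (1, 2, 1) under a fixed allele frequency recovers the Hardy-Weinberg proportions p², 2pq, q². A minimal sketch (the prior weighting and the grid search are illustrative choices, not taken from the paper):

```python
import math

# One locus, two alleles A/a with allele frequency p(A) = 0.7.
# Genotype multiplicities AA:1, Aa:2, aa:1 give the prior (0.25, 0.5, 0.25).
P_A = 0.7
prior = (0.25, 0.5, 0.25)

def relative_entropy(f, m):
    """S = -sum_k f_k ln(f_k / m_k), maximized under the allele-frequency constraint."""
    return -sum(fk * math.log(fk / mk) for fk, mk in zip(f, m))

# One free parameter: the heterozygote frequency f_Aa, with
# f_AA + f_Aa/2 = P_A and f_AA + f_Aa + f_aa = 1.
best_f2, best_S = None, float("-inf")
steps = 6000
for k in range(1, steps):
    f2 = 0.6 * k / steps          # f_Aa in (0, 0.6) keeps all frequencies positive
    f1 = P_A - f2 / 2.0           # f_AA
    f3 = 1.0 - f1 - f2            # f_aa
    S = relative_entropy((f1, f2, f3), prior)
    if S > best_S:
        best_S, best_f2 = S, f2

# Hardy-Weinberg predicts f_Aa = 2pq = 2 * 0.7 * 0.3 = 0.42
```

The grid search lands on the Hardy-Weinberg heterozygote frequency; without the (1, 2, 1) multiplicity prior, a plain entropy maximization would not, which is the subtlety behind the paper's correction.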
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and an unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
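For the unconstrained continuous case, the closed form behind that straight-line relationship is the generalized Gaussian; as a sketch (a standard result, with one common parameterization assumed):

```latex
f^{*}(x) \;=\; \frac{p}{2\,\alpha\,\Gamma(1/p)}\, e^{-|x/\alpha|^{p}},
\qquad \mathbb{E}|X|^{p} \;=\; \frac{\alpha^{p}}{p},
\qquad
h(f^{*}) \;=\; \frac{1}{p} \;+\; \ln\!\bigl(2\,\alpha\,\Gamma(1+1/p)\bigr)
\;=\; \ln \lVert X\rVert_{p} \;+\; \frac{\ln p}{p} \;+\; \frac{1}{p}
\;+\; \ln\!\bigl(2\,\Gamma(1+1/p)\bigr).
```

Since only the first term depends on the data, the maximum differential entropy is linear, with unit slope, in the logarithm of the Lp norm.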
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotic systems, such as obstacle avoidance, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could best be applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of Maximum Entropy in automatic learning for robot control systems. This work presents a preliminary study of the automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that lie between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
Maximum-entropy distributions of correlated variables with prespecified marginals.
Larralde, Hernán
2012-12-01
The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the "most unbiased" choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method for solving partial differential equations in large-deformation and crack propagation problems. However, the introduction of the Shannon-Jaynes entropy principle into scattered data approximation has departed from the traditional way of defining approximation functions, resulting in maximum entropy approximants. In addition, an objective functional which controls the degree of locality results in local maximum entropy approximants. These are based on an information-theoretic Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function, and the proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the choice of the locality parameter, which defines the degree of locality, and of the priors: Gaussian, cubic spline, and quartic spline functions, and on their effect on the behavior of local maximum entropy approximants.
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from the simple information of the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the Maximum Entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
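The prior-dependent entropy referred to above is commonly written in the Skilling form; as a sketch (normalization conventions vary between authors):

```latex
S[\rho] \;=\; \int \Bigl[\rho(\mathbf r) - m(\mathbf r)
  - \rho(\mathbf r)\,\ln\frac{\rho(\mathbf r)}{m(\mathbf r)}\Bigr]\, d^{3}r ,
```

which attains its maximum, S = 0, at ρ(r) = m(r) when no data constrain the map, so that any departure from the model in the reconstructed density must be paid for in entropy.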
A New Detection Approach Based on the Maximum Entropy Model
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents an information retrieval design approach in which queries to a computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs are ranked by probability of usefulness estimated via the "maximum entropy principle." Boolean and weighted request systems are discussed.…
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference, one assumes that this partial information takes the form of a constraint on allowed probability distributions.
A Maximum Entropy Method for Constrained Semi-Infinite Programming Problems
ZHOU Guanglu; WANG Changyu; SHI Zhenjun; SUN Qingying
1999-01-01
This paper presents a new method, called the maximum entropy method, for solving semi-infinite programming problems, in which the semi-infinite programming problem is approximated by one with a single constraint. The convergence properties of this method are discussed. Numerical examples are given to show the high efficiency of the algorithm.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: the estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
Enzyme kinetics and the maximum entropy production principle.
Dobovišek, Andrej; Zupanović, Paško; Brumen, Milan; Bonačić-Lošić, Zeljana; Kuić, Domagoj; Juretić, Davor
2011-03-01
A general proof is derived that entropy production can be maximized with respect to the rate constants of any enzymatic transition. This result is used to test the assumption that the biological evolution of an enzyme is accompanied by an increase of entropy production in its internal transitions, and that such an increase can serve to quantify the progress of enzyme evolution. The state of maximum entropy production would correspond to a fully evolved enzyme. As an example, the internal transition ES↔EP in a generalized reversible Michaelis-Menten three-state scheme is analyzed. Good agreement is found between the experimentally determined values of the forward rate constant in the internal transition ES→EP for three types of β-lactamase enzymes and their optimal values predicted by the maximum entropy production principle, which agrees with earlier observations that β-lactamase enzymes are nearly fully evolved. The optimization of rate constants as a consequence of a basic physical principle, which is the subject of this paper, is a completely different concept from (a) net metabolic flux maximization or (b) entropy production minimization (in the static head state), both also proposed to be tightly connected to biological evolution.
MB Distribution and its application using maximum entropy approach
Bhadra Suman
2016-01-01
Full Text Available The Maxwell-Boltzmann distribution with a maximum entropy approach has been used to study the variation of political temperature and heat in a locality. We have observed that the political temperature rises without generating any political heat when political parties increase their attractiveness through intense publicity but voters do not shift their loyalties. It has also been shown that political heat is generated and political entropy increases, with political temperature remaining constant, when parties do not change their attractiveness but voters shift their loyalties (to more attractive parties).
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
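Burg's recursion itself is short. A textbook-style pure-Python sketch (not the authors' code; the test signal and model order below are illustrative) that fits an autoregressive model and evaluates its maximum-entropy spectrum:

```python
import cmath
import math
import random

def burg(x, order):
    """Burg's maximum-entropy method: fit an AR(order) model to x,
    returning the AR polynomial a (with a[0] = 1) and residual power e."""
    n = len(x)
    a = [1.0]
    e = sum(v * v for v in x) / n
    f = list(x)  # forward prediction errors
    b = list(x)  # backward prediction errors
    for k in range(1, order + 1):
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(k, n))
        den = sum(f[i] * f[i] + b[i - 1] * b[i - 1] for i in range(k, n))
        ref = num / den  # reflection coefficient, |ref| <= 1
        aa = a + [0.0]
        a = [aa[i] + ref * aa[k - i] for i in range(k + 1)]  # Levinson update
        new_f, new_b = f[:], b[:]
        for i in range(k, n):
            new_f[i] = f[i] + ref * b[i - 1]
            new_b[i] = b[i - 1] + ref * f[i]
        f, b = new_f, new_b
        e *= 1.0 - ref * ref
    return a, e

def burg_psd(a, e, freq):
    """AR spectrum (up to normalization) at a normalized frequency."""
    z = sum(ak * cmath.exp(-2j * math.pi * freq * k) for k, ak in enumerate(a))
    return e / abs(z) ** 2

# Noisy sinusoid at normalized frequency 0.1; the MEM spectrum peaks there
random.seed(1)
x = [math.sin(2 * math.pi * 0.1 * t) + 0.2 * random.gauss(0, 1) for t in range(256)]
a, e = burg(x, 8)
```

The sharp AR peak from short records is the resolution advantage over the FFT that the abstract reports.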
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Delocalized Epidemics on Graphs: A Maximum Entropy Approach
Sahneh, Faryad Darabi; Scoglio, Caterina
2016-01-01
The susceptible-infected-susceptible (SIS) epidemic process on complex networks can show metastability, resembling an endemic equilibrium. In a general setting, the metastable state may involve a large portion of the network, or it can be localized on small subgraphs of the contact network. Localized infections are not interesting because a true outbreak concerns network-wide invasion of the contact graph rather than localized infection of certain sites within the contact network. Existing approaches to the localization phenomenon suffer from a major drawback: they fully rely on the steady-state solution of mean-field approximate models in the neighborhood of their phase transition point, where their approximation accuracy is worst, as statistical physics tells us. We propose a dispersion entropy measure that quantifies the localization of infections in a generic contact graph. Formulating a maximum entropy problem, we find an upper bound for the dispersion entropy of the possible metastable state in the exa...
Maximum Entropy Production and Non-Gaussian Climate Variability
Sura, Philip
2016-01-01
Earth's atmosphere is in a state far from thermodynamic equilibrium. For example, the large scale equator-to-pole temperature gradient is maintained by tropical heating, polar cooling, and a midlatitude meridional eddy heat flux predominantly driven by baroclinically unstable weather systems. Based on basic thermodynamic principles, it can be shown that the meridional heat flux, in combination with the meridional temperature gradient, acts to maximize entropy production of the atmosphere. In fact, maximum entropy production (MEP) has been successfully used to explain the observed mean state of the atmosphere and other components of the climate system. However, one important feature of the large scale atmospheric circulation is its often non-Gaussian variability about the mean. This paper presents theoretical and observational evidence that some processes in the midlatitude atmosphere are significantly non-Gaussian to maximize entropy production. First, after introducing the basic theory, it is shown that the ...
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
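In the path-independent special case, the maximum path entropy walk on a graph has a closed form in terms of the leading eigenpair of the adjacency matrix (the Ruelle-Bowen walk). A hedged pure-Python sketch (the path-graph example and power-iteration details are ours, not the paper's):

```python
def merw(adj, iters=3000):
    """Maximal-entropy random walk on an undirected graph.
    Transitions P[i][j] = A[i][j] * psi[j] / (lam * psi[i]) use the leading
    eigenpair (lam, psi) of the adjacency matrix A; the stationary law is
    pi[i] ∝ psi[i]**2, which generally differs from the degree-proportional
    (Boltzmann-like) distribution of the ordinary random walk."""
    n = len(adj)
    psi = [1.0] * n
    for _ in range(iters):
        # power iteration on A + I (the shift handles bipartite graphs)
        nxt = [psi[i] + sum(adj[i][j] * psi[j] for j in range(n)) for i in range(n)]
        m = max(nxt)
        psi = [v / m for v in nxt]
    lam = sum(adj[0][j] * psi[j] for j in range(n)) / psi[0]
    P = [[adj[i][j] * psi[j] / (lam * psi[i]) for j in range(n)] for i in range(n)]
    z = sum(p * p for p in psi)
    pi = [p * p / z for p in psi]
    return P, pi, lam

# Example: 4-node path graph 0-1-2-3
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
P, pi, lam = merw(A)
```

On this path graph the stationary distribution concentrates on the central nodes more strongly than the degree-proportional law of the ordinary walk, illustrating the abstract's point that the ME stationary distribution can differ significantly from the Boltzmann one.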
Maximum information entropy: a foundation for ecological theory.
Harte, John; Newman, Erica A
2014-07-01
The maximum information entropy (MaxEnt) principle is a successful method of statistical inference that has recently been applied to ecology. Here, we show how MaxEnt can accurately predict patterns such as species-area relationships (SARs) and abundance distributions in macroecology and be a foundation for ecological theory. We discuss the conceptual foundation of the principle, why it often produces accurate predictions of probability distributions in science despite not incorporating explicit mechanisms, and how mismatches between predictions and data can shed light on driving mechanisms in ecology. We also review possible future extensions of the maximum entropy theory of ecology (METE), a potentially important foundation for future developments in ecological theory.
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations is thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
Optical and terahertz spectra analysis by the maximum entropy method.
Vartiainen, Erik M; Peiponen, Kai-Erik
2013-06-01
Phase retrieval is one of the classical problems in various fields of physics, including x-ray crystallography, astronomy and spectroscopy. It arises when only an amplitude measurement of the electric field can be made while both amplitude and phase of the field are needed for obtaining the desired material properties. In optical and terahertz spectroscopies, in particular, phase retrieval is a one-dimensional problem, which is considered unsolvable in general. Nevertheless, an approach utilizing the maximum entropy principle has proven to be a feasible tool in various applications of optical spectroscopy, both linear and nonlinear, as well as terahertz spectroscopy, where the one-dimensional phase retrieval problem arises. In this review, we focus on phase retrieval using the maximum entropy method in various spectroscopic applications. We review the theory behind the method and illustrate through examples why and how the method works, as well as discuss its limitations.
Resolution of overlapping ambiguity strings based on maximum entropy model
ZHANG Feng; FAN Xiao-zhong
2006-01-01
The resolution of overlapping ambiguity strings (OAS) is studied based on the maximum entropy model. There are two model outputs, where either the first two characters form a word or the last two characters form a word. The features of the model include one word in the context of the OAS, the current OAS, and the word probability relation of the two kinds of segmentation results. OAS in the training text are found by the combination of the FMM and BMM segmentation methods. After feature tagging they are used to train the maximum entropy model. The People's Daily corpus of January 1998 is used in training and testing. Experimental results show a closed test precision of 98.64% and an open test precision of 95.01%. The open test precision is 3.76% better than that of the common word probability method.
Time series analysis by the Maximum Entropy method
Kirk, B.L.; Rust, B.W.; Van Winkle, W.
1979-01-01
The principal subject of this report is the use of the Maximum Entropy method for spectral analysis of time series. The classical Fourier method is also discussed, mainly as a standard for comparison with the Maximum Entropy method. Examples are given which clearly demonstrate the superiority of the latter method over the former when the time series is short. The report also includes a chapter outlining the theory of the method, a discussion of the effects of noise in the data, a chapter on significance tests, a discussion of the problem of choosing the prediction filter length, and, most importantly, a description of a package of FORTRAN subroutines for making the various calculations. Cross-referenced program listings are given in the appendices. The report also includes a chapter demonstrating the use of the programs by means of an example. Real time series like the lynx data and sunspot numbers are also analyzed. 22 figures, 21 tables, 53 references.
Collective behaviours in the stock market -- A maximum entropy approach
Bury, Thomas
2014-01-01
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rules design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial...
Incorporating Linguistic Structure into Maximum Entropy Language Models
FANG GaoLin(方高林); GAO Wen(高文); WANG ZhaoQi(王兆其)
2003-01-01
In statistical language models, how to integrate diverse linguistic knowledge in a general framework for long-distance dependencies is a challenging issue. In this paper, an improved language model incorporating linguistic structure into a maximum entropy framework is presented. The proposed model combines a trigram model with the structure knowledge of base phrases: the trigram captures the local relations between words, while the structure knowledge of base phrases represents the long-distance relations between syntactical structures. The knowledge of syntax, semantics and vocabulary is integrated into the maximum entropy framework. Experimental results show that the proposed model improves language model perplexity by 24% and increases the sign language recognition rate by about 3% compared with the trigram model.
Combining experiments and simulations using the maximum entropy principle.
Wouter Boomsma
2014-02-01
Full Text Available A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
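The perturbation described in this abstract has a standard one-observable form: minimally tilt the snapshot weights, w_i ∝ exp(λ f_i), until the reweighted average of the observable matches the experiment. A schematic pure-Python sketch (the data and the single-observable restriction are our simplifications, not the papers' general machinery):

```python
import math

def maxent_reweight(f_sim, f_exp, lo=-100.0, hi=100.0):
    """Reweight simulation snapshots so the average of observable f matches
    the experimental value f_exp, perturbing the uniform weights as little
    as possible (minimum relative entropy). The solution has exponential-
    tilt form w_i ∝ exp(lam * f_i); lam is found by bisection."""
    def weights(lam):
        t = [lam * fi for fi in f_sim]
        m = max(t)  # subtract the max exponent for numerical safety
        w = [math.exp(ti - m) for ti in t]
        z = sum(w)
        return [wi / z for wi in w]
    def avg(lam):
        return sum(fi * wi for fi, wi in zip(f_sim, weights(lam)))
    # avg(lam) is increasing in lam; bisect until it matches f_exp
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if avg(mid) < f_exp:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return weights(lam), lam

# Example: four snapshots with observable values 1..4; experiment says 3.2
w, lam = maxent_reweight([1.0, 2.0, 3.0, 4.0], 3.2)
```

Because the experimental average exceeds the unweighted simulation average, λ comes out positive and the weights increase monotonically with the observable.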
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimensionality reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than the baseline and the highest F-score for the fine-grained English All-Words subtask.
Maximum-entropy principle as Galerkin modelling paradigm
Noack, Bernd R.; Niven, Robert K.; Rowley, Clarence W.
2012-11-01
We show how the empirical Galerkin method, leading e.g. to POD models, can be derived from maximum-entropy principles building on Noack & Niven 2012 JFM. In particular, principles are proposed (1) for the Galerkin expansion, (2) for the Galerkin system identification, and (3) for the probability distribution of the attractor. Examples will illustrate the advantages of the entropic modelling paradigm. Partially supported by the ANR Chair of Excellence TUCOROM and an ADFA/UNSW Visiting Fellowship.
ON A GENERALIZATION OF THE MAXIMUM ENTROPY THEOREM OF BURG
JOSÉ MARCANO
2017-01-01
Full Text Available In this article we introduce some matrix manipulations that allow us to obtain a version of the original Christoffel-Darboux formula, which is of interest in many applications of linear algebra. Using these matrix developments and Jensen's inequality, we obtain the main result of this proposal, which is the generalization of the maximum entropy theorem of Burg for multivariate processes.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Erik Van der Straeten
2009-11-01
Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2017-04-01
Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under constraints on the generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to the generalized Pareto family.
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transformations (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing data point to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, sampling only 75 out of 512 complex points (15% sampling), would lack (512-75)×2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ~450 times faster in this case, since it requires only two FTs. This allows reduction of the computational time from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where the original algorithm required days of CPU time on high-performance computing clusters, while the new algorithm requires only a few minutes of calculation on a regular laptop computer.
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Color Image Enhancement Based on Maximum Fuzzy Entropy
QU Yi; XU Li-hong; KANG Qi
2004-01-01
A color image enhancement approach based on maximum fuzzy entropy and a genetic algorithm is proposed in this paper. It enhances color images by stretching the contrast of the S and I components respectively in the HSI color representation. The image is transformed from the property domain to the fuzzy domain with the S-function. To preserve as much information as possible in the fuzzy domain, the fuzzy entropy function is used as the objective function in a genetic algorithm that optimizes the three parameters of the S-function. The sigmoid function is applied to intensify the membership values, and the results are transformed back to the property domain to produce the enhanced image. Experiments show the effectiveness of the approach.
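The ingredients named in this abstract (S-function fuzzification, fuzzy entropy as objective, intensification of memberships) can be sketched in a few lines. In this illustrative pure-Python sketch a grid search stands in for the paper's genetic algorithm, and the pixel values, window parameters and INT-style intensifier are our assumptions:

```python
import math

def s_function(x, a, c):
    """Zadeh-style S-function mapping an intensity x in [0,1] to a fuzzy
    membership, with crossover point b = (a + c) / 2."""
    b = 0.5 * (a + c)
    if x <= a:
        return 0.0
    if x <= b:
        return 2.0 * ((x - a) / (c - a)) ** 2
    if x < c:
        return 1.0 - 2.0 * ((x - c) / (c - a)) ** 2
    return 1.0

def fuzzy_entropy(mus):
    """Shannon-style fuzzy entropy of a set of membership values."""
    h = 0.0
    for m in mus:
        for p in (m, 1.0 - m):
            if p > 0.0:
                h -= p * math.log(p)
    return h / len(mus)

def intensify(mu):
    """Contrast intensification (INT operator): push memberships away from 0.5."""
    return 2.0 * mu * mu if mu <= 0.5 else 1.0 - 2.0 * (1.0 - mu) ** 2

# Choose the S-function window (a, c) maximizing fuzzy entropy by grid
# search (standing in for the paper's genetic algorithm), then intensify.
pixels = [0.05, 0.2, 0.4, 0.5, 0.6, 0.8, 0.95]
best = max(((a, a + 0.6) for a in [0.0, 0.1, 0.2, 0.3, 0.4]),
           key=lambda ac: fuzzy_entropy([s_function(x, *ac) for x in pixels]))
enhanced = [intensify(s_function(x, *best)) for x in pixels]
```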
Maximum-entropy for the laser fusion problem
Madkour, M.A. [Mansoura Univ. (Egypt). Dept. of Phys.
1996-09-01
The problem of the heat flux at the critical surface and the surface of a pellet of deuterium and tritium (the conduction zone) heated by a laser is considered. Only ion-electron collisions are allowed for; i.e., the linear transport equation with boundary conditions is used to describe the problem. The maximum-entropy approach is used to calculate the electron density and temperature across the conduction zone as well as the heat flux. Numerical results are given and compared with those of Rouse and Williams and of El-Wakil et al. (orig.).
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Full Text Available Maximum entropy (Maxent modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
…applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained … in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results…
On some problems of the maximum entropy ansatz
K Bandyopadhyay; K Bhattacharyya; A K Bhattacharyya
2000-03-01
Some problems associated with the use of the maximum entropy principle, namely, (i) possible divergence of the series that is exponentiated, (ii) input-dependent asymptotic behaviour of the density function resulting from the truncation of the said series, and (iii) non-vanishing of the density function at the boundaries of a finite domain are pointed out. Prescriptions for remedying the aforesaid problems are put forward. Pilot calculations involving the ground quantum eigenenergy states of the quartic oscillator, the particle-in-a-box model, and the classical Maxwellian speed and energy distributions lend credence to our approach.
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series, and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes and provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
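The "usual frequency sampling" baseline against which the MaxEnt reconstruction is compared can be sketched as a row-normalized transition count over the discretized series; the two-symbol sequence below is illustrative, not the authors' data:

```python
from collections import Counter

def transition_matrix(seq):
    """Frequency-sampling estimate of a Markov transition matrix from a symbol sequence."""
    states = sorted(set(seq))
    counts = Counter(zip(seq, seq[1:]))   # observed one-step transitions
    row_totals = Counter(seq[:-1])        # occurrences of each state as a source
    return {s: {t: counts[(s, t)] / row_totals[s] if row_totals[s] else 0.0
                for t in states}
            for s in states}

# Each row of the estimate is a probability distribution over successor states.
P = transition_matrix("ABABABBA")
```

The MaxEnt alternative discussed in the paper would instead pick the stochastic matrix of maximum entropy consistent with a smaller set of constraints, which pays off when the sample is short.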
Implementation of the Maximum Entropy Method for Analytic Continuation
Levy, Ryan; Gull, Emanuel
2016-01-01
We present $\texttt{Maxent}$, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail.
Implementation of the maximum entropy method for analytic continuation
Levy, Ryan; LeBlanc, J. P. F.; Gull, Emanuel
2017-06-01
We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv3 and extensively documented. This paper shows the use of the programs in detail.
Time-Reversal Acoustics and Maximum-Entropy Imaging
Berryman, J G
2001-08-22
Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
Full Text Available This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
Oil Monitoring Diagnostic Criteria Based on Maximum Entropy Principle
Huo Hua; Li Zhuguo; Xia Yanchun
2005-01-01
A method of applying the maximum entropy probability density estimation approach to constituting diagnostic criteria for oil monitoring data is presented. The method improves the precision of diagnostic criteria for evaluating the wear state of mechanical facilities and judging abnormal data. According to the critical boundary points defined, a new measure for monitoring wear state and identifying probable wear faults is obtained. The method can be applied to spectrometric analysis and direct-reading ferrographic analysis. On the basis of the analysis and discussion of two examples of 8NVD48A-2U diesel engines, the method is shown to be practical and effective in oil monitoring.
A Clustering Method Based on the Maximum Entropy Principle
Edwin Aldana-Bobadilla
2015-01-01
Full Text Available Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
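A minimal sketch of an entropy-based fitness that a genetic algorithm could optimize, in the spirit described above. The specific penalty term (within-cluster squared deviation over 1-D points) and the weight `lam` are illustrative assumptions, not the authors' formulation:

```python
import math

def partition_entropy(labels):
    """Shannon entropy of the cluster-size distribution of an assignment."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def fitness(points, labels, lam=1.0):
    """Entropy to maximize, minus a penalty when dissimilar elements share a cluster."""
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    penalty = 0.0
    for members in clusters.values():
        mean = sum(members) / len(members)
        penalty += sum((p - mean) ** 2 for p in members)
    return partition_entropy(labels) - lam * penalty / len(points)
```

A genetic algorithm would evolve candidate label vectors under this fitness, favoring high-entropy assignments whose clusters remain internally similar.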
Kawaguchi, K.; Egashira, Y.; Watanabe, G. [Mazda Motor Corp., Hiroshima (Japan)
1997-10-01
Vehicle and unit performance varies according not only to external causes, represented by the environment such as temperature or weather, but also to internal causes, such as dispersion of component characteristics and manufacturing processes or age-related deterioration. We developed a design method to estimate such performance distributions with the maximum entropy method and to calculate specifications with high performance robustness using fuzzy theory. This paper describes the details of these methods and examples applied to a power window system. 3 refs., 7 figs., 4 tabs.
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
Stimulus-dependent maximum entropy models of neural population codes.
Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
Promoter recognition based on the maximum entropy hidden Markov model.
Zhao, Xiao-yu; Zhang, Jin; Chen, Yuan-yuan; Li, Qiang; Yang, Tao; Pian, Cong; Zhang, Liang-yun
2014-08-01
Since the rapid development of genome sequencing has produced large-scale data, the current work uses bioinformatics methods to recognize different gene regions, such as exons, introns and promoters, which play an important role in gene regulation. In this paper, we introduce a new method based on the maximum entropy Markov model (MEMM) to recognize promoters, which utilizes the biological features of the promoter as the condition. However, it leads to a high false positive rate (FPR). In order to reduce the FPR, we provide another new method based on the maximum entropy hidden Markov model (ME-HMM) without the independence assumption, which can also accommodate the biological features effectively. To demonstrate the precision, the new methods are implemented in the R language, and the hidden Markov model (HMM) is introduced for comparison. The experimental results show that the new methods not only overcome the shortcomings of HMM but also have their own advantages: MEMM is excellent for identifying conserved signals, and ME-HMM demonstrably improves the true positive rate.
Maximum Entropy Estimation of n-Year Extreme Waveheights
徐德伦; 张军; 郑桂珍
2004-01-01
A new method for estimating the n (50 or 100)-year return-period waveheight, namely, the extreme waveheight expected to occur in n years, is presented on the basis of the maximum entropy principle. The main points of the method are as follows: (1) based on the Hamiltonian principle, a maximum entropy probability density function for the extreme waveheight H, $f(H) = \alpha H^{\gamma} e^{-\beta H^{4}}$, is derived from a Lagrangian function subject to some necessary and rational constraints; (2) the parameters $\alpha$, $\beta$ and $\gamma$ in the function are expressed in terms of the mean $\bar{H}$, variance $V = \overline{(H - \bar{H})^{2}}$ and bias $B = \overline{(H - \bar{H})^{3}}$; and (3) with $\bar{H}$, V and B estimated from observed data, the n-year return-period waveheight $H_n$ is computed in accordance with the formula $1/[1 - F(H_n)] = n$, where $F(H_n)$ is defined as $F(H_n) = \int_{0}^{H_n} f(H)\,dH$. Examples of estimating the 50- and 100-year return-period waveheights by the present method and by some currently used methods from observed data acquired at two hydrographic stations are given. A comparison of the estimated results shows that the present method is superior to the others.
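Step (3) above can be sketched numerically: fix illustrative values of beta and gamma (in practice these come from the moment constraints), normalize alpha, and solve 1/(1 − F(H_n)) = n by bisection. The parameter values below are placeholders, not fitted to any station data:

```python
import math

def density(h, alpha, beta, gamma):
    """Maximum entropy density f(H) = alpha * H**gamma * exp(-beta * H**4)."""
    return alpha * h ** gamma * math.exp(-beta * h ** 4)

def cdf(h, alpha, beta, gamma, steps=2000):
    """F(H): integral of f from 0 to H by the trapezoidal rule."""
    dx = h / steps
    ys = [density(i * dx, alpha, beta, gamma) for i in range(steps + 1)]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Illustrative parameters: beta = 1, gamma = 2; alpha fixed so that F(infinity) = 1
# (the tail beyond H = 5 is negligible for these values).
beta, gamma, upper = 1.0, 2.0, 5.0
alpha = 1.0 / cdf(upper, 1.0, beta, gamma)

def return_height(n):
    """Solve 1/(1 - F(H_n)) = n, i.e. F(H_n) = 1 - 1/n, by bisection."""
    target, lo, hi = 1.0 - 1.0 / n, 0.0, upper
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cdf(mid, alpha, beta, gamma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h50, h100 = return_height(50), return_height(100)
```

As expected, the 100-year waveheight exceeds the 50-year one, since F is monotone and 1 − 1/n grows with n.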
Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings
Yan, Xiao-Yong
2014-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, k_max) and consequently also the sha...
Application of Maximum Entropy Deconvolution to $\gamma$-ray Skymaps
Raab, Susanne
2015-01-01
Skymaps measured with imaging atmospheric Cherenkov telescopes (IACTs) represent the real source distribution convolved with the point spread function of the observing instrument. Current IACTs have an angular resolution on the order of 0.1$^\circ$, which is rather large for the study of morphological structures and for comparing the morphology in $\gamma$-rays to measurements at other wavelengths where the instruments have better angular resolutions. Fortunately, it is possible to approximate the underlying true source distribution by applying a deconvolution algorithm to the observed skymap, thus effectively improving the instrument's angular resolution. Of the multitude of existing deconvolution algorithms, several are already used in astronomy, but in the special case of $\gamma$-ray astronomy most of these algorithms are challenged by the high noise level within the measured data. One promising algorithm for application to $\gamma$-ray data is the Maximum Entropy Algorithm. The advantages of th...
Venus atmosphere profile from a maximum entropy principle
L. N. Epele
2007-10-01
Full Text Available The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases but with temperature-dependent specific heats. In so doing, an extended and non-standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one, up to about 35 km, is adiabatic; a transition layer located at the height of the cloud deck; and finally a third region which is practically isothermal.
Conjugate variables in continuous maximum-entropy inference.
Davis, Sergio; Gutiérrez, Gonzalo
2012-11-01
For a continuous maximum-entropy distribution (obtained from an arbitrary number of simultaneous constraints), we derive a general relation connecting the Lagrange multipliers and the expectation values of certain particularly constructed functions of the states of the system. From this relation, an estimator for a given Lagrange multiplier can be constructed from derivatives of the corresponding constraining function. These estimators sometimes lead to the determination of the Lagrange multipliers by way of solving a linear system, and, in general, they provide another tool to widen the applicability of Jaynes's formalism. This general relation, especially well suited for computer simulation techniques, also provides some insight into the interpretation of the hypervirial relations known in statistical mechanics and the recently derived microcanonical dynamical temperature. We illustrate the usefulness of these new relations with several applications in statistics.
Test images for the maximum entropy image restoration method
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
A strong test of the maximum entropy theory of ecology.
Xiao, Xiao; McGlinn, Daniel J; White, Ethan P
2015-03-01
The maximum entropy theory of ecology (METE) is a unified theory of biodiversity that predicts a large number of macroecological patterns using information on only species richness, total abundance, and total metabolic rate of the community. We evaluated four major predictions of METE simultaneously at an unprecedented scale using data from 60 globally distributed forest communities including more than 300,000 individuals and nearly 2,000 species. METE successfully captured 96% and 89% of the variation in the rank distribution of species abundance and individual size but performed poorly when characterizing the size-density relationship and intraspecific distribution of individual size. Specifically, METE predicted a negative correlation between size and species abundance, which is weak in natural communities. By evaluating multiple predictions with large quantities of data, our study not only identifies a mismatch between abundance and body size in METE but also demonstrates the importance of conducting strong tests of ecological theories.
A Maximum-Entropy Method for Estimating the Spectrum
Anonymous
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form of $\tilde{S}(\omega) = (a/8)\,\bar{H}^{2}\,(2\pi)^{d+1}\,\omega^{-(d+2)}\exp[-b(2\pi/\omega)^{n}]$, by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe the wave spectra even in extreme wave conditions and is superior to the periodogram method, which is not suitable for processing comparatively short or strongly unsteady signals because of its severe boundary effect and some inherent defects of the FFT. The newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily proved.
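The derived spectral form can be checked numerically. Differentiating ln S with respect to ω gives a closed-form peak frequency ω_p = 2π(bn/(d+2))^(1/n), which a grid search over the spectrum reproduces; the parameter values below (a, H̄, b, d, n) are illustrative placeholders, not fitted values:

```python
import math

def spectrum(w, a=8.0, hbar=1.0, b=1.0, d=0.0, n=2.0):
    """S(w) = (a/8) * Hbar^2 * (2*pi)^(d+1) * w^-(d+2) * exp(-b*(2*pi/w)^n)."""
    return (a / 8.0) * hbar ** 2 * (2 * math.pi) ** (d + 1) \
        * w ** -(d + 2) * math.exp(-b * (2 * math.pi / w) ** n)

# Analytic peak from dS/dw = 0: w_p = 2*pi * (b*n/(d+2))**(1/n)
b, d, n = 1.0, 0.0, 2.0
w_peak = 2 * math.pi * (b * n / (d + 2)) ** (1.0 / n)

# Numerical check: brute-force maximum over a frequency grid
grid = [0.01 * i for i in range(1, 5000)]
w_num = max(grid, key=spectrum)
```

For these placeholder values the analytic peak is ω_p = 2π, and the grid maximum lands within one grid step of it.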
Adaptive edge image enhancement based on maximum fuzzy entropy
ZHANG Xiu-hua; YANG Kun-tao
2006-01-01
Based on the maximum fuzzy entropy principle, the edge image with low contrast is optimally classified into two classes adaptively, under the conditions of probability partition and fuzzy partition. The optimal threshold is used as the classification threshold, and a local parametric gray-level transformation is applied to the obtained classes. By means of two representative parameters, the homogeneity of the regions in the edge image is improved. It is shown through simulations on a set of test images that the proposed technique possesses excellent performance in homogeneity, and that the extracted and enhanced edges provide an efficient edge representation of images.
Maximum Entropy and Probability Kinematics Constrained by Conditionals
Stefan Lukits
2015-03-01
Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey’s updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
Triadic Conceptual Structure of the Maximum Entropy Approach to Evolution
Herrmann-Pillath, Carsten
2010-01-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution. Following recent contributions to the naturalization of Peircean semiosis, we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference device...
LIBOR troubles: Anomalous movements detection based on maximum entropy
Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria
2016-05-01
According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Maximum entropy, word-frequency, Chinese characters, and multiple meanings.
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k(max)). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, k(max)) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, k(max)), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon-model for texts and the present results are discussed.
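The three a priori values (M, N, k_max) that fully determine the RGF-prediction can be extracted from any text in a few lines; the whitespace tokenization below is a simplification (real corpora need proper segmentation, especially for Chinese characters):

```python
from collections import Counter

def rgf_inputs(text):
    """Return (M, N, k_max): total word count, distinct word count,
    and the number of repetitions of the most common word."""
    words = text.lower().split()
    counts = Counter(words)
    return len(words), len(counts), counts.most_common(1)[0][1]

# Tiny illustrative text, not a real corpus
M, N, kmax = rgf_inputs("the cat and the dog and the bird")
```

For the toy text above, M = 8 tokens, N = 5 distinct words, and k_max = 3 (for "the"); taking a subset of a longer text changes all three values, and hence, as the abstract notes, the shape of the predicted distribution.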
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, which allows regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature, not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species’ distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks
Steven H. Waldrip
2017-02-01
Full Text Available We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
Louis de Grange
2010-09-01
Full Text Available Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
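As a toy illustration of the equivalence described above, the following sketch (all numbers invented, not from the paper) maximizes entropy over four alternatives subject to a single linear constraint and checks that the numerical solution has the multinomial logit (Gibbs) form:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical attribute values for 4 alternatives and an imposed mean constraint.
x = np.array([1.0, 2.0, 0.5, 3.0])
target_mean = 2.0

# Maximize entropy -sum(p log p) subject to sum(p) = 1 and E[x] = target_mean.
def neg_entropy(p):
    return float(np.sum(p * np.log(p)))

constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
               {"type": "eq", "fun": lambda p: p @ x - target_mean}]
res = minimize(neg_entropy, np.full(4, 0.25), constraints=constraints,
               bounds=[(1e-9, 1.0)] * 4)
p_maxent = res.x

# Duality: the solution must have the multinomial logit (Gibbs) form
# p_i = exp(b * x_i) / sum_j exp(b * x_j).  Recover b and verify.
b = np.log(p_maxent[1] / p_maxent[0]) / (x[1] - x[0])
p_logit = np.exp(b * x) / np.exp(b * x).sum()
assert np.allclose(p_maxent, p_logit, atol=1e-3)
```

The Lagrange multiplier `b` plays the role of the logit model's utility coefficient, which is what the dual analysis in the abstract interprets microeconomically.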
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
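A minimal numerical sketch of entropy regularization for a first-kind equation, with an invented smoothing kernel and a regularization parameter fixed by hand (the paper's a posteriori parameter choice is not implemented here):

```python
import numpy as np
from scipy.optimize import minimize

# Invented ill-conditioned first-kind problem K x = y with noisy data y.
rng = np.random.default_rng(0)
n = 20
t = np.linspace(0.0, 1.0, n)
K = np.exp(-5.0 * np.abs(t[:, None] - t[None, :]))   # smooth (ill-conditioned) kernel
x_true = np.exp(-((t - 0.5) ** 2) / 0.02)
x_true /= x_true.sum()
y = K @ x_true + 1e-3 * rng.standard_normal(n)

alpha = 1e-3   # regularization parameter, fixed by hand for this sketch

def objective(xv):
    # Least-squares misfit plus the negative-entropy term x log x as regularizer;
    # the entropy term keeps the regularized solution strictly positive.
    return 0.5 * np.sum((K @ xv - y) ** 2) + alpha * np.sum(xv * np.log(xv))

res = minimize(objective, np.full(n, 1.0 / n), bounds=[(1e-10, None)] * n)
x_maxent = res.x
assert np.all(x_maxent > 0)                      # positivity is preserved
assert np.linalg.norm(K @ x_maxent - y) < 0.2    # data are adequately fitted
```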
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent at the condition of fixed heat input.
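The standard closed forms involved can be checked directly; the reservoir temperatures and heat input below are arbitrary illustrative values:

```python
import math

# Endoreversible Curzon-Ahlborn engine between reservoirs Th and Tc:
# efficiency at maximum power is 1 - sqrt(Tc/Th), below the Carnot limit.
Th, Tc = 600.0, 300.0
eta_carnot = 1.0 - Tc / Th
eta_ca = 1.0 - math.sqrt(Tc / Th)
assert eta_ca < eta_carnot

# For fixed heat input Qh, the reservoirs' entropy change per cycle is
# sigma(eta) = Qh * (1 - eta) / Tc - Qh / Th, a decreasing function of eta:
# reducing entropy production is the same as raising thermal efficiency.
Qh = 1000.0
def entropy_production(eta):
    return Qh * (1.0 - eta) / Tc - Qh / Th

assert entropy_production(eta_ca) < entropy_production(0.2)
assert abs(entropy_production(eta_carnot)) < 1e-9   # zero only at the Carnot limit
```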
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
[Anonymous]
2007-01-01
New distributions for the statistics of wave groups, based on the maximum entropy principle, are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
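The FFT filtering idea for extracting a wave envelope can be sketched as follows, on a synthetic two-component wave record (not the paper's laboratory data); the one-sided spectrum weighting is the standard discrete Hilbert-transform construction:

```python
import numpy as np

# Synthetic wave record: two nearby frequencies produce wave groups (beats).
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
record = np.sin(2 * np.pi * 1.0 * t) + 0.8 * np.sin(2 * np.pi * 1.1 * t)

# FFT filtering: zero the negative frequencies (discrete Hilbert transform)
# to form the analytic signal, whose magnitude is the wave envelope.
n = len(record)
spec = np.fft.fft(record)
weight = np.zeros(n)
weight[0] = 1.0
weight[1:n // 2] = 2.0
weight[n // 2] = 1.0          # n is even here
envelope = np.abs(np.fft.ifft(spec * weight))

# The envelope bounds the record and swings between |1 - 0.8| and 1 + 0.8
# at the group (beat) frequency of 0.1 Hz.
assert np.all(envelope >= np.abs(record) - 1e-9)
assert abs(envelope.max() - 1.8) < 1e-3
assert abs(envelope.min() - 0.2) < 1e-3
```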
A maximum entropy theorem with applications to the measurement of biodiversity
Leinster, Tom
2009-01-01
This is a preliminary article stating and proving a new maximum entropy theorem. The entropies that we consider can be used as measures of biodiversity. In that context, the question is: for a given collection of species, which frequency distribution(s) maximize the diversity? The theorem provides the answer. The chief surprise is that although we are dealing not just with a single entropy, but a one-parameter family of entropies, there is a single distribution maximizing all of them simultaneously.
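In the simplest setting, where species are regarded as completely dissimilar, the diversity measures of every order (Hill numbers) are maximized simultaneously by the uniform distribution; the sketch below illustrates that phenomenon numerically and is not a statement of Leinster's general theorem:

```python
import numpy as np

def hill_number(p, q):
    """Diversity of order q: the exponential of the Renyi entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-9:                      # q -> 1 limit: exp(Shannon entropy)
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

uniform = np.full(4, 0.25)
skewed = np.array([0.7, 0.1, 0.1, 0.1])

# One distribution (here the uniform one) maximizes the diversity of every
# order q simultaneously.
for q in [0.0, 0.5, 1.0, 2.0, 5.0]:
    assert hill_number(uniform, q) >= hill_number(skewed, q)
```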
Local image statistics: maximum-entropy constructions and perceptual salience.
Victor, Jonathan D; Conte, Mary M
2012-07-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics (including luminance distributions, pair-wise correlations, and higher-order correlations) are explicitly specified and all other statistics are determined implicitly by maximum entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions.
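A minimal sketch of the construction idea: if only a single first-order statistic (a luminance bias of binary pixels, called gamma here) is specified, the maximum-entropy image is simply i.i.d. biased coin flips, with all higher-order statistics left at their most random values. The function below is illustrative, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def maxent_binary_texture(shape, gamma):
    """Binary (+1/-1) image whose mean pixel value is gamma; all other
    statistics are left maximally random, i.e. pixels are i.i.d."""
    p_white = (1.0 + gamma) / 2.0
    return np.where(rng.random(shape) < p_white, 1, -1)

img = maxent_binary_texture((256, 256), gamma=0.2)

# The specified first-order statistic is (approximately) matched...
assert abs(img.mean() - 0.2) < 0.02
# ...while the unconstrained pairwise (horizontal neighbor) correlation
# stays near zero, as maximum entropy dictates.
corr = np.mean(img[:, :-1] * img[:, 1:]) - img.mean() ** 2
assert abs(corr) < 0.03
```

Specifying pairwise or higher-order correlations requires the more elaborate constructions described in the paper; this sketch covers only the first-order case.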
Inferential permutation tests for maximum entropy models in ecology.
Shipley, Bill
2010-09-01
Maximum entropy (maxent) models assign probabilities to states that (1) agree with measured macroscopic constraints on attributes of the states and (2) are otherwise maximally uninformative and are thus as close as possible to a specified prior distribution. Such models have recently become popular in ecology, but classical inferential statistical tests require assumptions of independence during the allocation of entities to states that are rarely fulfilled in ecology. This paper describes a new permutation test for such maxent models that is appropriate for very general prior distributions and for cases in which many states have zero abundance and that can be used to test for conditional relevance of subsets of constraints. Simulations show that the test gives correct probability estimates under the null hypothesis. Power under the alternative hypothesis depends primarily on the number and strength of the constraints and on the number of states in the model; the number of empty states has only a small effect on power. The test is illustrated using two empirical data sets to test the community assembly model of B. Shipley, D. Vile, and E. Garnier and the species abundance distribution models of S. Pueyo, F. He, and T. Zillio.
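The general shape of such a permutation test can be sketched as follows, with a generic correlation statistic on invented data (not Shipley's maxent-specific test statistic):

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_pvalue(observed_stat, stat_fn, values, n_perm=999):
    """Permutation p-value: the fraction of reshuffled datasets whose test
    statistic is at least as extreme as the observed one (the observed
    arrangement itself is included in the count)."""
    count = 1
    for _ in range(n_perm):
        if stat_fn(rng.permutation(values)) >= observed_stat:
            count += 1
    return count / (n_perm + 1)

# Toy question: is a trait associated with abundance across states?
trait = np.arange(10, dtype=float)
abundance = 2.0 * trait + rng.normal(0.0, 1.0, 10)   # genuine association
stat = lambda a: abs(np.corrcoef(trait, a)[0, 1])
p_value = permutation_pvalue(stat(abundance), stat, abundance)
assert p_value < 0.05
```

The point of the paper's construction is choosing a permutation scheme valid for maxent allocations with general priors and empty states; the shuffling above is only the generic template.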
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
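The pole-frequency correspondence can be illustrated directly: fitting an AR(2) model to a pure sinusoid places the conjugate pole pair on the unit circle at the sinusoid's frequency. The least-squares fit below is a simple stand-in for a maximum-entropy (Burg) estimator:

```python
import numpy as np

# A pure sinusoid sampled at rate fs; the AR(2) pole pair should sit on the
# unit circle at angle 2*pi*f/fs, per Prony's relation.
fs, f = 100.0, 5.0
t = np.arange(500) / fs
x = np.sin(2 * np.pi * f * t)

# Least-squares AR(2) fit: x[k] = a1 * x[k-1] + a2 * x[k-2].
X = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Poles are the roots of z^2 - a1*z - a2.
poles = np.roots([1.0, -a1, -a2])
freq_est = abs(np.angle(poles[0])) * fs / (2 * np.pi)
assert abs(freq_est - f) < 1e-6          # peak position comes from the pole angle
assert abs(abs(poles[0]) - 1.0) < 1e-6   # unit radius: a pure (undamped) sinusoid
```

Poles inside the unit circle correspond to damped (real-exponential-modulated) harmonics, which is the general case the abstract describes.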
Application of the maximum entropy method to profile analysis
Armstrong, N.; Kalceff, W. [University of Technology, Department of Applied Physics, Sydney, NSW (Australia); Cline, J.P. [National Institute of Standards and Technology, Gaithersburg, (United States)
1999-12-01
Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc.
Improved Maximum Entropy Method with an Extended Search Space
Rothkopf, Alexander
2012-01-01
We report on an improvement to the implementation of the Maximum Entropy Method (MEM). It amounts to departing from the search space obtained through a singular value decomposition (SVD) of the Kernel. Based on the shape of the SVD basis functions we argue that the MEM spectrum for given $N_\tau$ data-points $D(\tau)$ and prior information $m(\omega)$ does not in general lie in this $N_\tau$ dimensional singular subspace. Systematically extending the search basis will eventually recover the full search space and the correct extremum. We illustrate this idea through a mock data analysis inspired by actual lattice spectra, to show where our improvement becomes essential for the success of the MEM. To remedy the shortcomings of Bryan's SVD prescription we propose to use the real Fourier basis, which consists of trigonometric functions. Not only does our approach lead to more stable numerical behavior, as the SVD is not required for the determination of the basis functions, but also the resolution of the MEM beco...
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen; Hui He; Jun Guo
2008-01-01
Document subjectivity analysis has become an important aspect of web text content mining. This problem is similar to traditional text categorization, thus many related classification techniques can be adapted here. However, there is one significant difference: more language or semantic information is required for better estimating the subjectivity of a document. Therefore, in this paper, our focus is mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance-windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors respectively. Detailed experiments given in this paper show that with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
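A MaxEnt classifier with a Gaussian prior on the parameters is equivalent to logistic regression with an L2 penalty; the sketch below (synthetic features, plain gradient descent) illustrates that formulation rather than the authors' feature-selection pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic binary classification data (stand-ins for document feature vectors).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = (X @ w_true + rng.normal(0.0, 0.5, n) > 0).astype(float)

def fit_maxent_gaussian_prior(X, y, sigma2=1.0, lr=0.1, steps=2000):
    """MaxEnt (logistic) model with a Gaussian prior on the weights,
    i.e. L2-regularized logistic regression, fit by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + w / (sigma2 * len(y))  # prior term
        w -= lr * grad
    return w

w = fit_maxent_gaussian_prior(X, y)
train_acc = np.mean((X @ w > 0) == (y > 0.5))
assert train_acc > 0.85
```

An exponential (Laplace-like) prior would instead add an L1-type penalty, which is the other variant the paper compares.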
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Maximum Relative Entropy Updating and the Value of Learning
Patryk Dziurosz-Serafinowicz
2015-03-01
Full Text Available We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by looking at the value of learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which learning rules by MRE could satisfy this requirement and so could be a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one's degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one's conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one's prior degrees of belief might not be equal to the expectation of one's posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10^4-10^7 years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topographic-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling finite thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.
Ford, Ian J
2015-11-01
The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (e.g., a 3D experiment in one-ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
Vector entropy imaging theory with application to computerized tomography
Wang, Yuanmei; Cheng, Jianping; Heng, Pheng Ann
2002-07-01
Medical imaging theory for x-ray CT and PET is based on image reconstruction from projections. In this paper a novel vector entropy imaging theory under the framework of multiple criteria decision making is presented. We also study the most frequently used image reconstruction methods, namely, least square, maximum entropy, and filtered back-projection methods under the framework of single performance criterion optimization. Finally, we introduce some of the results obtained by various reconstruction algorithms using computer-generated noisy projection data from the Hoffman phantom and real CT scanner data. Comparison of the reconstructed images indicates that the vector entropy method gives the best results in terms of error (difference between the original phantom data and the reconstruction), smoothness (suppression of noise) and grey-value resolution, and is free of ghost images.
Yudong Zhang
2011-04-01
Full Text Available This paper proposes a global multi-level thresholding method for image segmentation. As a criterion, the traditional method uses the Shannon entropy, originating from information theory, considering the gray-level image histogram as a probability distribution, whereas we apply the Tsallis entropy as a generalized information-theoretic entropy formalism. For the algorithm, we used the artificial bee colony approach, since execution of an exhaustive algorithm would be too time-consuming. The experiments demonstrate that: (1) the Tsallis entropy is superior to traditional maximum entropy thresholding, maximum between-class variance thresholding, and minimum cross entropy thresholding; (2) the artificial bee colony is more rapid than either the genetic algorithm or particle swarm optimization. Therefore, our approach is effective and rapid.
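For a single threshold, the Tsallis-entropy criterion can be evaluated exhaustively (the bee-colony search is only needed for the multi-level case). A sketch on an invented bimodal histogram, using the standard pseudo-additive form of the criterion:

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Gray-level threshold maximizing the pseudo-additive sum of the Tsallis
    entropies of the background and foreground classes (exhaustive search)."""
    p = hist / hist.sum()
    best_t, best_s = None, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0.0 or pb == 0.0:
            continue
        sa = (1.0 - np.sum((p[:t] / pa) ** q)) / (q - 1.0)
        sb = (1.0 - np.sum((p[t:] / pb) ** q)) / (q - 1.0)
        s = sa + sb + (1.0 - q) * sa * sb       # Tsallis pseudo-additivity
        if s > best_s:
            best_t, best_s = t, s
    return best_t

# Invented bimodal histogram: two well-separated gray-level populations.
hist = np.zeros(256)
hist[40:60] = 100.0
hist[180:200] = 100.0
threshold = tsallis_threshold(hist)
assert 60 <= threshold <= 180   # the threshold falls in the valley between modes
```

In the limit q → 1 the criterion reduces to the classical maximum-entropy (Kapur) threshold.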
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Maximum entropy method applied to deblurring images on a MasPar MP-1 computer
Bonavito, N. L.; Dorband, John; Busse, Tim
1991-01-01
A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
Acoustic space dimensionality selection and combination using the maximum entropy principle
Abdel-Haleem, Yasser H.; Renals, Steve; Lawrence, Neil D.
2004-01-01
In this paper we propose a discriminative approach to acoustic space dimensionality selection based on maximum entropy modelling. We form a set of constraints by composing the acoustic space with the space of phone classes, and use a continuous feature formulation of maximum entropy modelling to select an optimal feature set. The suggested approach has two steps: (1) the selection of the best acoustic space that efficiently and economically represents the acoustic data and its variability;...
Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe
2016-01-01
Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...
A Load Balancing Algorithm Based on Maximum Entropy Methods in Homogeneous Clusters
Long Chen
2014-10-01
Full Text Available In order to solve the problems of ill-balanced task allocation, long response time, low throughput and poor performance when a cluster system assigns tasks, we introduce the thermodynamic concept of entropy into load balancing algorithms. This paper proposes a new load balancing algorithm for homogeneous clusters based on the Maximum Entropy Method (MEM). By calculating the entropy of the system and using the maximum entropy principle to ensure that each scheduling and migration step follows the increasing tendency of the entropy, the system can reach the load-balanced state as soon as possible, shorten task execution time and achieve high performance. Simulation results show that, compared with traditional algorithms, this algorithm is superior in both the speed and the degree of load balancing achieved in a homogeneous cluster system. It also provides a novel way of thinking about solutions to the load balancing problem in homogeneous cluster systems.
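The core idea, scheduling moves that follow the increasing tendency of the entropy, can be sketched with a toy greedy migration step (an illustration, not the paper's algorithm):

```python
import numpy as np

def load_entropy(loads):
    """Shannon entropy of the normalized load shares; it is maximal when
    every node carries the same load."""
    p = np.asarray(loads, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def migrate_one_task(loads):
    """One scheduling step: move a task from the busiest to the idlest node."""
    loads = list(loads)
    src = loads.index(max(loads))
    dst = loads.index(min(loads))
    loads[src] -= 1
    loads[dst] += 1
    return loads

loads = [9, 1, 1, 1]
for _ in range(4):
    new_loads = migrate_one_task(loads)
    # Each migration follows the increasing tendency of the entropy.
    assert load_entropy(new_loads) >= load_entropy(loads)
    loads = new_loads
```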
Maximum entropy analysis of flow and reaction networks
Niven, Robert K.; Abel, Markus; Schlegel, Michael; Waldrip, Steven H.
2015-01-01
We present a generalised MaxEnt method to infer the stationary state of a flow network, subject to "observable" constraints on expectations of various parameters, as well as "physical" constraints arising from frictional properties (resistance functions) and conservation laws (Kirchhoff laws). The method invokes an entropy defined over all uncertainties in the system, in this case the internal and external flow rates and potential differences. The proposed MaxEnt framework is readily extendable to the analysis of networks with uncertainty in the network structure itself.
Determining Dynamical Path Distributions using Maximum Relative Entropy
2015-05-31
θ). The selected joint posterior $P_{new}(x,\theta)$ is that which maximizes the relative entropy $S[P, P_{old}] = -\int P(x,\theta)\,\log\frac{P(x,\theta)}{P_{old}(x,\theta)}\,dx\,d\theta$, subject to the appropriate constraints (parameters can be discrete as well). $P_{old}(x,\theta)$ contains our prior information, which we call the joint prior. To be explicit, $P_{old}(x,\theta) = P_{old}(x)\,P_{old}(\theta|x)$, where $P_{old}(x)$ is the traditional Bayesian prior and $P_{old}(\theta|x)$ is the likelihood. It is important to
ZHANG Hong-lie; ZHANG Guo-yin; YAO Ai-hong
2010-01-01
This paper presents an algorithm that combines the chaos optimization algorithm with the maximum entropy method (COA-ME), using an entropy model based on the chaos algorithm in which maximum entropy serves as the second method of searching for good solutions. The search direction is improved by the chaos optimization algorithm, which also realizes selective acceptance of wrong solutions. Experimental results show that the presented algorithm can be used in hardware/software partitioning of reconfigurable systems. It effectively reduces the local extremum problem, and both the search speed and the partitioning performance are improved.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
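The moment-constrained maximum entropy density estimates discussed above can be illustrated with a minimal sketch (not the paper's Bayesian field theory machinery): a discretized maxent fit matching the first two sample moments, solved through the convex dual over Lagrange multipliers. The grid, optimizer, and feature choice are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Discretized maximum entropy density estimation on a grid: find p(x)
# maximizing entropy subject to matching the sample mean and second moment.
# The dual problem optimizes Lagrange multipliers lam for the
# exponential-family form p(x) ∝ exp(lam[0]*x + lam[1]*x**2).
rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=2000)
x = np.linspace(-2, 4, 400)
dx = x[1] - x[0]
features = np.stack([x, x**2])               # moment functions
targets = np.array([data.mean(), (data**2).mean()])

def dual(lam):
    # log-partition function minus lam·targets; its minimum is the maxent fit
    logZ = np.log(np.sum(np.exp(lam @ features)) * dx)
    return logZ - lam @ targets

res = minimize(dual, x0=np.zeros(2), method="Nelder-Mead")
p = np.exp(res.x @ features)
p /= p.sum() * dx                             # normalized maxent density
model_mean = np.sum(x * p) * dx               # matches the sample mean
```

With mean and second-moment constraints the fit recovers a Gaussian shape, consistent with the infinite-smoothness limit described in the abstract.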
Maximum Entropy Production vs. Kolmogorov-Sinai Entropy in a Constrained ASEP Model
Martin Mihelich
2014-02-01
Full Text Available The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy model of a non-equilibrium system, and much effort has been made in the past decades to compute its statistics exactly for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rates to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide with good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and a variant with Langmuir kinetics.
A maximum entropy framework for non-exponential distributions
Peterson, Jack; Dill, Ken A
2015-01-01
Probability distributions having power-law tails are observed in a broad range of social, economic, and biological systems. We describe here a potentially useful common framework. We derive distribution functions $\\{p_k\\}$ for situations in which a `joiner particle' $k$ pays some form of price to enter a `community' of size $k-1$, where costs are subject to economies-of-scale (EOS). Maximizing the Boltzmann-Gibbs-Shannon entropy subject to this energy-like constraint predicts a distribution having a power-law tail; it reduces to the Boltzmann distribution in the absence of EOS. We show that the predicted function gives excellent fits to 13 different distribution functions, ranging from friendship links in social networks, to protein-protein interactions, to the severity of terrorist attacks. This approach may give useful insights into when to expect power-law distributions in the natural and social sciences.
General proof of (maximum) entropy principle in Lovelock gravity
Cao, Li-Ming
2014-01-01
We consider a static self-gravitating perfect fluid system in Lovelock gravity theory. For a spatial region on the hypersurface orthogonal to the static Killing vector, using Tolman's law of temperature, the assumption of a fixed total particle number inside the spatial region, and variations (of the relevant fields) in which the induced metric and its first derivatives are fixed on the boundary of the region, then with the help of the gravitational equations of the theory, we prove a theorem stating that the total entropy of the fluid in this region takes an extremum value. A converse theorem can also be obtained by following the reverse of our proof.
General proof of (maximum) entropy principle in Lovelock gravity
Cao, Li-Ming; Xu, Jianfei
2015-02-01
We consider a static self-gravitating perfect fluid system in Lovelock gravity theory. For a spatial region on the hypersurface orthogonal to the static Killing vector, using Tolman's law of temperature, the assumption of a fixed total particle number inside the spatial region, and variations (of the relevant fields) in which the induced metric and its first derivatives are fixed on the boundary of the region, then with the help of the gravitational and fluid equations of the theory, we prove a theorem stating that the total entropy of the fluid in this region takes an extremum value. A converse theorem can also be obtained by following the reverse of our proof. We also propose a quasilocal definition of isolation for the system and explain the physical meaning of the boundary conditions in the proofs of the theorems.
The Second Law Today: Using Maximum-Minimum Entropy Generation
Umberto Lucia
2015-11-01
Full Text Available There are a great number of thermodynamic schools, independent of each other and lacking a powerful general approach, with a split concerning non-equilibrium thermodynamics. In 1912, in relation to stationary non-equilibrium states, Ehrenfest introduced the fundamental question of the existence of a functional that achieves its extreme value for stable states, as entropy does for stationary states in equilibrium thermodynamics. Today, the new frontier branches of science and engineering, from power engineering to environmental sciences, from chaos to complex systems, from life sciences to nanosciences, etc., require a unified approach in order to optimize results and obtain a powerful approach to non-equilibrium thermodynamics and open systems. In this paper, a generalization of the Gouy–Stodola approach is suggested as a possible answer to the Ehrenfest question.
Night vision image fusion for target detection with improved 2D maximum entropy segmentation
Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi
2013-08-01
Infrared and low-light-level (LLL) images are used for night-vision target detection. Given the characteristics of night-vision imaging and the shortcomings of traditional detection algorithms in segmenting and extracting targets, we propose a method that fuses infrared and LLL images for target detection using improved 2D maximum entropy segmentation. First, the two-dimensional histogram is improved using the gray level and the maximum gray level in a weighted area, and weights are selected to calculate the maximum entropy for segmenting the infrared and LLL images with this histogram. Compared with traditional maximum entropy segmentation, the algorithm shows a significant improvement in target detection, background suppression, and target extraction. We then verify the validity of a multi-dimensional-characteristics AND operation on the infrared and LLL images for feature-level fusion in target detection. Experimental results show that the detection algorithm performs well for single- and multiple-target detection in complex backgrounds.
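As a hedged illustration of entropy-based threshold selection, the classical one-dimensional Kapur criterion (simpler than the paper's improved 2D version) picks the gray level that maximizes the combined entropy of the two resulting histogram segments:

```python
import numpy as np

def kapur_threshold(image, bins=256):
    """One-dimensional maximum-entropy (Kapur) threshold selection.

    Chooses the gray level t that maximizes the sum of the entropies
    of the background (< t) and foreground (>= t) histograms.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# synthetic bimodal "image": dark background (~40), bright target (~200)
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 10, 5000), rng.normal(200, 10, 1000)])
img = np.clip(img, 0, 255)
t = kapur_threshold(img)   # threshold falls between the two modes
```

The 2D version in the paper replaces the gray-level histogram with a joint histogram over gray level and a neighborhood statistic; the entropy-maximization step is analogous.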
An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant Covariance Matrices
Carli, Francesca P; Pavon, Michele; Picci, Giorgio
2011-01-01
This paper deals with maximum entropy completion of partially specified block-circulant matrices. Since positive definite symmetric circulants happen to be covariance matrices of stationary periodic processes, in particular of stationary reciprocal processes, this problem has applications in signal processing, in particular to image modeling. Maximum entropy completion is strictly related to maximum likelihood estimation subject to certain conditional independence constraints. The maximum entropy completion problem for block-circulant matrices is a nonlinear problem which has recently been solved by the authors, although leaving open the problem of an efficient computation of the solution. The main contribution of this paper is to provide an efficient algorithm for computing the solution. Simulation shows that our iterative scheme outperforms various existing approaches, especially for large dimensional problems. A necessary and sufficient condition for the existence of a positive definite circulant completio...
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language...maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely
Bennani, Youssef; Pronzato, Luc; Rendas, Maria João
2015-01-01
We estimate the density of a set of biophysical parameters from region-censored observations. We propose a new Maximum Entropy (maxent) estimator formulated as finding the most likely constrained maxent density. By using the Rényi entropy of order two instead of the Shannon entropy, we are led to a quadratic optimization problem with linear inequality constraints that has an efficient numerical solution. We compare the proposed estimator to the NPMLE and to the best-fitting maxent solutions on real data from hyperbaric diving, showing that the resulting distribution has better generalization performance than NPMLE or maxent alone.
Initial system-bath state via the maximum-entropy principle
Dai, Jibo; Len, Yink Loong; Ng, Hui Khoon
2016-11-01
The initial state of a system-bath composite is needed as the input for prediction from any quantum evolution equation to describe subsequent system-only reduced dynamics or the noise on the system from joint evolution of the system and the bath. The conventional wisdom is to write down an uncorrelated state as if the system and the bath were prepared in the absence of each other; yet, such a factorized state cannot be the exact description in the presence of system-bath interactions. Here, we show how to go beyond the simplistic factorized-state prescription using ideas from quantum tomography: We employ the maximum-entropy principle to deduce an initial system-bath state consistent with the available information. For the generic case of weak interactions, we obtain an explicit formula for the correction to the factorized state. Such a state turns out to have little correlation between the system and the bath, which we can quantify using our formula. This has implications, in particular, on the subject of subsequent non-completely positive dynamics of the system. Deviation from predictions based on such an almost uncorrelated state is indicative of accidental control of hidden degrees of freedom in the bath.
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method.
Roux, Benoît; Weare, Jonathan
2013-02-28
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method.
Maximum-Entropy Method for Evaluating the Slope Stability of Earth Dams
Shuai Wang
2012-10-01
Full Text Available Slope stability is a very important problem in geotechnical engineering. This paper presents an approach for slope reliability analysis based on the maximum-entropy method. The key idea is to implement the maximum entropy principle in estimating the probability density function. The performance function is formulated by the Simplified Bishop's method to estimate the slope failure probability. The maximum-entropy method is used to estimate the probability density function (PDF) of the performance function subject to the moment constraints. A numerical example is calculated and compared to Monte Carlo simulation (MCS) and the Advanced First Order Second Moment Method (AFOSM). The results show the accuracy and efficiency of the proposed method, which should be valuable for performing probabilistic analyses.
Test the Principle of Maximum Entropy in Constant Sum 2x2 Game: Evidence in Experimental Economics
Xu, Bin; Wang, Zhijian; Zhang, Jianbo
2011-01-01
Entropy serves as a central observable indicating uncertainty in many chemical, thermodynamical, biological, and ecological systems, and the principle of maximum entropy (MaxEnt) is widely supported in natural science. Recently, entropy has been employed to describe social systems in which human subjects interact with each other, but empirical support for the principle of maximum entropy has never been reported from this field. Using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant-sum $2 \times 2$ game. Empirical evidence shows that, in this competing game environment, the outcome of human decision-making obeys the principle of maximum entropy.
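A minimal sketch of the kind of measurement involved: the empirical Shannon entropy of observed strategy choices, compared against the theoretical maximum ln 2 for two actions. The data here are made up for illustration, not taken from the experiment:

```python
import math
from collections import Counter

def strategy_entropy(choices):
    """Empirical Shannon entropy (in nats) of observed strategy choices."""
    counts = Counter(choices)
    n = len(choices)
    return -sum(c / n * math.log(c / n) for c in counts.values())

# In a constant-sum 2x2 game, a uniform mixed strategy over the two
# actions attains the maximum entropy ln 2.
observed = ["L", "R", "R", "L", "L", "R", "R", "L"]
h = strategy_entropy(observed)      # 4/8 each -> exactly ln 2
h_max = math.log(2)
```

The MaxEnt hypothesis is supported when the observed entropy approaches `h_max` in the competing environment.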
Vallino, Joseph J
2010-05-12
We examine the application of the maximum entropy production principle for describing ecosystem biogeochemistry. Since ecosystems can be functionally stable despite changes in species composition, we use a distributed metabolic network for describing biogeochemistry, which synthesizes generic biological structures that catalyse reaction pathways, but is otherwise organism independent. Allocation of biological structure and regulation of biogeochemical reactions is determined via solution of an optimal control problem in which entropy production is maximized. However, because synthesis of biological structures cannot occur if entropy production is maximized instantaneously, we propose that information stored within the metagenome allows biological systems to maximize entropy production when averaged over time. This differs from abiotic systems that maximize entropy production at a point in space-time, which we refer to as the steepest descent pathway. It is the spatio-temporal averaging that allows biological systems to outperform abiotic processes in entropy production, at least in many situations. A simulation of a methanotrophic system is used to demonstrate the approach. We conclude with a brief discussion on the implications of viewing ecosystems as self-organizing molecular machines that function to maximize entropy production at the ecosystem level of organization.
Exact computation of the maximum-entropy potential of spiking neural-network models.
Cofré, R; Cessac, B
2014-05-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
Sob'yanin, Denis Nikolaevich
2012-06-01
A principle of hierarchical entropy maximization is proposed for generalized superstatistical systems, which are characterized by the existence of three levels of dynamics. If a generalized superstatistical system comprises a set of superstatistical subsystems, each made up of a set of cells, then the Boltzmann-Gibbs-Shannon entropy should be maximized first for each cell, second for each subsystem, and finally for the whole system. Hierarchical entropy maximization naturally reflects the sufficient time-scale separation between different dynamical levels and allows one to find the distribution of both the intensive parameter and the control parameter for the corresponding superstatistics. The hierarchical maximum entropy principle is applied to fluctuations of the photon Bose-Einstein condensate in a dye microcavity. This principle provides an alternative to the master equation approach recently applied to this problem. The possibility of constructing generalized superstatistics based on a statistics different from the Boltzmann-Gibbs statistics is pointed out.
Maximum entropy algorithm and its implementation for the neutral beam profile measurement
Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
A tomography algorithm that maximizes the entropy of the image using the Lagrangian multiplier technique and the conjugate gradient method has been designed for measuring the 2D spatial distribution of intense neutral beams for the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed, and a numerical simulation was implemented to test the reconstruction quality of given beam profiles. The algorithm has good applicability for sparse projection data and thus can be used for neutral beam tomography. 8 refs., 3 figs. (Author)
Tonini, A.; Pede, V.
2011-01-01
In this paper, a stochastic frontier model accounting for spatial dependency is developed using generalized maximum entropy estimation. An application is made for measuring total factor productivity in European agriculture. The empirical results show that agricultural productivity growth in Europe i
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that increasing the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details of the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. In this case, the correct choice of peak shape function was critical to the sensitivity of maximum entropy in reconstructing these chromatograms.
Hong, Hunsop; Schonfeld, Dan
2008-06-01
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We, therefore, derive the MEEM algorithm by optimizing a lower-bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We propose to extend the use of the classical EM algorithm for image recovery from randomly sampled data and sensor field estimation from randomly scattered sensor networks. We further propose to use our approach in density estimation, image recovery and sensor field estimation. Computer simulation experiments are used to demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
Maximum-entropy parameter estimation for the k-NN modified value-difference kernel
Hendrickx, I.H.E.; van den Bosch, A.; Verbruggen, R.; Taatgen, N.; Schomaker, L.
2004-01-01
We introduce an extension of the modified value-difference kernel of $k$-nn by replacing the kernel's default class distribution matrix with the matrix produced by the maximum-entropy learning algorithm. This hybrid algorithm is tested on fifteen machine learning benchmark tasks, comparing the hybri
Can the maximum entropy principle be explained as a consistency requirement?
Uffink, J.
2001-01-01
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathema
Maximum-Entropy Parameter Estimation for the k-nn Modified Value-Difference Kernel
Hendrickx, Iris; Bosch, Antal van den
2005-01-01
We introduce an extension of the modified value-difference kernel of k-nn by replacing the kernel's default class distribution matrix with the matrix produced by the maximum-entropy learning algorithm. This hybrid algorithm is tested on fifteen machine learning benchmark tasks, comparing the hybrid
Brus, D.J.; Bogaert, P.; Heuvelink, G.B.M.
2008-01-01
Bayesian Maximum Entropy was used to estimate the probabilities of occurrence of soil categories in the Netherlands, and to simulate realizations from the associated multi-point pdf. Besides the hard observations (H) of the categories at 8369 locations, the soil map of the Netherlands 1:50 000 was u
Maximum entropy as a consequence of Bayes' theorem in differentiable manifolds
Davis, Sergio
2015-01-01
Bayesian inference and the principle of maximum entropy (PME) are usually presented as separate but complementary branches of inference, the latter playing a central role in the foundations of Statistical Mechanics. In this work it is shown that the PME can be derived from Bayes' theorem and the divergence theorem for systems whose states can be mapped to points in a differentiable manifold. In this view, entropy must be interpreted as the invariant measure (non-informative prior) on the space of probability densities.
Self-assembled wiggling nano-structures and the principle of maximum entropy production.
Belkin, A; Hubler, A; Bezryadin, A
2015-02-09
While the behavior of equilibrium systems is well understood, the evolution of nonequilibrium ones is much less clear. Yet, many researchers have suggested that the principle of maximum entropy production is of key importance in complex systems away from equilibrium. Here, we present a quantitative study of large ensembles of carbon nanotubes suspended in a non-conducting, non-polar fluid subject to a strong electric field. Being driven out of equilibrium, the suspension spontaneously organizes into an electrically conducting state under a wide range of parameters. Such self-assembly allows the Joule heating, and therefore the entropy production in the fluid, to be maximized. Curiously, we find that the emerging self-assembled structures can start to wiggle. The wiggling continues only until the entropy production in the suspension reaches its maximum, at which time the wiggling stops and the structure becomes quasi-stable. Thus, we provide strong evidence that the maximum entropy production principle plays an essential role in the evolution of self-organizing systems far from equilibrium.
Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Ge Cheng
2016-12-01
Full Text Available Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only achieve a maximum prediction accuracy of 70.6% in our experiments.
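A maximum-entropy classifier for binary outcomes is equivalent to logistic regression. The following hedged sketch trains one on synthetic game-level features; the feature interpretation and data are illustrative, not the NBAME model's actual inputs:

```python
import numpy as np

# Synthetic binary-outcome data: three features per game (imagine point,
# rebound, and assist differentials) and a noisy linear decision rule.
rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true + rng.normal(0, 0.5, n) > 0).astype(float)  # 1 = home win

# Maximum-entropy / logistic model: batch gradient ascent on the
# log-likelihood, whose gradient is X^T (y - p).
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / n

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

The maximum-entropy view and the logistic-regression view coincide because the exponential-family distribution that maximizes entropy under feature-expectation constraints has exactly this log-linear form.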
Kleidon, Axel
2009-06-01
The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.
Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.
Richard R Stein
2015-07-01
Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
Maximum entropy deconvolution of the optical jet of 3C 273
Evans, I. N.; Ford, H. C.; Hui, X.
1989-01-01
The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
Applying rough sets in word segmentation disambiguation based on maximum entropy model
(author not listed)
2006-01-01
To solve the complicated feature extraction and long-distance dependency problems in Word Segmentation Disambiguation (WSD), this paper proposes applying rough sets to WSD based on the Maximum Entropy model. First, rough set theory is applied to extract complicated features and long-distance features, even from noisy or inconsistent corpora. Second, these features are added to the Maximum Entropy model, so that feature weights can be assigned according to the performance of the whole disambiguation model. Finally, a semantic lexicon is adopted to build class-based rough set features to overcome data sparseness. Experiments indicate that our method performs better than previous models, which achieved top rank in WSD in the 863 Evaluation in 2003. The system ranked first and second, respectively, in the MSR and PKU open tests at the Second International Chinese Word Segmentation Bakeoff held in 2005.
In-medium dispersion relations of charmonia studied by the maximum entropy method
Ikeda, Atsuro; Asakawa, Masayuki; Kitazawa, Masakiyo
2017-01-01
We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. We also find that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at T ≃1.6 Tc for all the channels we analyze.
On the maximum-entropy method for kinetic equation of radiation, particle and gas
El-Wakil, S.A. [Mansoura Univ. (Egypt). Phys. Dept.; Madkour, M.A. [Mansoura Univ. (Egypt). Phys. Dept.; Degheidy, A.R. [Mansoura Univ. (Egypt). Phys. Dept.; Machali, H.M. [Mansoura Univ. (Egypt). Phys. Dept.
1995-02-01
The maximum-entropy approach is used to solve some problems in radiative transfer and reactor physics, such as the escape probability, the emergent and transmitted intensities for a finite slab, and the emergent intensity for a semi-infinite medium. It is also employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), the neutron capture probability and the albedo problem. The technique is further employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are carried out and compared with published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones.
Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity
Ortiz, A; Puso, M A; Sukumar, N
2009-09-04
Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
In-medium dispersion relations of charmonia studied by maximum entropy method
Ikeda, Atsuro; Kitazawa, Masakiyo
2016-01-01
We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. We also find that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at T ≃ 1.6 Tc for all the channels we analyzed.
ZHUANG Huifu
2016-03-01
Generally, spatial-contextual information is used in change detection because there is significant speckle noise in synthetic aperture radar (SAR) images. In this paper, using the rich texture information of SAR images, an unsupervised change detection approach for high-resolution SAR images based on a texture feature vector and the maximum entropy principle is proposed. The difference image is generated from the 32-dimensional texture feature vector of the gray-level co-occurrence matrix (GLCM), and the threshold is obtained automatically by the maximum entropy principle. In this method, the appropriate window size for change detection is 11×11, according to a regression analysis of window size against a precision index. The experimental results show that the proposed approach both reduces the influence of speckle noise and effectively improves the detection accuracy for high-resolution SAR images, outperforming a Markov random field approach.
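The threshold-selection step in approaches like this one is often implemented as Kapur-style maximum-entropy thresholding. As a rough, hypothetical sketch (not the authors' code; the histogram below is invented for illustration), the threshold is chosen to maximize the summed entropies of the two classes it induces:

```python
import numpy as np

def max_entropy_threshold(hist):
    """Kapur-style maximum-entropy thresholding: pick the bin t that
    maximizes the sum of the entropies of the background (< t) and
    foreground (>= t) distributions of a grayscale histogram."""
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Toy bimodal histogram: an "unchanged" peak near bin 2, a "changed" peak near bin 12.
hist = np.array([1, 5, 9, 5, 1, 0, 0, 0, 0, 0, 1, 5, 9, 5, 1])
t = max_entropy_threshold(hist)   # lands in the valley between the two peaks
```

In the paper's setting the histogram would come from the GLCM-based difference image rather than from raw intensities.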
Urban expressway traffic state forecasting based on multimode maximum entropy model
(no author listed)
2010-01-01
Accurate and timely traffic state prediction has become increasingly important for traffic participants, and especially for traffic management. In this paper, the traffic state is described by Micro-LOS, and a direct prediction method is introduced. The proposed method is based on Maximum Entropy (ME) models trained for multiple modes. In the Multimode Maximum Entropy (MME) framework, different features, such as the temporal and spatial features of traffic systems and the regional traffic state, are integrated simultaneously, and the different state behaviors based on 14 traffic modes, defined by average speed according to a date-time division, are also handled. Experiments based on real data from a Beijing expressway show that the MME models outperform the existing model in both effectiveness and robustness.
Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells. But besides the semiconductor bandgap no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum confined and molecular structures in the presence of a light harvesting mechanism.
Maximum Entropy Principle Based Estimation of Performance Distribution in Queueing Theory
He, Dayi; Li, Ran; Huang, Qi; Lei, Ping
2014-01-01
In related research on queuing systems, in order to determine the system state, there is a widespread practice to assume that the system is stable and that distributions of the customer arrival ratio and service ratio are known information. In this study, the queuing system is looked at as a black box without any assumptions on the distribution of the arrival and service ratios and only keeping the assumption on the stability of the queuing system. By applying the principle of maximum entropy, the performance distribution of queuing systems is derived from some easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness of fit test, the accuracy and generality for practical purposes of the principle of maximum entropy approach is demonstrated. PMID:25207992
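To make the flavor of this approach concrete: with only the mean number of customers known, maximizing Shannon entropy over n = 0, 1, 2, … yields a geometric distribution, which coincides with the M/M/1 steady-state law. A minimal sketch (the mean value below is illustrative, not taken from the paper):

```python
import numpy as np

# Maximum-entropy distribution on n = 0, 1, 2, ... given only the mean L:
# the geometric law p_n = (1 - x) * x**n with x = L / (1 + L), which
# mirrors the M/M/1 steady state p_n = (1 - rho) * rho**n.
L = 3.0                       # assumed mean number of customers in the system
x = L / (1.0 + L)
n = np.arange(2000)           # truncation of the infinite support
p = (1 - x) * x**n

# Both imposed pieces of information are recovered:
# p.sum() is ~1 (normalization) and (n * p).sum() is ~L (mean constraint).
```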
Derivation of some new distributions in statistical mechanics using maximum entropy approach
Ray Amritansu
2014-01-01
The maximum entropy principle has been used previously to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.) and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from another in the way in which the constraints are specified. In the present paper, we derive some new distributions, similar to the B.E. and F.D. distributions of statistical mechanics, using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
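For reference, the standard derivation that this abstract builds on can be summarized as follows (a sketch in conventional notation, not the paper's own): one maximizes the entropy of the mean occupation numbers n_i subject to fixed particle number and energy,

```latex
% Bose-Einstein case: maximize
S_{\mathrm{BE}} = \sum_i \left[(1+n_i)\ln(1+n_i) - n_i \ln n_i\right]
% Fermi-Dirac case: maximize
S_{\mathrm{FD}} = -\sum_i \left[n_i \ln n_i + (1-n_i)\ln(1-n_i)\right]
% subject to the moment constraints
\sum_i n_i = N, \qquad \sum_i n_i \,\varepsilon_i = E,
% which, via Lagrange multipliers (\alpha, \beta), yields
n_i = \frac{1}{e^{\alpha + \beta \varepsilon_i} \mp 1}
\quad (-\ \text{for B.E.},\ +\ \text{for F.D.}).
```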
Exact computation of the Maximum Entropy Potential of spiking neural networks models
Cofre, Rodrigo
2014-01-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)
2014-08-15
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
J. G. Dyke; Kleidon, A.
2010-01-01
The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the ...
Hyland, D. C.
1983-01-01
A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).
Lattice Field Theory with the Sign Problem and the Maximum Entropy Method
Masahiro Imachi
2007-02-01
Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.
Hyland, D. C.
1985-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modelling and reduced order control design method for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed and the application of the methodology to several large space structure (LSS) problems of representative complexity is illustrated.
Hyland, D. C.; Bernstein, D. S.
1987-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.
Mona Nazeri; Kamaruzaman Jusoff; Nima Madani; Ahmad Rodzi Mahmud; Abdul Rani Bahman; Lalit Kumar
2012-01-01
One of the available tools for mapping the geographical distribution and potential suitable habitats is species distribution models. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify ...
Li, X.; Chin, L. P.; Tankin, R. S.; Jackson, T.; Stutrud, J.; Switzer, G.
1991-07-01
Measurements were made of the droplet size and velocity distributions in a hollow cone spray from a pressure atomizer using a phase/Doppler particle analyzer. The maximum entropy principle is used to predict these distributions. The constraints imposed in this model involve conservation of mass, momentum, and energy. Estimates of the source terms associated with these constraints are made based on physical reasoning. Agreement between the measurements and the predictions is very good.
Application of the maximum relative entropy method to the physics of ferromagnetic materials
Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan
2016-08-01
It is known that the Maximum relative Entropy (MrE) method can be used to both update and approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.
A pairwise maximum entropy model accurately describes resting-state human brain networks.
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks.
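The model class in question can be made concrete at toy scale. A hypothetical sketch for three binary "regions" (the h and J values are invented, and the exact enumeration below only works for small systems; real fits to fMRI data use approximate inference):

```python
import itertools
import numpy as np

# Minimal pairwise maximum-entropy (Ising-type) model over three binary
# "regions": P(s) is proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).
h = np.array([0.2, -0.1, 0.0])           # region-specific biases (illustrative)
J = np.array([[0.0, 0.5, 0.0],           # symmetric pairwise couplings
              [0.5, 0.0, -0.3],
              [0.0, -0.3, 0.0]])

states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
log_w = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
weights = np.exp(log_w)
P = weights / weights.sum()              # normalized model distribution

mean_activity = P @ states                        # model activation rate per region
pair_corr = states.T @ (P[:, None] * states)      # model pairwise co-activation E[s_i s_j]
```

Fitting would then adjust h and J until mean_activity and pair_corr match their empirical counterparts; here the parameters are simply fixed for illustration.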
Reymbaut, A.; Gagnon, A.-M.; Bergeron, D.; Tremblay, A.-M. S.
2017-03-01
The computation of transport coefficients, even in linear response, is a major challenge for theoretical methods that rely on analytic continuation of correlation functions obtained numerically in Matsubara space. While maximum entropy methods can be used for certain correlation functions, this is not possible in general, important examples being the Seebeck, Hall, Nernst, and Righi-Leduc coefficients. Indeed, positivity of the spectral weight on the positive real-frequency axis is not guaranteed in these cases. The spectral weight can even be complex in the presence of broken time-reversal symmetry. Various workarounds, such as the neglect of vertex corrections or the study of the infinite frequency or Kelvin limits, have been proposed. Here, we show that one can define auxiliary response functions that allow one to extract the desired real-frequency susceptibilities from maximum entropy methods in the most general multiorbital cases with no particular symmetry. As a benchmark case, we study the longitudinal thermoelectric response and corresponding Onsager coefficient in the single-band two-dimensional Hubbard model treated with dynamical mean-field theory and continuous-time quantum Monte Carlo. We thereby extend the maximum entropy analytic continuation with auxiliary functions (MaxEntAux method), developed for the study of the superconducting pairing dynamics of correlated materials, to transport coefficients.
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and a variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately from small samples but inaccurately from very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, the LSMEQFMCC-based version being the most stable and accurate on very small samples (10 samples).
Applications of the principle of maximum entropy: from physics to ecology.
Banavar, Jayanth R; Maritan, Amos; Volkov, Igor
2010-02-17
There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
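The procedure advocated here, maximizing relative entropy (equivalently, minimizing KL divergence to a prior) subject to constraints, has a well-known closed form: an exponentially tilted prior. A small sketch with invented numbers (a fair-die-like uniform prior constrained to have mean 4):

```python
import numpy as np

# Minimize KL(p || q) subject to E_p[x] = m: the solution is
# p_i proportional to q_i * exp(lam * x_i), with lam chosen so the
# mean constraint holds.
x = np.arange(6, dtype=float)    # support: 0..5
q = np.ones(6) / 6               # uniform prior (prior mean 2.5)
m = 4.0                          # imposed mean constraint

def tilted_mean(lam):
    w = q * np.exp(lam * x)
    return (w * x).sum() / w.sum()

# tilted_mean is increasing in lam; solve for lam by bisection.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if tilted_mean(mid) < m:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = q * np.exp(lam * x)
p /= p.sum()                     # the maximum-relative-entropy posterior
```

Since m exceeds the prior mean, lam comes out positive and p shifts weight toward large x, exactly the behavior the principle prescribes.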
A Robust Image Tampering Detection Method Based on Maximum Entropy Criteria
Bo Zhao
2015-12-01
This paper proposes a novel image watermarking method based on local energy and maximum entropy, aiming to improve robustness. First, the image feature distribution is extracted by employing the local energy model, and it is then transformed into a digital watermark by employing a Discrete Cosine Transform (DCT). An offset image is thus obtained according to the difference between the extracted digital watermark and the feature distribution of the watermarked image. The entropy of the pixel value distribution is computed first, and the Lorenz curve is used to measure the polarization degree of the pixel value distribution. In the pixel location distribution flow, the maximum entropy criterion is applied in segmenting the offset image into potentially tampered regions and unchanged regions. A fully connected graph and a 2-D Gaussian probability model are utilized to obtain the probability distribution of the pixel locations. Finally, the factitious tampering probability value of an image under detection is computed by combining the weighting factors of the pixel value and pixel location distributions. Experimental results show that the proposed method is more robust against commonly used image processing operations, such as Gaussian noise, impulse noise, etc. Simultaneously, the proposed method achieves high sensitivity against factitious tampering.
Polyatomic gases with dynamic pressure: Maximum entropy principle and shock structure
Pavić-Čolić, Milana; Simić, Srboljub
2016-01-01
This paper is concerned with the analysis of polyatomic gases within the framework of kinetic theory. Internal degrees of freedom are modeled using a single continuous variable corresponding to the molecular internal energy. Non-equilibrium velocity distribution function, compatible with macroscopic field variables, is constructed using the maximum entropy principle. A proper collision cross section is constructed which obeys the micro-reversibility requirement. The source term and entropy production rate are determined in the form which generalizes the results obtained within the framework of extended thermodynamics. They can be adapted to appropriate physical situations due to the presence of parameters. They are also compared with the results obtained using BGK approximation. For the proposed model the shock structure problem is thoroughly analyzed.
A maximum entropy distribution for wave heights of non-linear sea waves
(no author listed)
2007-01-01
Based on the maximum entropy principle, a probability density function (PDF) for the zero-crossing wave height (H) of random waves is derived in the simple form f_n(H) = αH^γ e^{−βH^n} (n is a selectable positive integer) by solving a variational problem subject to some quite general constraints. This PDF maximizes the information entropy of H; it is applicable to non-linear sea waves with large uncertainty, and its parameters α, γ and β can be simply determined from available data. Comparisons between the PDF with n = 3 and n = 4 and the observed distributions of H from wave records measured in the East China Sea and in a wind-wave tunnel show fairly satisfactory agreement.
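As a quick numerical illustration of the quoted form f_n(H) = αH^γ e^{−βH^n} (the parameter values below are invented, not the fitted ones from the paper; α is fixed by normalization):

```python
import numpy as np

# Sanity check of the maximum-entropy wave-height PDF
# f_n(H) = alpha * H**gamma * exp(-beta * H**n) on a fine grid.
gamma_, beta_, n_ = 2.0, 0.5, 3          # illustrative parameter values
H = np.linspace(0.0, 12.0, 120001)
dH = H[1] - H[0]
core = H**gamma_ * np.exp(-beta_ * H**n_)
alpha = 1.0 / (core.sum() * dH)          # normalization constant
f = alpha * core

total = f.sum() * dH                     # ~1 by construction
mean_H = (H * f).sum() * dH              # mean zero-crossing wave height
mode_H = H[np.argmax(f)]                 # analytic mode: (gamma/(beta*n))**(1/n)
```

The grid mode matches the analytic stationary point H* = (γ/(βn))^{1/n}, a convenient check that the implemented density really has the stated shape.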
Beretta, Gian Paolo
2014-10-01
By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium.
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
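A classical analog conveys the projected-gradient idea at a scale one can run by hand: maximum-likelihood estimation of multinomial probabilities by gradient ascent with Euclidean projection onto the probability simplex (the counts and step size below are invented; the quantum version replaces the simplex with the set of density matrices and adds acceleration):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {p : p >= 0, sum(p) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

counts = np.array([5.0, 30.0, 65.0])   # observed multinomial counts (illustrative)
p = np.ones(3) / 3                     # start from the uniform distribution
for _ in range(3000):
    grad = counts / np.maximum(p, 1e-12)   # gradient of sum_k n_k * log(p_k)
    p = project_simplex(p + 2e-4 * grad)   # ascent step, then project back

# p converges to the closed-form MLE, counts / counts.sum()
```

In the quantum tomography problem the closed form is unavailable, which is why the iterative projected-gradient scheme matters; the classical case merely makes its convergence easy to verify.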
Movahednejad, E.; Ommi, F.; Hosseinalipour, S. M.; Chen, C. P.; Mahdavi, S. A.
2011-12-01
This paper describes the implementation of the instability analysis of wave growth on liquid jet surface, and maximum entropy principle (MEP) for prediction of droplet diameter distribution in primary breakup region. The early stage of the primary breakup, which contains the growth of wave on liquid-gas interface, is deterministic; whereas the droplet formation stage at the end of primary breakup is random and stochastic. The stage of droplet formation after the liquid bulk breakup can be modeled by statistical means based on the maximum entropy principle. The MEP provides a formulation that predicts the atomization process while satisfying constraint equations based on conservations of mass, momentum and energy. The deterministic aspect considers the instability of wave motion on jet surface before the liquid bulk breakup using the linear instability analysis, which provides information of the maximum growth rate and corresponding wavelength of instabilities in breakup zone. The two sub-models are coupled together using momentum source term and mean diameter of droplets. This model is also capable of considering drag force on droplets through gas-liquid interaction. The predicted results compared favorably with the experimentally measured droplet size distributions for hollow-cone sprays.
de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/.
Surface Elevation Distribution of Sea Waves Based on the Maximum Entropy Principle
戴德君; 王伟; 钱成春; 孙孚
2001-01-01
A probability density function of surface elevation is obtained through improvement of the method introduced by Cieslikiewicz, who employed the maximum entropy principle to investigate the surface elevation distribution. The density function can easily be extended to higher order according to demand and is non-negative everywhere, satisfying the basic behavior of a probability. Moreover, because the distribution is derived without any assumption about sea waves, comparison with several accepted distributions shows that the new form of distribution can be applied to a wider range of wave conditions. In addition, the density function can be used to fit some observed distributions of surface vertical acceleration, although some issues remain unsolved.
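The construction underlying abstracts like this one — maximize entropy subject to moment constraints — can be illustrated numerically. The sketch below is not the paper's method; the grid, the prescribed moments (mean 0, variance 1), and the step size are illustrative assumptions. With these constraints the maximum-entropy density should approach a standard Gaussian, i.e. an exponential-family solution with quadratic exponent:

```python
# Sketch: recover a maximum-entropy PDF from moment constraints alone.
# Assumed setup: impose E[X] = 0 and E[X^2] = 1 on a truncated grid and
# descend the convex dual over the Lagrange multipliers.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
feats = np.vstack([x, x**2])          # constraint features
targets = np.array([0.0, 1.0])        # prescribed moments

lam = np.zeros(2)
for _ in range(20000):                # gradient descent on the convex dual
    logp = lam @ feats
    logp -= logp.max()                # numerical stabilisation
    p = np.exp(logp)
    p /= p.sum() * dx                 # normalised density on the grid
    moments = feats @ p * dx          # current E[X], E[X^2]
    lam -= 0.01 * (moments - targets)

# The maximiser lies in the exponential family p(x) ∝ exp(lam1*x + lam2*x^2);
# with these constraints lam2 ≈ -0.5, i.e. a standard normal density.
```

The same dual structure carries over when the constraints are higher-order moments, which is how maximum-entropy surface-elevation PDFs acquire skewness and kurtosis terms.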
The SIS and SIR stochastic epidemic models: a maximum entropy approach.
Artalejo, J R; Lopez-Herrero, M J
2011-12-01
We analyze the dynamics of infectious disease spread by formulating the maximum entropy (ME) solutions of the susceptible-infected-susceptible (SIS) and the susceptible-infected-removed (SIR) stochastic models. Several scenarios providing helpful insight into the use of the ME formalism for epidemic modeling are identified. The ME results are illustrated with respect to several descriptors, including the number of recovered individuals and the time to extinction. An application to infectious data from outbreaks of extended spectrum beta lactamase (ESBL) in a hospital is also considered.
Collins, Emmanuel G., Jr.; Richter, Stephen
1990-01-01
One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.
1975-06-25
... conjugates of the roots of AH V. Thus the forward prediction error filter is a minimum phase filter. Since its output does not precede any of its input points ... circle. The inverse of the forward prediction error filter is also a causal minimum phase filter. The inverse filter can be used to construct the ... filter is a maximum phase filter (a minimum phase filter if the direction of time is reversed). When the maximum entropy assumption is valid, it ...
A MAXIMUM ENTROPY CHUNKING MODEL WITH N-FOLD TEMPLATE CORRECTION
Anonymous
2007-01-01
This letter presents a new chunking method based on a Maximum Entropy (ME) model with an N-fold template correction model. First, two types of machine learning models are described. Based on an analysis of the two models, a chunking model is then proposed that combines the benefits of the conditional probability model and the rule-based model. The selection of features and rule templates in the chunking model is discussed. Experimental results on the CoNLL-2000 corpus show that this approach achieves an impressive F-score of 92.93%. Compared with the ME model and the ME Markov model, the new chunking model achieves better performance.
Fiebig, H R
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Nuclear Enhanced X-ray Maximum Entropy Method Used to Analyze Local Distortions in Simple Structures
Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens
... the ideal, undistorted rock-salt structure. NEXMEM employs a simple procedure to normalize extracted structure factors to the atomic form factors. The NDD is reconstructed by performing maximum entropy calculations on the normalized structure factors. NEXMEM has been validated by testing against simulated ... In addition, we have applied NEXMEM to multi-temperature synchrotron powder X-ray diffraction data collected on PbX. Based on powder diffraction data, our study demonstrates that NEXMEM successfully improves the atomic resolution over standard MEM. This new tool aids our understanding of the local distortions ...
Chaos control of ferroresonance system based on RBF-maximum entropy clustering algorithm
Liu Fan [Key Lab of High Voltage and Electrical New Technology of Ministry of Education, Chongqing University, Chongqing 400044 (China)]. E-mail: liufan2003@yahoo.com.cn; Sun Caixin [Key Lab of High Voltage and Electrical New Technology of Ministry of Education, Chongqing University, Chongqing 400044 (China); Sima Wenxia [Key Lab of High Voltage and Electrical New Technology of Ministry of Education, Chongqing University, Chongqing 400044 (China); Liao Ruijin [Key Lab of High Voltage and Electrical New Technology of Ministry of Education, Chongqing University, Chongqing 400044 (China); Guo Fei [Key Lab of High Voltage and Electrical New Technology of Ministry of Education, Chongqing University, Chongqing 400044 (China)
2006-09-11
To address the ferroresonance overvoltage of neutral grounded power systems, a maximum-entropy learning algorithm based on radial basis function neural networks is used to control the chaotic system. The algorithm optimizes the objective function to derive the learning rule for the central vectors, and uses the clustering function of the network hidden layers. It improves the regression and learning ability of the neural network. Numerical experiments on the ferroresonance system verify the effectiveness and feasibility of using the algorithm to control chaos in neutral grounded systems.
Zhao, Quanyu; Kurata, Hiroyuki
2010-08-01
Elementary mode (EM) analysis is potentially effective in integrating transcriptome or proteome data into metabolic network analyses and in exploring how the phenotypic or metabolic flux distribution changes with respect to environmental and genetic perturbations. The EM coefficients (EMCs) indicate the quantitative contribution of their associated EMs and can be estimated by maximizing Shannon's entropy as a general objective function, as in our previous study, but the use of EMCs has been restricted to relatively small-scale networks. We propose a fast and universal method that optimizes hundreds of thousands of EMCs under the constraint of the maximum entropy principle (MEP). Lagrange multipliers (LMs) are applied to maximize the Shannon entropy-based objective function, analytically solving each EMC as a function of the LMs. Consequently, the number of search variables is dramatically reduced from the number of EMCs to the number of reactions. To demonstrate the feasibility of the MEP with Lagrange multipliers (MEPLM), it is coupled with enzyme control flux (ECF) to predict the flux distributions of Escherichia coli and Saccharomyces cerevisiae under different conditions (gene deletion, adaptive evolution, temperature, and dilution rate) and to provide a quantitative understanding of how metabolic or physiological states change in response to these genetic or environmental perturbations at the elementary mode level. It is shown that the ECF-based method is a feasible framework for the prediction of metabolic flux distributions by integrating enzyme activity data into EMs under genetic and environmental perturbations.
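The key reduction described in this abstract — each EMC written as a closed-form function of one Lagrange multiplier per reaction, so the search shrinks from the number of EMs to the number of reactions — can be sketched on toy data. The matrix `A`, the flux vector, and the step size below are illustrative assumptions, not the paper's metabolic networks:

```python
# Sketch: 50 elementary-mode coefficients (EMCs) collapse to 3 Lagrange
# multipliers, one per reaction. A maps EMC weights to reaction fluxes.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(3, 50))   # reactions x elementary modes
c_true = rng.dirichlet(np.ones(50))       # a ground-truth EMC distribution
v_obs = A @ c_true                        # observed fluxes

lam = np.zeros(3)                         # one multiplier per reaction
for _ in range(20000):
    c = np.exp(lam @ A)
    c /= c.sum()                          # each EMC is closed-form in lam
    lam -= 0.5 * (A @ c - v_obs)          # descend the convex dual

# c now satisfies A @ c = v_obs and has maximal Shannon entropy among
# all EMC distributions consistent with the observed fluxes.
```

Only the three multipliers were searched; the fifty EMCs fell out analytically, which is the dimensionality reduction the abstract exploits.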
Local solutions of Maximum Likelihood Estimation in Quantum State Tomography
Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto
2011-01-01
Maximum likelihood estimation is one of the most used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases, however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors attribute a source of errors to the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have the global convergence property or that present numerical instabilities. The clarification of this point has important repercussions on quantum informat...
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation
Bergeron, Dominic; Tremblay, A.-M. S.
2016-08-01
Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
Quantifying extrinsic noise in gene expression using the maximum entropy framework.
Dixit, Purushottam D
2013-06-18
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed.
Maximum entropy reconstructions of dynamic signaling networks from quantitative proteomics data.
Locasale, Jason W; Wolf-Yadlin, Alejandro
2009-08-26
Advances in mass spectrometry among other technologies have allowed for quantitative, reproducible, proteome-wide measurements of levels of phosphorylation as signals propagate through complex networks in response to external stimuli under different conditions. However, computational approaches to infer elements of the signaling network strictly from the quantitative aspects of proteomics data are not well established. We considered a method using the principle of maximum entropy to infer a network of interacting phosphotyrosine sites from pairwise correlations in a mass spectrometry data set and derive a phosphorylation-dependent interaction network solely from quantitative proteomics data. We first investigated the applicability of this approach by using a simulation of a model biochemical signaling network whose dynamics are governed by a large set of coupled differential equations. We found that in a simulated signaling system, the method detects interactions with significant accuracy. We then analyzed a growth factor mediated signaling network in a human mammary epithelial cell line that we inferred from mass spectrometry data and observe a biologically interpretable, small-world structure of signaling nodes, as well as a catalog of predictions regarding the interactions among previously uncharacterized phosphotyrosine sites. For example, the calculation places a recently identified tumor suppressor pathway through ARHGEF7 and Scribble, in the context of growth factor signaling. Our findings suggest that maximum entropy derived network models are an important tool for interpreting quantitative proteomics data.
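For continuous signals with known pairwise correlations, the maximum-entropy model is a Gaussian whose inverse covariance (precision) matrix encodes the direct interactions; near-zero off-diagonal entries mean no direct link. The following is a minimal sketch of that inference step on synthetic data, with a hypothetical five-node chain network standing in for phosphotyrosine sites (not the paper's data set):

```python
# Sketch: infer a maximum-entropy interaction network from pairwise
# correlations via the precision matrix. True network is the chain
# 0-1-2-3-4; non-adjacent pairs have no direct interaction.
import numpy as np

n = 5
P_true = np.eye(n) * 2.0                      # true precision matrix
for i in range(n - 1):
    P_true[i, i + 1] = P_true[i + 1, i] = -0.8

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(n), np.linalg.inv(P_true),
                                  size=50000)
P_est = np.linalg.inv(np.cov(samples.T))      # maxent network estimate

# Chain links show up as large off-diagonal entries of P_est; pairs that
# are only indirectly correlated (e.g. 0 and 2) stay close to zero.
```

This separation of direct from indirect coupling is what lets the maximum-entropy model turn dense pairwise correlations into an interpretable network.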
Possible dynamical explanations for Paltridge's principle of maximum entropy production
Virgo, Nathaniel, E-mail: nathanielvirgo@gmail.com; Ikegami, Takashi, E-mail: nathanielvirgo@gmail.com [Ikegami Laboratory, University of Tokyo (Japan)
2014-12-05
Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.
Bajkova, Anisa T
2011-01-01
We propose the multi-frequency synthesis (MFS) algorithm with spectral correction of frequency-dependent source brightness distribution based on maximum entropy method. In order to take into account the spectral terms of n-th order in the Taylor expansion for the frequency-dependent brightness distribution, we use a generalized form of the maximum entropy method suitable for reconstruction of not only positive-definite functions, but also sign-variable ones. The proposed algorithm is aimed at producing both improved total intensity image and two-dimensional spectral index distribution over the source. We consider also the problem of frequency-dependent variation of the radio core positions of self-absorbed active galactic nuclei, which should be taken into account in a correct multi-frequency synthesis. First, the proposed MFS algorithm has been tested on simulated data and then applied to four-frequency synthesis imaging of the radio source 0954+658 from VLBA observational data obtained quasi-simultaneously ...
Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method
Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo
2015-11-01
In every X-ray spectroscopy measurement the influence of the detection system causes a loss of information. Different mechanisms contribute to the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the finite energy resolution, and, in solid-state detectors (SSDs), charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of known a priori information and preserving the positive-definite character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. In the past, UMESTRAT proved capable of resolving characteristic peaks that appeared overlapped in measurements with a Si SSD, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons in the spectrum, which can easily be determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding for three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered a software improvement of the detector resolution.
A New Maximum Entropy Probability Function for the Surface Elevation of Nonlinear Sea Waves
ZHANG Li-zhen; XU De-lun
2005-01-01
Based on the maximum entropy principle, a new probability density function (PDF) f(x) for the surface elevation of nonlinear sea waves, X, is derived by performing a coordinate transform of X and solving a variational problem subject to three constraint conditions on f(x). Compared with the maximum entropy PDFs presented previously, the new PDF has the following merits: (1) it has four parameters to be determined and hence can give a more refined fit to observed data and has wider suitability for nonlinear waves in different conditions; (2) these parameters are expressed in terms of the distribution moments of X in a relatively simple form and hence are easy to determine from observed data; (3) the PDF is free of the restriction of weak nonlinearity and can be used for sea waves in complicated conditions, such as those in shallow waters with complicated topography; and (4) the PDF is simple in form and hence convenient for theoretical and practical uses. Laboratory wind-wave experiments have been conducted to test the suitability of the new PDF for the surface elevation of nonlinear waves. The experimental results show that the new PDF gives a somewhat better fit to the laboratory wind-wave data than the well-known Gram-Charlier PDF and beta PDF.
A maximum entropy approach to separating noise from signal in bimodal affiliation networks
Dianati, Navid
2016-01-01
In practice, many empirical networks, including co-authorship and collocation networks are unimodal projections of a bipartite data structure where one layer represents entities, the second layer consists of a number of sets representing affiliations, attributes, groups, etc., and an inter-layer link indicates membership of an entity in a set. The edge weight in the unimodal projection, which we refer to as a co-occurrence network, counts the number of sets to which both end-nodes are linked. Interpreting such dense networks requires statistical analysis that takes into account the bipartite structure of the underlying data. Here we develop a statistical significance metric for such networks based on a maximum entropy null model which preserves both the frequency sequence of the individuals/entities and the size sequence of the sets. Solving the maximum entropy problem is reduced to solving a system of nonlinear equations for which fast algorithms exist, thus eliminating the need for expensive Monte-Carlo sam...
A novel impact identification algorithm based on a linear approximation with maximum entropy
Sanchez, N.; Meruane, V.; Ortiz-Bernardin, A.
2016-09-01
This article presents a novel impact identification algorithm that uses a linear approximation handled by a statistical inference model based on the maximum-entropy principle, termed linear approximation with maximum entropy (LME). Unlike other regression algorithms such as artificial neural networks (ANNs) and support vector machines, the proposed algorithm requires only one parameter to be selected, and the impact is identified by solving a convex optimization problem that has a unique solution. In addition, LME processes data in a time comparable to that of the other algorithms. The performance of the proposed methodology is validated on an experimental aluminum plate. Time-varying strain data are measured using four piezoceramic sensors bonded to the plate. To demonstrate the potential of the proposed approach over existing ones, results obtained via LME are compared with those of ANN and least squares support vector machines. The results demonstrate that with a low number of sensors it is possible to accurately locate and quantify impacts on a structure, and that LME outperforms the other impact identification algorithms.
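The core construction behind LME-style approximants — entropy-maximizing shape functions subject to partition of unity and first-order consistency — can be sketched in one dimension. The node layout and query point below are illustrative assumptions; the paper's formulation is multidimensional and typically includes a locality parameter, omitted here:

```python
# Sketch: 1D maximum-entropy shape functions. Maximise -sum(w_i log w_i)
# subject to sum(w_i) = 1 and sum(w_i * x_i) = x; the solution is
# w_i ∝ exp(-lam*(x_i - x)) with a single multiplier found by Newton.
import numpy as np

def lme_weights(nodes, x, iters=50):
    d = nodes - x                       # shifted node coordinates
    lam = 0.0
    for _ in range(iters):              # Newton solve of sum(w_i * d_i) = 0
        e = np.exp(-lam * d)
        g = e @ d                       # constraint residual
        h = -(e @ (d * d))              # its derivative (always negative)
        lam -= g / h
    w = np.exp(-lam * d)
    return w / w.sum()

nodes = np.linspace(0.0, 1.0, 5)
w = lme_weights(nodes, 0.3)
# First-order consistency: any linear function is reproduced exactly.
f = 2.0 * nodes + 1.0
approx = w @ f                          # should equal 2*0.3 + 1 = 1.6
```

Because the underlying problem is convex with a unique solution, the weights are well defined for any query point inside the node hull, which is the uniqueness property the abstract emphasizes.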
Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy
Jonathan Borwein
2014-02-01
Full Text Available We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.
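The modelling strategy above — gamma marginals coupled through a copula that fixes the rank correlation — can be sketched with a Gaussian copula standing in for the paper's checkerboard copula of maximum entropy. All parameter values (shape, scale, latent correlation) are purely illustrative, not the Kempsey estimates:

```python
# Sketch: couple two gamma-distributed monthly rainfall totals through a
# Gaussian copula so the marginals stay gamma while a prescribed rank
# dependence is injected between months.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rho = 0.5                                          # latent correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=50000)
u = stats.norm.cdf(z)                              # copula: uniform marginals
jan = stats.gamma.ppf(u[:, 0], a=2.0, scale=50.0)  # monthly totals, mm
feb = stats.gamma.ppf(u[:, 1], a=2.0, scale=50.0)

rank_corr = stats.spearmanr(jan, feb)[0]
# Each month keeps its gamma mean (a*scale = 100 mm); the copula alone
# controls the Spearman rank correlation between the months.
```

Matching the copula's grade correlations to observed Spearman coefficients, as the paper does, avoids any a posteriori adjustment of the simulated monthly totals.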
Disproportionate Allocation of Indirect Costs at Individual-Farm Level Using Maximum Entropy
Markus Lips
2017-08-01
Full Text Available This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models, the basic CoreModel and the InequalityModel, which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as the reference base. The mean differences of the enterprises' allocated labour inputs and machinery costs are in a range of up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation method has a strong influence on the resulting indirect costs. Furthermore, the application of inequality restrictions is a precondition for making the merits of the maximum entropy principle accessible for the allocation of indirect costs.
del Jesus, Manuel; Foti, Romano; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio
2012-12-18
The spatial organization of functional vegetation types in river basins is a major determinant of their runoff production, biodiversity, and ecosystem services. The optimization of different objective functions has been suggested to control the adaptive behavior of plants and ecosystems, often without a compelling justification. Maximum entropy production (MEP), rooted in thermodynamics principles, provides a tool to justify the choice of the objective function controlling vegetation organization. The application of MEP at the ecosystem scale results in maximum productivity (i.e., maximum canopy photosynthesis) as the thermodynamic limit toward which the organization of vegetation appears to evolve. Maximum productivity, which incorporates complex hydrologic feedbacks, allows us to reproduce the spatial macroscopic organization of functional types of vegetation in a thoroughly monitored river basin, without the need for a reductionist description of the underlying microscopic dynamics. The methodology incorporates the stochastic characteristics of precipitation and the associated soil moisture on a spatially disaggregated framework. Our results suggest that the spatial organization of functional vegetation types in river basins naturally evolves toward configurations corresponding to dynamically accessible local maxima of the maximum productivity of the ecosystem.
Gian Paolo Beretta
2008-08-01
Full Text Available A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager reciprocity theorem and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
S. Pascale
2012-01-01
Full Text Available The objective of this paper is to reconsider the Maximum Entropy Production conjecture (MEP) in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under the condition of fixed insolation, a MEP solution is found with reasonably realistic temperatures and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and the vertical entropy production terms are independently involved in the maximisation, and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure appears unrealistic and exhibits seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but that the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of the contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided using two different methods. In both cases we find that approximately 40 mW m^{−2} K^{−1} of material entropy production is due to vertical heat transport and 5–7 mW m^{−2} K^{−1} to horizontal heat transport.
Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory
Taylor, Jamie M.
2016-09-01
This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.
Analysis of simulated fluorescence intensities decays by a new maximum entropy method algorithm.
Esposito, Rosario; Altucci, Carlo; Velotta, Raffaele
2013-01-01
A new algorithm for the Maximum Entropy Method (MEM) is proposed for recovering the lifetime distribution in time-resolved fluorescence decays. The procedure seeks the distribution that maximizes the Skilling entropy function subject to the chi-squared constraint χ² ≈ 1, through iterative linear approximations, LU decomposition of the Hessian matrix of the Lagrangian problem, and the Golden Section Search for backtracking. The accuracy of this algorithm has been investigated through comparisons with simulated fluorescence decays of both narrow and broad lifetime distributions. The proposed approach is capable of analysing datasets of up to 4,096 points with a discretization ranging from 100 to 1,000 lifetimes. Good agreement with non-linear fitting estimates has been observed when the method has been applied to multi-exponential decays. Remarkable results have also been obtained for broad lifetime distributions, where the position is recovered with high accuracy and the distribution width is estimated to within 3%. These results indicate that the proposed procedure generates MEM lifetime distributions that can be used to quantify the real heterogeneity of lifetimes in a sample.
Maximum joint entropy and information-based collaboration of automated learning machines
Malakar, N. K.; Knuth, K. H.; Lary, D. J.
2012-05-01
We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention. To accomplish this, an intelligent agent is viewed as a question-asking machine, which is designed by coupling the processes of inference and inquiry to form a model-based learning unit. In order to select maximally-informative queries, the intelligent agent needs to be able to compute the relevance of a question. This is accomplished by employing the inquiry calculus, which is dual to the probability calculus, and extends information theory by explicitly requiring context. Here, we consider the interaction between two question-asking intelligent agents, and note that there is a potential information redundancy with respect to the two questions that the agents may choose to pose. We show that the information redundancy is minimized by maximizing the joint entropy of the questions, which simultaneously maximizes the relevance of each question while minimizing the mutual information between them. Maximum joint entropy is therefore an important principle of information-based collaboration, which enables intelligent agents to efficiently learn together.
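The redundancy argument above rests on the identity I(X;Y) = H(X) + H(Y) − H(X,Y): with the marginals fixed, maximizing the joint entropy of the two questions minimizes their mutual information. A minimal numerical check (the joint tables below are illustrative, not from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (zeros skipped)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint probability table."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Two joints with identical uniform marginals over two outcomes:
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])   # maximal joint entropy (2 bits)
redundant   = np.array([[0.5, 0.0],
                        [0.0, 0.5]])     # minimal joint entropy (1 bit)

# With marginals fixed, the larger H(X,Y) is, the smaller I(X;Y):
print(mutual_information(independent))   # 0.0 bits: no redundancy
print(mutual_information(redundant))     # 1.0 bit: fully redundant questions
```

Maximizing H(X,Y) at fixed H(X) and H(Y) therefore drives the redundancy term I(X;Y) to its minimum, which is the collaboration principle the abstract states.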
Pier A Mello; Eugene Kogan
2002-02-01
We present a maximum-entropy model for the transport of waves through a classically chaotic cavity in the presence of absorption. The entropy of the S-matrix statistical distribution is maximized, with the constraint ⟨Tr SS†⟩ = αn, where n is the dimensionality of S and 0 ≤ α ≤ 1. For α = 1 the S-matrix distribution concentrates on the unitarity sphere and we have no absorption; for α = 0 the distribution becomes a delta function at the origin and we have complete absorption. For strong absorption our result agrees with a number of analytical calculations already given in the literature. In that limit, the distribution of the individual (angular) transmission and reflection coefficients becomes exponential (Rayleigh statistics) even for n = 1. For n ≫ 1, Rayleigh statistics is attained even with no absorption; here we extend the study to α < 1. The model is compared with random-matrix-theory numerical simulations: it describes the problem very well for strong absorption, but fails for moderate and weak absorptions. The success of the model for strong absorption is understood in the light of a central-limit theorem. For weak absorption, some important physical constraint is missing in the construction of the model.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
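The canonical form this abstract describes, p(m) ∝ exp(−βE(m)) with the sensitivity factor β fixed by the expectation value of the error function, can be sketched on a discrete model space. The error values and target expectation below are hypothetical, and bisection stands in for whatever root-finder the authors used:

```python
import numpy as np

def canonical(beta, E):
    """Maximum-entropy distribution p(m) ~ exp(-beta * E(m))."""
    w = np.exp(-beta * (E - E.min()))   # shift exponent for numerical stability
    return w / w.sum()

def solve_beta(E, target, lo=0.0, hi=1e3):
    """Bisect for the beta at which <E> under the canonical form hits target.
    <E> decreases monotonically in beta, so bisection is safe."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if canonical(mid, E) @ E > target:
            lo = mid    # mean error still too high: sharpen the distribution
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical error-function values for a grid of candidate seabed models:
E = np.array([0.2, 0.5, 0.9, 1.4, 2.0])
target = 0.6                    # expectation fixed by the data samples
beta = solve_beta(E, target)
p = canonical(beta, E)
print(p @ E)                    # ~0.6: the constraint is satisfied
```

Marginals for individual parameters would then follow by summing p over the remaining parameter axes, as the abstract notes.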
Source Function Determined from HBT Correlations by the Maximum Entropy Principle
Wu, Yuanfang; Heinz, Ulrich
1996-01-01
We study the reconstruction of the source function in space-time directly from the measured HBT correlation function using the Maximum Entropy Principle. We find that the problem is ill-defined without at least one additional theoretical constraint as input. Using the requirement of a finite source lifetime for the latter we find a new Gaussian parametrization of the source function directly in terms of the measured HBT radius parameters and its lifetime, where the latter is a free parameter which is not directly measurable by HBT. We discuss the implications of our results for the remaining freedom in building source models consistent with a given set of measured HBT radius parameters.
Determination of zero-coupon and spot rates from treasury data by maximum entropy methods
Gzyl, Henryk; Mayoral, Silvia
2016-08-01
An interesting and important inverse problem in finance consists of the determination of spot rates or prices of the zero coupon bonds, when the only information available consists of the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method to treat such problems, which neither imposes an analytic form on the rates or bond prices, nor assumes a model for the (random) evolution of the yields. The procedure consists of transforming the problem of the determination of the prices of the zero coupon bonds into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
A maximum-entropy approach to the adiabatic freezing of a supercooled liquid.
Prestipino, Santi
2013-04-28
I employ the van der Waals theory of Baus and co-workers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath.
RESEARCH OF PINYIN-TO-CHARACTER CONVERSION BASED ON MAXIMUM ENTROPY MODEL
Zhao Yan; Wang Xiaolong; Liu Bingquan; Guan Yi
2006-01-01
This paper applies the Maximum Entropy (ME) model to Pinyin-To-Character (PTC) conversion instead of the Hidden Markov Model (HMM), which cannot incorporate complicated and long-distance lexical information. Two ME models were built, based on simple and complex templates respectively, and the complex one gave better conversion results. Furthermore, the conversion trigger pair yA → yB/cB was proposed to extract long-distance constraint features from the corpus; Average Mutual Information (AMI) was then used to select the conversion trigger pair features added to the ME model. The experiment shows that the conversion error of the ME model with conversion trigger pairs is reduced by 4% on a small training corpus, compared with an HMM smoothed by absolute smoothing.
(author not listed)
2009-01-01
In order to restrain the mid-spatial frequency error in the magnetorheological finishing (MRF) process, a novel part-random path is designed based on the theory of the maximum entropy method (MEM). Using a KDMRF-1000F polishing machine, one flat work piece (98 mm in diameter) is polished. The mid-spatial frequency error in the region using the part-random path is much lower than that using the common raster path. After one MRF iteration (7.46 min), peak-to-valley (PV) is 0.062 wave (1 wave = 632.8 nm), root-mean-square (RMS) is 0.010 wave, and no obvious mid-spatial frequency error is found. The result shows that the part-random path is a novel path which yields high form accuracy and low mid-spatial frequency error in the MRF process.
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
Background adjustment of cDNA microarray images by Maximum Entropy distributions.
Argyropoulos, Christos; Daskalakis, Antonis; Nikiforidis, George C; Sakellaropoulos, George C
2010-08-01
Many empirical studies have demonstrated the exquisite sensitivity of both traditional and novel statistical and machine intelligence algorithms to the method of background adjustment used to analyze microarray datasets. In this paper we develop a statistical framework that approaches background adjustment as a classic stochastic inverse problem, whose noise characteristics are given in terms of Maximum Entropy distributions. We derive analytic closed form approximations to the combined problem of estimating the magnitude of the background in microarray images and adjusting for its presence. The proposed method reduces standardized measures of log expression variability across replicates in situations of known differential and non-differential gene expression without increasing the bias. Additionally, it results in computationally efficient procedures for estimation and learning based on sufficient statistics and can filter out spot measures with intensities that are numerically close to the background level resulting in a noise reduction of about 7%.
Yin, Lo I.; Bielefeld, Michael J.
1987-01-01
The maximum entropy method (MEM) and balanced correlation method were used to reconstruct the images of low-intensity X-ray objects obtained experimentally by means of a uniformly redundant array coded aperture system. The reconstructed images from MEM are clearly superior. However, the MEM algorithm is computationally more time-consuming because of its iterative nature. On the other hand, both the inherently two-dimensional character of images and the iterative computations of MEM suggest the use of parallel processing machines. Accordingly, computations were carried out on the massively parallel processor at Goddard Space Flight Center as well as on the serial processing machine VAX 8600, and the results are compared.
Source Function Determined from Hanbury-Brown/Twiss Correlations by the Maximum Entropy Principle
Wu, Yuanfang; Liu, Lianshou
2002-01-01
We study the reconstruction of the source function in space-time directly from the measured Hanbury-Brown/Twiss (HBT) correlation function using the maximum entropy principle. We find that the problem is ill-defined without at least one additional theoretical constraint as input. Using the requirement of a finite source lifetime as this constraint, we find a new Gaussian parametrization of the source function directly in terms of the measured HBT radius parameters and its lifetime, where the latter is a free parameter which is not directly measurable by HBT. We discuss the implications of our results for the remaining freedom in building source models consistent with a given set of measured HBT radius parameters.
Parallelization of Maximum Entropy POS Tagging for Bahasa Indonesia with MapReduce
Arif Nurwidyantoro
2012-07-01
Full Text Available In this paper, the MapReduce programming model is used to parallelize the training and tagging processes in maximum entropy part-of-speech tagging for Bahasa Indonesia. In the training process, MapReduce is implemented for dictionary, tagtoken, and feature creation. In the tagging process, MapReduce is implemented to tag the lines of a document in parallel. The training experiments showed that the total training time using MapReduce is faster, but the result-reading time inside the process slows down the total training time. The tagging experiments, using different numbers of map and reduce processes, showed that the MapReduce implementation could speed up the tagging process. The fastest tagging result was obtained by the tagging process using a 1,000,000-word corpus and 30 map processes.
Maximum Entropy Threshold Segmentation for Target Matching Using Speeded-Up Robust Features
Mu Zhou
2014-01-01
This paper proposes a 2-dimensional (2D) maximum entropy threshold segmentation (2DMETS) based speeded-up robust features (SURF) approach for image target matching. First, based on the gray level of each pixel and the average gray level of its neighboring pixels, we construct a 2D gray histogram. Second, through target and background segmentation, we localize the feature points at the interest points which have local extrema of box filter responses. Third, from the 2D Haar wavelet responses, we generate the 64-dimensional (64D) feature point descriptor vectors. Finally, we perform the target matching according to comparisons of the 64D feature point descriptor vectors. Experimental results show that our proposed approach can effectively enhance the target matching performance while preserving real-time capacity.
SEBA SUSAN; NANDINI AGGARWAL; SHEFALI CHAND; AYUSH GUPTA
2016-12-01
In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise and non-distinct intensities in the image. The variable-length coding techniques, when applied to cropped facial images of subjects with different facial expressions, highlight the set of low-probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
Xintao Xia
2013-07-01
This study proposes a bootstrap maximum-entropy method to evaluate the uncertainty of the starting torque of a slewing bearing. Addressing the variation coefficient of the slewing bearing starting torque under load, the probability density function, the estimated true value and the variation domain are obtained through experimental investigation of the starting torque under various loads. The probability density function is found to vary in shape, scale and location. In addition, the estimated true value and the variation domain decrease with increasing load, indicating improving stability and reliability of the starting friction torque. Finally, a sensitive spot exists where the estimated true value and the variation domain rise abnormally, showing a fluctuation in the immunity and a degradation in the stability and reliability of the starting friction torque.
Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach
Rodman, Leiba [Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795 (United States); Spitkovsky, Ilya M., E-mail: ims2@nyu.edu, E-mail: ilya@math.wm.edu [Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795 (United States); Division of Science and Mathematics, New York University Abu Dhabi, Saadiyat Island, P.O. Box 129188, Abu Dhabi (United Arab Emirates); Szkoła, Arleta, E-mail: szkola@mis.mpg.de; Weis, Stephan, E-mail: maths@stephan-weis.info [Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, D-04103 Leipzig (Germany)
2016-01-15
We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.
Maximum-entropy weak lens reconstruction improved methods and application to data
Marshall, P J; Gull, S F; Bridle, S L
2002-01-01
We develop the maximum-entropy weak shear mass reconstruction method presented in earlier papers by taking each background galaxy image shape as an independent estimator of the reduced shear field and incorporating an intrinsic smoothness into the reconstruction. The characteristic length scale of this smoothing is determined by Bayesian methods. Within this algorithm the uncertainties due to the intrinsic distribution of galaxy shapes are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures can be calculated with corresponding uncertainties. We apply this method to two clusters taken from N-body simulations using mock observations corresponding to Keck LRIS and mosaiced HST WFPC2 fields. We demonstrate that the Bayesian choice of smoothing length is sensible and that masses within apertures (including one on a filamentary structure) are reliable. We apply the method to data taken on the cluster MS1054-03 using the Keck LRIS (Clowe et al. 2000) and HST (Hoekstra e...
Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed
Pitchford, J.; Strager, M.; Riley, A.; Lin, L.; Anderson, J.
2015-03-01
We used maximum entropy to model streambank erosion potential (SEP) in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs), and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.
High resolution VLBI polarisation imaging of AGN with the Maximum Entropy Method
Coughlan, Colm P
2016-01-01
Radio polarisation images of the jets of Active Galactic Nuclei (AGN) can provide a deep insight into the launching and collimation mechanisms of relativistic jets. However, even at VLBI scales, resolution is often a limiting factor in the conclusions that can be drawn from observations. The Maximum Entropy Method (MEM) is a deconvolution algorithm that can outperform the more common CLEAN algorithm in many cases, particularly when investigating structures present on scales comparable to or smaller than the nominal beam size with "super-resolution". A new implementation of the MEM suitable for single- or multiple-wavelength VLBI polarisation observations has been developed and is described here. Monte Carlo simulations comparing the performances of CLEAN and MEM at reconstructing the properties of model images are presented; these demonstrate the enhanced reliability of MEM over CLEAN when images of the fractional polarisation and polarisation angle are constructed using convolving beams that are appreciably ...
Matsumoto, Hisanori; Tokiwano, Kazuo; Hosoi, Hirotaka; Sueoka, Kazuhisa; Mukasa, Koichi
2002-05-01
We present a new technique for the restoration of scanning tunneling microscopy (STM) images, which is a two-dimensional extension of a recently developed statistical approach based on the one-dimensional least-squares method (LSM). An STM image is regarded as a realization of a stochastic process and assumed to be a composition of an underlying image and noise. We express the underlying image in terms of a two-dimensional generalized trigonometric polynomial suitable for representing the atomic protrusions in STM images. The optimization of the polynomial is performed by the two-dimensional LSM combined with the power spectral density function estimated by means of the maximum entropy method (MEM) iterative algorithm for two-dimensional signals. The restored images are obtained as the optimum least-squares fitting polynomial which is a continuous surface. We apply this technique to modeled and actual STM data. Results show that the present method yields a reasonable restoration of STM images.
Imaging VLBI polarimetry data from Active Galactic Nuclei using the Maximum Entropy Method
Coughlan Colm P.
2013-12-01
Mapping the relativistic jets emanating from AGN requires the use of a deconvolution algorithm to account for the effects of missing baseline spacings. The CLEAN algorithm is the most commonly used algorithm in VLBI imaging today and is suitable for imaging polarisation data. The Maximum Entropy Method (MEM) is presented as an alternative with some advantages over the CLEAN algorithm, including better spatial resolution and a more rigorous and unbiased approach to deconvolution. We have developed a MEM code suitable for deconvolving VLBI polarisation data. Monte Carlo simulations investigating the performance of CLEAN and the MEM code on a variety of source types are being carried out. Real polarisation (VLBA) data taken at multiple wavelengths have also been deconvolved using MEM, and several of the resulting polarisation and Faraday rotation maps are presented and discussed.
Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy
Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett
The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.
On the stability of the moments of the maximum entropy wind wave spectrum
Pena, H.G.
1983-03-01
The stability of some current wind wave parameters, computed in terms of the moments of the wave energy spectrum, has been numerically investigated as a function of the high-frequency cut-off and the degrees of freedom of the spectrum. From the Pierson-Moskowitz wave spectrum type, a sea surface profile is simulated and its wave energy spectrum is estimated by the Maximum Entropy Method (MEM). As the degrees of freedom of the MEM spectral estimation are varied, the results show much better stability of the wave parameters as compared to the classical periodogram and correlogram spectral approaches. The stability of the wave parameters as a function of the high-frequency cut-off is the same as that obtained by the classical techniques.
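The wave parameters in question are functions of the spectral moments m_n = ∫ f^n S(f) df. A short sketch with a Pierson-Moskowitz spectrum (peak frequency, cut-offs and grid chosen purely for illustration) shows why higher moments are more sensitive to the high-frequency cut-off than m_0, since the integrand's tail decays as f^(n-5):

```python
import numpy as np

def pm_spectrum(f, fp, alpha=0.0081, g=9.81):
    """Pierson-Moskowitz frequency spectrum [m^2 s]; fp is the peak frequency."""
    return alpha * g**2 * (2 * np.pi)**-4 * f**-5 * np.exp(-1.25 * (fp / f)**4)

def moment(f, S, n):
    """n-th spectral moment m_n = integral of f^n S(f) df (rectangle rule)."""
    return np.sum(f**n * S) * (f[1] - f[0])

fp = 0.1                                   # peak frequency [Hz] (assumed)
for fcut in (0.5, 1.0, 2.0):               # high-frequency cut-off [Hz]
    f = np.arange(0.02, fcut, 0.001)
    S = pm_spectrum(f, fp)
    m0, m2 = moment(f, S, 0), moment(f, S, 2)
    Hs = 4.0 * np.sqrt(m0)                 # significant wave height
    Tz = np.sqrt(m0 / m2)                  # mean zero-crossing period
    print(fcut, round(Hs, 3), round(Tz, 2))
```

Doubling the cut-off barely changes m_0 (and hence Hs) but shifts m_2 (and hence Tz) noticeably more, which is the cut-off sensitivity the abstract examines.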
XU Fu-min; XUE Hong-chao
2004-01-01
The Maximum Entropy Principle (MEP) method is elaborated, and the corresponding probability density evaluation method for a random fluctuation system is introduced; the goal of this article is to find the best-fitting method for the wave climate statistical distribution. For the first time, a new maximum entropy probability distribution (MEP distribution) expression is deduced in accordance with the second-order moment of a random process. Unlike previous fitting methods, the MEP distribution can describe the probability distribution of any random fluctuation system conveniently and reasonably. If the moments of the random signal are limited to second order, that is, if the ratio of the root-mean-square value to the mean value of the random variable is obtained from the random sample, the corresponding MEP distribution can be computed from the deduced expression. The concept of wave climate is introduced, and the MEP distribution is applied to fit the probability density distributions of significant wave height and spectral peak period. Taking the Gulf of Mexico as an example, three stations at different locations, depths and wind wave strengths are chosen in the half-closed gulf, and the significant wave height and spectral peak period distributions at each station are fitted with the MEP distribution, the Weibull distribution and the Log-normal distribution respectively. The fitted results are compared with the field observations; they show that the MEP distribution is the best fitting method, and the Weibull distribution the worst, when applied to the significant wave height and spectral peak period distributions at different locations, water depths and wind wave strengths in the Gulf. The conclusion demonstrates the feasibility and reasonability of fitting wave climate statistical distributions with the deduced MEP distributions, and furthermore proves the great potential of the MEP method to
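Maximizing entropy on [0, ∞) subject to the first two moments yields a density of the exponential-family form p(x) ∝ exp(−λ₁x − λ₂x²); the deduced MEP expression is presumably of this family. A numerical sketch with illustrative moment values (not Gulf of Mexico data), solving for the multipliers on a discrete grid:

```python
import numpy as np
from scipy.optimize import fsolve

x = np.linspace(0.0, 10.0, 4001)   # discretised support (e.g. wave height in m)
dx = x[1] - x[0]

def maxent_pdf(lams):
    """p(x) ~ exp(-l1*x - l2*x^2): the maximum-entropy form on [0, inf)
    when only the first two moments are constrained."""
    logits = -(lams[0] * x + lams[1] * x**2)
    logits -= logits.max()                  # guard against overflow
    w = np.exp(logits)
    return w / (w.sum() * dx)               # normalise numerically

def residuals(lams, m1, m2):
    """Mismatch between the distribution's moments and the sample moments."""
    p = maxent_pdf(lams)
    return [np.sum(x * p) * dx - m1, np.sum(x**2 * p) * dx - m2]

# Illustrative sample statistics: mean 1.0 m, mean square 1.5 m^2,
# i.e. an rms-to-mean ratio of sqrt(1.5) as estimated from a wave record.
m1, m2 = 1.0, 1.5
l1, l2 = fsolve(residuals, x0=[0.1, 0.1], args=(m1, m2))
p = maxent_pdf([l1, l2])
print(np.sum(x * p) * dx, np.sum(x**2 * p) * dx)   # constraints reproduced
```

Any sample's rms-to-mean ratio picks out one member of this two-parameter family, which is what makes the MEP distribution a general-purpose fit for fluctuation data.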
Maximum entropy production: can it be used to constrain conceptual hydrological models?
M. C. Westhoff
2013-08-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in the literature, generally little guidance has been given on how to apply the principle. The aim of this paper is to use the maximum power principle, which is closely related to MEP, to constrain the parameters of a simple conceptual (bucket) model. Although we had to conclude that conceptual bucket models cannot be constrained with respect to maximum power, this study sheds more light on how to use, and how not to use, the principle. Several of these issues have been correctly applied in other studies, but have not been explained or discussed as such. While other studies were based on resistance formulations, where the quantity to be optimized is a linear function of the resistance to be identified, our study shows that the approach also works for formulations that are only linear in the log-transformed space. Moreover, we showed that parameters describing process thresholds or influencing boundary conditions cannot be constrained. We furthermore conclude that, in order to apply the principle correctly, (1) the model should be physically based, i.e. fluxes should be defined as a gradient divided by a resistance; (2) the optimized flux should have a feedback on the gradient, i.e. the influence of boundary conditions on gradients should be minimal; (3) the temporal scale of the model should be chosen in such a way that the parameter that is optimized is constant over the modelling period; (4) only when the correct feedbacks are implemented can the fluxes be correctly optimized; and (5) there should be a trade-off between two or more fluxes. Although our application of the maximum power principle did
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
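For reference, the Rényi entropy H_α(p) = (1 − α)⁻¹ log Σᵢ pᵢ^α reduces to the Shannon entropy in the limit α → 1. A quick sketch verifying the limit numerically (the distribution is arbitrary):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_a(p) = log(sum p_i^a) / (1 - a), in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):              # a -> 1 limit: Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p**alpha)) / (1.0 - alpha)

p = np.array([0.5, 0.25, 0.125, 0.125])
print(renyi_entropy(p, 0.0))    # log 4: Hartley (support-size) entropy
print(renyi_entropy(p, 2.0))    # collision entropy, below Shannon
print(renyi_entropy(p, 0.999))  # ~ Shannon entropy: the a -> 1 limit
```

H_α is non-increasing in α, so the discrete and continuous maximum-Rényi-entropy principles of the paper interpolate between the support-size and min-entropy extremes as α varies.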
Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.
1981-01-01
A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and, for the first time, accounts for variability in CTM model performance. A validation analysis was performed using only noncollocated data outside a validation radius rv, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario (using ozone observations only) and contrasted with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach extracts more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the R² increase is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets Tcr comprising mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given for the distribution of cross-expectation estimation errors and for the MinMI estimation bias, its variance and its distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1
Maximum Entropy Relief Feature Weighting
张翔; 邓赵红; 王士同; 蔡及时
2011-01-01
A recent advance in Relief feature weighting techniques is that Relief can be approximately expressed as a margin maximization problem, so its distinctive properties can be investigated with the help of optimization theory. Although Relief feature weighting has been widely used, it lacks a mechanism to deal with outlier data, and how to enhance the robustness and adjustability of the algorithm in noisy environments is still not obvious. In order to enhance Relief's adjustability and robustness, by integrating the maximum entropy technique into Relief feature weighting, more robust and adaptive Relief feature weighting algorithms are investigated. First, a new margin-based objective function integrating maximum entropy is proposed within the optimization framework, where two maximum entropy terms are adopted to control the feature weights and the sample force coefficients, respectively. Then, by applying optimization theory, some useful theoretical results are derived from the proposed objective function, and a set of robust Relief feature weighting algorithms is developed for two-class data, multi-class data and online data. As demonstrated by extensive experiments on UCI benchmark data sets and gene data sets, the proposed new Relief feature weighting algorithms show better adaptability and robustness to noise and outliers.
Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics
Xu, Bin, E-mail: xubin211@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Public Administration College, Zhejiang Gongshang University, Hangzhou, 310018 (China); Zhang, Hongen, E-mail: hongen777@163.com [Department of Physics, Zhejiang University, Hangzhou, 310027 (China); Wang, Zhijian, E-mail: wangzj@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Zhang, Jianbo, E-mail: jbzhang08@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou, 310027 (China)
2012-03-19
By using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with the two-person constant sum 2×2 game in the social system. We first show that, in these competing game environments, the outcomes of human decision-making obey the principle of maximum entropy.
Highlights:
► Test the uncertainty in two-person constant sum games with experimental data.
► At the game level, the constant sum game fits the principle of maximum entropy.
► At the group level, all empirical entropy values are close to the theoretical maxima.
► The results can be different for games that are not constant sum.
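The group-level comparison of empirical entropy against the theoretical maximum can be illustrated in a few lines of Python. The outcome counts below are hypothetical, not the paper's experimental data:

```python
import numpy as np

def empirical_entropy(counts):
    """Shannon entropy (bits) of the empirical outcome distribution."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical outcome counts for the four cells of a 2x2 constant-sum game.
counts = [52, 48, 47, 53]
H = empirical_entropy(counts)
H_max = np.log2(4)   # theoretical maximum: uniform play over the 4 cells
assert H <= H_max
```

Near-uniform counts like these give an entropy very close to the 2-bit maximum, which is the pattern the highlights report at the group level.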
Fraternali, Fernando; Marcelli, Gianluca
2011-01-01
We present a meshfree method for the curvature estimation of membrane networks based on the Local Maximum Entropy approach recently presented in (Arroyo and Ortiz, 2006). A continuum regularization of the network is carried out by balancing the maximization of the information entropy corresponding to the nodal data, with the minimization of the total width of the shape functions. The accuracy and convergence properties of the given curvature prediction procedure are assessed through numerical applications to benchmark problems, which include coarse grained molecular dynamics simulations of the fluctuations of red blood cell membranes (Marcelli et al., 2005; Hale et al., 2009). We also provide an energetic discrete-to-continuum approach to the prediction of the zero-temperature bending rigidity of membrane networks, which is based on the integration of the local curvature estimates. The Local Maximum Entropy approach is easily applicable to the continuum regularization of fluctuating membranes, and the predict...
Chavanis, Pierre-Henri
2014-01-01
In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs, the energy, and that increases monotonically the mixing entropy ($H$-theorem). Furthermore, the equation for the coarse-grained vorticity dissipates monotonically all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion...
Jiang, Yulin; Li, Bin; Chen, Jie
2016-01-01
The flow velocity distribution in a partially filled circular pipe was investigated in this paper. The velocity profile differs from that of full-pipe flow, since the flow is driven by gravity rather than pressure. The findings show that the position of maximum velocity lies below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by wall friction and the bottom slope of the pipe, and the velocity variation is similar to that in a full pipe. Near the free water surface, however, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the velocity variation is relatively small. A literature search shows that relatively little research has addressed practical expressions for the velocity distribution in partially filled circular pipes. An expression for the two-dimensional (2D) velocity distribution in partially filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared in light of the fluid mechanics involved, and a non-extensive entropy was chosen. A new cumulative distribution function (CDF) of the velocity in a partially filled circular pipe, expressed in terms of flow depth, was hypothesized. Combined with this CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity was analyzed. The experimental results show that the velocity values estimated by the principle of maximum Tsallis wavelet entropy are in good agreement with measured values.
Hsia, Wei-Shen
1987-01-01
A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification, based on the Maximum Entropy (ME) principle adopted in Hyland's method, is presented. A computer program was implemented for this purpose using ORACLS. Four models were then tested and the results are presented.
Vries, de R.Y.; Briels, W.J.; Feil, D.; Velde, te G.; Baerends, E.J.
1996-01-01
In 1990, Sakata and Sato applied the maximum entropy method (MEM) to a set of structure factors measured earlier by Saka and Kato with the Pendellösung method. They found the presence of non-nuclear attractors, i.e., maxima in the density between two bonded atoms. We applied the MEM to a limited set of
Liu, Jian; Miller, William H.
2008-08-01
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good at short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension, and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen to be excellent already at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made of how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when they are used as priors.
Reppert, Michael; Tokmakoff, Andrei
The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary-structure information. More recently, however, the method has been adapted to study structure at the local level through the incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps, which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.
Asti, Lorenzo; Uguzzoni, Guido; Marcatili, Paolo; Pagnani, Andrea
2016-04-01
The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.
A strong test of a maximum entropy model of trait-based community assembly.
Shipley, Bill; Laughlin, Daniel C; Sonnier, Grégory; Otfinowski, Rafael
2011-02-01
We evaluate the predictive power and generality of Shipley's maximum entropy (maxent) model of community assembly in the context of 96 quadrats over a 120 km² area having a large (79 species) pool and strong gradients. Quadrats were sampled in the herbaceous understory of ponderosa pine forests in the Coconino National Forest, Arizona, U.S.A. The maxent model accurately predicted species relative abundances when observed community-weighted mean trait values were used as model constraints. Although only 53% of the variation in observed relative abundances was associated with a combination of 12 environmental variables, the maxent model based only on the environmental variables provided highly significant predictive ability, accounting for 72% of the variation that was possible given these environmental variables. This predictive ability largely surpassed that of nonmetric multidimensional scaling (NMDS) or detrended correspondence analysis (DCA) ordinations. Using cross-validation with 1000 independent runs, the median correlation between observed and predicted relative abundances was 0.560 (the 2.5% and 97.5% quantiles were 0.045 and 0.825). The qualitative predictions of the model were also noteworthy: dominant species were correctly identified in 53% of the quadrats, 83% of rare species were correctly predicted to have a relative abundance of < 0.05, and the median predicted relative abundance of species actually absent from a quadrat was 5 × 10⁻⁵.
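The core computation in a maxent model of this kind, predicting relative abundances of the form p_i ∝ exp(λ t_i) from a community-weighted mean trait constraint, can be sketched as a one-trait toy version. This is not Shipley's full implementation, and the trait values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import brentq

def maxent_abundances(traits, cwm):
    """Maximum-entropy relative abundances p_i proportional to exp(lam * t_i),
    subject to sum_i p_i * t_i = cwm (the community-weighted mean)."""
    t = np.asarray(traits, dtype=float)

    def moment_gap(lam):
        w = np.exp(lam * (t - t.mean()))   # centering for numerical stability
        p = w / w.sum()
        return np.dot(p, t) - cwm

    # The gap is monotone in lam, so a simple bracketed root-find suffices.
    lam = brentq(moment_gap, -50.0, 50.0)
    w = np.exp(lam * (t - t.mean()))
    return w / w.sum()

traits = [1.0, 2.0, 3.0, 4.0]          # e.g. one trait value per species
p = maxent_abundances(traits, cwm=3.2)
assert np.isclose(np.dot(p, traits), 3.2)
assert np.isclose(p.sum(), 1.0)
```

With several traits the same construction carries one multiplier per constraint; the one-dimensional root-find then becomes a small convex optimization.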
Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit
2012-01-01
One of the available tools for mapping geographical distributions and potential suitable habitats is species distribution modeling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal threshold of bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear's potential suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population.
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-06-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the land-use regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for potential contributing factors in the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among observations of ambient pollutants. This study assesses the performance of the land-use regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the land-use regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical spatial trends of PM concentration based on land-use regression, (b) the spatiotemporal dependence among PM observations, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.
Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L
2015-08-18
Radon (²²²Rn) is a naturally occurring, chemically inert, colorless and odorless radioactive gas produced from the decay of uranium (²³⁸U), which is ubiquitous in rocks and soils worldwide. Inhalation exposure to ²²²Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater, however, contributes to both the inhalation and ingestion routes. A land use regression (LUR) model for groundwater ²²²Rn with anisotropic geological and ²³⁸U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated ²²²Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater ²²²Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater ²²²Rn achieves a leave-one-out cross-validation r² of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled ²²²Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock ²³⁸U inferred from overlying stream-sediment ²³⁸U concentrations, which constitute a widely distributed and consistently analyzed point-source data set.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models run at larger scales that neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 × 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Vidal-García, Francisca; Serio-Silva, Juan Carlos
2011-07-01
We developed potential distribution models for three tropical rain forest primate species of southern Mexico: the black howler monkey (Alouatta pigra), the mantled howler monkey (Alouatta palliata), and the spider monkey (Ateles geoffroyi). To do so, we applied the maximum entropy algorithm of the ecological niche modeling program MaxEnt. For each species we used occurrence records from scientific collections and from published and unpublished sources, together with the 19 environmental coverage variables related to precipitation and temperature from WorldClim, to develop the models. The predicted distribution of A. pigra was most strongly associated with the mean temperature of the warmest quarter (23.6%), whereas the potential distributions of A. palliata and A. geoffroyi were most strongly associated with precipitation during the coldest quarter (52.2% and 34.3%, respectively). The potential distribution of A. geoffroyi is broader than that of the Alouatta spp. The areas with the greatest probability of presence of A. pigra and A. palliata are strongly associated with riparian vegetation, whereas the presence of A. geoffroyi is more strongly associated with rain forest. Our most significant contribution is the identification of areas with a high probability of presence of these primate species, information that can be applied to planning future studies and to establishing criteria for the creation of primate conservation areas in Mexico.
Maximum entropy estimation of glutamate and glutamine in MR spectroscopic imaging.
Rathi, Yogesh; Ning, Lipeng; Michailovich, Oleg; Liao, HuiJun; Gagoski, Borjan; Grant, P Ellen; Shenton, Martha E; Stern, Robert; Westin, Carl-Fredrik; Lin, Alexander
2014-01-01
Magnetic resonance spectroscopic imaging (MRSI) is often used to estimate the concentration of several brain metabolites. Abnormalities in these concentrations can indicate specific pathology, which can be quite useful in understanding the disease mechanism underlying those changes. Due to their higher concentrations, metabolites such as N-acetylaspartate (NAA), Creatine (Cr) and Choline (Cho) can be readily estimated using standard Fourier transform techniques. However, metabolites such as Glutamate (Glu) and Glutamine (Gln) occur in significantly lower concentrations, and their resonance peaks are very close to each other, making it difficult to accurately estimate their concentrations separately. In this work, we propose to use the theory of 'Spectral Zooming', or high-resolution spectral analysis, to separate the Glutamate and Glutamine peaks and accurately estimate their concentrations. The method works by estimating a unique power spectral density, which corresponds to the maximum entropy solution of a zero-mean stationary Gaussian process. We demonstrate our estimation technique on several physical phantom data sets as well as on in vivo brain spectroscopic imaging data. The proposed technique is quite general and can be used to estimate the concentration of any other metabolite of interest.
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
White, Ethan P; Thibault, Katherine M; Xiao, Xiao
2012-08-01
The species abundance distribution (SAD) is one of the most studied patterns in ecology due to its potential insights into commonness and rarity, community assembly, and patterns of biodiversity. It is well established that communities are composed of a few common and many rare species, and numerous theoretical models have been proposed to explain this pattern. However, no attempt has been made to determine how well these theoretical characterizations capture observed taxonomic and global-scale spatial variation in the general form of the distribution. Here, using data of a scope unprecedented in community ecology, we show that a simple maximum entropy model produces a truncated log-series distribution that can predict between 83% and 93% of the observed variation in the rank abundance of species across 15 848 globally distributed communities including birds, mammals, plants, and butterflies. This model requires knowledge of only the species richness and total abundance of the community to predict the full abundance distribution, which suggests that these factors are sufficient to understand the distribution for most purposes. Since geographic patterns in richness and abundance can often be successfully modeled, this approach should allow the distribution of commonness and rarity to be characterized, even in locations where empirical data are unavailable.
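The prediction described above, a truncated log-series determined solely by species richness S and total abundance N, can be reproduced in a few lines: the maximum-entropy solution has the form φ(n) ∝ x^n / n, with x fixed by the constraint that the mean abundance per species equals N/S. This is a schematic sketch of the standard maximum-entropy recipe, not the authors' code:

```python
import numpy as np
from scipy.optimize import brentq

def logseries_sad(S, N):
    """Truncated log-series SAD from richness S and total abundance N:
    phi(n) proportional to x**n / n on n = 1..N, with x = exp(-beta)
    chosen so that the mean abundance per species equals N / S."""
    n = np.arange(1, N + 1)

    def mean_gap(beta):
        x = np.exp(-beta)
        w = x ** n / n                 # un-normalized log-series weights
        return np.dot(n, w) / w.sum() - N / S

    beta = brentq(mean_gap, 1e-8, 5.0)  # bracketed root-find for beta
    w = np.exp(-beta * n) / n
    return w / w.sum()

phi = logseries_sad(S=10, N=1000)
mean_n = np.dot(np.arange(1, 1001), phi)
assert np.isclose(mean_n, 100.0, rtol=1e-3)   # mean abundance = N/S
```

Because φ(n) decreases monotonically in n, the fitted distribution automatically exhibits the "few common, many rare" shape the abstract refers to.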
Two dimensional IR-FID-CPMG acquisition and adaptation of a maximum entropy reconstruction
Rondeau-Mouro, C.; Kovrlija, R.; Van Steenberge, E.; Moussaoui, S.
2016-04-01
By acquiring the FID signal in two-dimensional TD-NMR spectroscopy, it is possible to characterize mixtures or complex samples composed of solid and liquid phases. We have developed a new sequence for this purpose, called IR-FID-CPMG, making it possible to correlate spin-lattice T1 and spin-spin T2 relaxation times, including both liquid and solid phases in samples. We demonstrate here the potential of a new algorithm for the 2D inverse Laplace transformation of IR-FID-CPMG data, based on an adapted maximum entropy reconstruction that combines the standard decreasing exponential decay function with an additional term drawn from Abragam's FID function. The results show that the proposed IR-FID-CPMG sequence and its related inversion model allow accurate characterization and quantification of both solid and liquid phases in multiphasic and compartmentalized systems. Moreover, they make it possible to distinguish between solid phases having different T1 relaxation times and to highlight cross-relaxation phenomena.
Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng
2016-10-01
The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The surface fluxes predicted by the MEP model automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation and temperature from the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System (NASA CERES), supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, which does not use the MERRA reanalysis data as model input, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.
Guo-Jheng Yang
2013-08-01
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of creators, a watermark can be implanted in some representative text or totem. Because all media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy of 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on making small changes to the control parameters of Arnold's cat map and to the initial value of the chaotic system in order to generate different chaotic sequences. Thus, the current time is used not only to make the encryption more stringent but also to enhance the security of the digital media.
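The chaotic keystream generator described here is easy to sketch. The snippet below iterates the logistic map at u = 4 and demonstrates the sensitivity to the initial value that the scheme relies on; the initial values are arbitrary examples, not parameters from the paper:

```python
def logistic_sequence(x0, length, u=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = u * x_k * (1 - x_k).
    At u = 4 the map is fully chaotic on (0, 1), where its entropy rate
    is maximal (1 bit per iteration)."""
    seq = []
    x = x0
    for _ in range(length):
        x = u * x * (1.0 - x)
        seq.append(x)
    return seq

# Tiny changes in the initial value produce entirely different sequences,
# which is what makes the keystream hard to reproduce without the key.
a = logistic_sequence(0.3, 50)
b = logistic_sequence(0.3 + 1e-10, 50)
assert all(0.0 <= v <= 1.0 for v in a)
assert max(abs(x - y) for x, y in zip(a[-10:], b[-10:])) > 0.01
```

Deriving x0 (and the cat-map parameters) from a key plus the current time, as the paper suggests, makes each embedding use a fresh, irreproducible sequence.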
Jayajit Das
2015-07-01
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
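The forward example cited in the abstract (the distribution of X = Y1 + Y2 for independent uniforms) can be checked numerically by convolving the two densities; the grid resolution below is an arbitrary choice:

```python
import numpy as np

# Y1, Y2 independent and uniform on [0, 1]; the density of X = Y1 + Y2
# is the convolution of the two uniform densities, i.e. the triangular
# density on [0, 2] with its mode at x = 1.
m = 2000                           # grid points per unit interval
dy = 1.0 / m
p_y = np.full(m, 1.0)              # uniform density on [0, 1]
p_x = np.convolve(p_y, p_y) * dy   # density of the sum on [0, 2]
x = np.arange(len(p_x)) * dy

assert np.isclose(p_x.sum() * dy, 1.0, atol=1e-2)   # integrates to 1
assert abs(x[np.argmax(p_x)] - 1.0) < 1e-2          # mode at x = 1
```

The inverse direction, recovering Q(x) from P(y) when the map is non-unique, is exactly where the MaxEnt estimate proposed in the paper takes over.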
Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks
Paul H. Evangelista
2011-05-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
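The iterative "most dissimilar site" step can be approximated with a greedy farthest-point rule over standardized environmental vectors. This is a simplified, hypothetical analogue rather than the authors' Maxent workflow, and the candidate sites and variable values below are invented for illustration.

```python
import math

def dissimilarity(a, b):
    """Euclidean distance between two standardized environmental vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_sites(candidates, k):
    """Greedily select k sites, each maximally dissimilar to those chosen."""
    chosen = [candidates[0]]              # seed with an arbitrary first site
    while len(chosen) < k:
        # Pick the candidate whose nearest chosen site is farthest away.
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(dissimilarity(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

# Each tuple: (mean temperature, precipitation, elevation), already standardized.
sites = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (2.0, 1.5, 0.0),
         (0.0, 2.0, 2.0), (2.1, 1.4, 0.1)]
print(select_sites(sites, 3))
```

Note the near-duplicate sites (0.1, 0.0, 0.1) and (2.1, 1.4, 0.1) are passed over in favour of environmentally distinct ones, mirroring the intent of the selection loop described above.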
A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
Viviana Meruane
2014-05-01
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data is processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios.
Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.
2012-07-01
Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the area of study was geometrically and atmospherically corrected and used as input in a Minimum Noise Fraction (MNF) procedure and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our models' predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and facility of movement.
Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid
2015-12-01
This study investigates the relationship between energy consumption and carbon dioxide emissions in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emissions in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require the use of conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emissions in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.
Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.
Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick
2013-01-01
Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contaminations on qualitative or semi-quantitative bases. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.
Using maximum entropy model to predict protein secondary structure with single sequence.
Ding, Yong-Sheng; Zhang, Tong-Liang; Gu, Quan; Zhao, Pei-Ying; Chou, Kuo-Chen
2009-01-01
Prediction of protein secondary structure has been pursued by many previous investigators, yet it is still worth revisiting owing to its importance in protein science. Several studies indicate that knowledge of protein structural classes can provide useful information towards the determination of protein secondary structure. In particular, the performance of recently developed prediction algorithms has improved rapidly by incorporating homologous multiple sequence alignment information. Unfortunately, this kind of information is not available for a significant number of proteins. In view of this, it is necessary to develop a method based on the query protein sequence alone, the so-called single-sequence method. Here, we propose a novel single-sequence approach in which various kinds of contextual information are taken into account and a maximum entropy model classifier is used as the prediction engine. As a demonstration, cross-validation tests have been performed with the new method on datasets containing proteins from different structural classes, and the results thus obtained are quite promising, indicating that the new method may become a useful tool in protein science or at least play a complementary role to the existing protein secondary structure prediction methods.
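A maximum entropy classifier of the kind used as the prediction engine above is equivalent to multinomial logistic regression. The following toy sketch (not the authors' implementation; the two-dimensional features standing in for contextual information are invented) trains such a classifier with stochastic gradient descent on three classes, loosely analogous to helix/strand/coil labels.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train(X, y, classes=3, lr=0.5, epochs=500):
    """Fit a maximum entropy (multinomial logistic) model by per-sample SGD."""
    d = len(X[0])
    W = [[0.0] * d for _ in range(classes)]
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = softmax([sum(w * x for w, x in zip(W[c], xi))
                         for c in range(classes)])
            for c in range(classes):
                g = p[c] - (1.0 if c == yi else 0.0)   # cross-entropy gradient
                W[c] = [w - lr * g * x for w, x in zip(W[c], xi)]
    return W

def predict(W, xi):
    scores = [sum(w * x for w, x in zip(Wc, xi)) for Wc in W]
    return scores.index(max(scores))

# Invented, linearly separable 2-feature examples for three classes.
X = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9), (-1.0, -1.0), (-0.9, -1.1)]
y = [0, 0, 1, 1, 2, 2]
W = train(X, y)
print([predict(W, xi) for xi in X])  # should recover [0, 0, 1, 1, 2, 2]
```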
STOCHASTIC ANALYSIS OF RANDOM AD HOC NETWORKS WITH MAXIMUM ENTROPY DEPLOYMENTS
Thomas Bourgeois
2014-10-01
In this paper, we present the first stochastic analysis of the link performance of an ad hoc network modelled by a single homogeneous Poisson point process (HPPP). According to the maximum entropy principle, the single HPPP model is mathematically the best model for random deployments with a given node density. However, previous works in the literature only consider a modified model which shows a discrepancy in the interference distribution with respect to the more suitable single HPPP model. The main contributions of this paper are as follows. (1) It presents a new mathematical framework leading to closed-form expressions for the probability of success of both one-way transmissions and handshakes for a deployment modelled by a single HPPP. Our approach, based on stochastic geometry, can be extended to complex protocols. (2) From the obtained results, all confirmed by comparison to simulated data, optimal PHY and MAC layer parameters are determined and the relations between them are described in detail. (3) The influence of the routing protocol on handshake performance is taken into account in a realistic manner, leading to confirmation of the intuitive result that the effect of imperfect feedback on the probability of success of a handshake is only negligible for transmissions to the first neighbour node.
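The single-HPPP setting can be illustrated with a Monte Carlo estimate of the one-way success probability. This is a hypothetical sketch, not the paper's framework: interferers form an HPPP on a disk, fading is Rayleigh, the model is interference-limited (noise ignored), and all parameter values are made up.

```python
import math
import random

random.seed(1)

def sample_poisson(lam):
    """Knuth's method; adequate for the modest means used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def success_probability(density, link_dist=1.0, alpha=4.0, theta=1.0,
                        radius=20.0, trials=1500):
    """Monte Carlo estimate of P(SIR >= theta) for a receiver at the disk centre."""
    area = math.pi * radius ** 2
    wins = 0
    for _ in range(trials):
        signal = -math.log(random.random()) * link_dist ** -alpha  # Rayleigh fading
        interference = 0.0
        for _ in range(sample_poisson(density * area)):
            r = radius * math.sqrt(random.random())   # uniform position in the disk
            interference += -math.log(random.random()) * r ** -alpha
        if interference == 0.0 or signal / interference >= theta:
            wins += 1
    return wins / trials

p_sparse = success_probability(0.001)  # few interferers per unit area
p_dense = success_probability(0.1)     # many interferers per unit area
print(p_sparse, p_dense)
```

As expected, the success probability falls as the deployment density rises; the paper derives closed-form counterparts of this kind of estimate.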
Computational design of hepatitis C vaccines using maximum entropy models and population dynamics
Hart, Gregory; Ferguson, Andrew
Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.
On the relevance of the maximum entropy principle in non-equilibrium statistical mechanics
Auletta, Gennaro; Rondoni, Lamberto; Vulpiani, Angelo
2017-07-01
At first glance, the maximum entropy principle (MEP) apparently allows us to derive, or justify in a simple way, fundamental results of equilibrium statistical mechanics. Because of this, a school of thought considers the MEP a powerful and elegant way to make predictions in physics and other disciplines, rather than a useful technical tool like others in statistical physics. From this point of view the MEP appears to be an alternative and more general predictive method than the traditional ones of statistical physics. Actually, careful inspection shows that such success is due to a series of fortunate facts that characterize the physics of equilibrium systems, but which are absent in situations not described by Hamiltonian dynamics, or generically in non-equilibrium phenomena. Here we discuss several important examples in non-equilibrium statistical mechanics in which the MEP leads to incorrect predictions, proving that it does not have a predictive nature. We conclude that, in these paradigmatic examples, an approach that uses a detailed analysis of the relevant aspects of the dynamics cannot be avoided.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng
2017-09-01
The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The surface fluxes predicted by the MEP model automatically balance the surface energy budgets at all time and space scales without explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation and temperature from the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux, which is not available from existing global reanalysis products.
Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.
O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao
2017-07-01
Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
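The MaxEnt step the authors describe, maximizing entropy subject to constraints given by sufficient statistics, can be sketched for the simplest case of a single mean-abundance constraint. The state space, range, and target mean below are invented for illustration; the MaxEnt solution then has the exponential form p(n) ∝ exp(−λn), with the multiplier λ found numerically rather than in closed form.

```python
import math

def maxent_given_mean(target_mean, N=100):
    """MaxEnt distribution on n = 1..N with a fixed mean: p(n) ∝ exp(-lam * n)."""
    def mean_for(lam):
        w = [math.exp(-lam * n) for n in range(1, N + 1)]
        z = sum(w)
        return sum(n * wn for n, wn in zip(range(1, N + 1), w)) / z

    # For a target below the uniform mean (N + 1) / 2, the multiplier is >= 0;
    # the mean decreases monotonically in lam, so bisection applies.
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(-lam * n) for n in range(1, N + 1)]
    z = sum(w)
    return [wn / z for wn in w]

p = maxent_given_mean(5.0)   # hypothetical mean-abundance constraint
print(sum(n * pn for n, pn in zip(range(1, 101), p)))  # ≈ 5.0
```

Comparing a mechanistic model's predictions against this constrained-but-otherwise-maximally-agnostic baseline is the null-model role described above.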
Mona Nazeri
Species distribution models are among the available tools for mapping the geographical distribution and potential suitable habitats of a species. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan sun bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bears' potential suitable habitats. Assuming that the predicted suitability map covers the sun bears' actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population.
Fujino, Akinori; Ueda, Naonori; Saito, Kazumi
2008-03-01
This paper presents a method for designing semi-supervised classifiers trained on labeled and unlabeled samples. We focus on probabilistic semi-supervised classifier design for multi-class and single-labeled classification problems, and propose a hybrid approach that takes advantage of generative and discriminative approaches. In our approach, we first consider a generative model trained by using labeled samples and introduce a bias correction model, where these models belong to the same model family, but have different parameters. Then, we construct a hybrid classifier by combining these models based on the maximum entropy principle. To enable us to apply our hybrid approach to text classification problems, we employed naive Bayes models as the generative and bias correction models. Our experimental results for four text data sets confirmed that the generalization ability of our hybrid classifier was much improved by using a large number of unlabeled samples for training when there were too few labeled samples to obtain good performance. We also confirmed that our hybrid approach significantly outperformed generative and discriminative approaches when the performance of the generative and discriminative approaches was comparable. Moreover, we examined the performance of our hybrid classifier when the labeled and unlabeled data distributions were different.
Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.
Lorenzo Asti
2016-04-01
The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.
Maximum entropy inference of seabed attenuation parameters using ship radiated broadband noise.
Knobles, D P
2015-12-01
The received acoustic field generated by a single passage of a research vessel on the New Jersey continental shelf is employed to infer probability distributions for the parameter values representing the frequency dependence of the seabed attenuation and the source levels of the ship. The statistical inference approach employed in the analysis is a maximum entropy methodology. The average value of the error function, needed to uniquely specify a conditional posterior probability distribution, is estimated with data samples from time periods in which the ship-receiver geometry is dominated by either the stern or bow aspect. The existence of ambiguities between the source levels and the environmental parameter values motivates an attempt to partially decouple these parameter values. The main result is the demonstration that parameter values for the attenuation (α and the frequency exponent), the sediment sound speed, and the source levels can be resolved through a model space reduction technique. The results of this multi-step statistical inference approach, developed for ship-radiated noise, are then tested by processing towed-source data over the same bandwidth and source track to estimate continuous-wave source levels that were measured independently with a reference hydrophone on the tow body.
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
Ecosystem functioning and maximum entropy production: a quantitative test of hypotheses.
Meysman, Filip J R; Bruers, Stijn
2010-05-12
The idea that entropy production puts a constraint on ecosystem functioning is quite popular in ecological thermodynamics. Yet, until now, such claims have received little quantitative verification. Here, we examine three 'entropy production' hypotheses that have been put forward in the past. The first states that increased entropy production serves as a fingerprint of living systems. The other two hypotheses invoke stronger constraints. The state selection hypothesis states that when a system can attain multiple steady states, the stable state will show the highest entropy production rate. The gradient response principle requires that when the thermodynamic gradient increases, the system's new stable state should always be accompanied by a higher entropy production rate. We test these three hypotheses by applying them to a set of conventional food web models. Each time, we calculate the entropy production rate associated with the stable state of the ecosystem. This analysis shows that the first hypothesis holds for all the food webs tested: the living state always shows increased entropy production relative to the abiotic state. In contrast, the state selection and gradient response hypotheses break down when the food web incorporates more than one trophic level, indicating that they are not generally valid.
Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.
2015-12-01
Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge for traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB) method. MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows plot-scale quantification of evaporation and tree-scale quantification of transpiration. This study confirms for the first time that MEP-ET, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations, because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
High resolution VLBI polarization imaging of AGN with the maximum entropy method
Coughlan, Colm P.; Gabuzda, Denise C.
2016-12-01
Radio polarization images of the jets of Active Galactic Nuclei (AGN) can provide a deep insight into the launching and collimation mechanisms of relativistic jets. However, even at VLBI scales, resolution is often a limiting factor in the conclusions that can be drawn from observations. The maximum entropy method (MEM) is a deconvolution algorithm that can outperform the more common CLEAN algorithm in many cases, particularly when investigating structures present on scales comparable to or smaller than the nominal beam size with `super-resolution'. A new implementation of the MEM suitable for single- or multiple-wavelength VLBI polarization observations has been developed and is described here. Monte Carlo simulations comparing the performances of CLEAN and MEM at reconstructing the properties of model images are presented; these demonstrate the enhanced reliability of MEM over CLEAN when images of the fractional polarization and polarization angle are constructed using convolving beams that are appreciably smaller than the full CLEAN beam. The results of using this new MEM software to image VLBA observations of the AGN 0716+714 at six different wavelengths are presented, and compared to corresponding maps obtained with CLEAN. MEM and CLEAN maps of Stokes I, the polarized flux, the fractional polarization and the polarization angle are compared for convolving beams ranging from the full CLEAN beam down to a beam one-third of this size. MEM's ability to provide more trustworthy polarization imaging than a standard CLEAN-based deconvolution when convolving beams appreciably smaller than the full CLEAN beam are used is discussed.
Rui A. P. Perdigão
2012-06-01
The application of the Maximum Entropy (ME) principle leads to a minimum of the Mutual Information (MI), I(X,Y), between random variables X and Y, which is compatible with prescribed joint expectations and given ME marginal distributions. A sequence of sets of joint constraints leads to a hierarchy of lower MI bounds increasingly approaching the true MI. In particular, using standard bivariate Gaussian marginal distributions, it allows for the MI decomposition into two positive terms: the Gaussian MI (I_g), depending upon the Gaussian correlation or the correlation between ‘Gaussianized variables’, and a non-Gaussian MI (I_ng), coinciding with joint negentropy and depending upon nonlinear correlations. Joint moments of a prescribed total order p are bounded within a compact set defined by Schwarz-like inequalities, where I_ng grows from zero at the ‘Gaussian manifold’, where moments are those of Gaussian distributions, towards infinity at the set’s boundary, where a deterministic relationship holds. Sources of joint non-Gaussianity have been systematized by estimating I_ng between the input and output of a nonlinear synthetic channel contaminated by multiplicative and non-Gaussian additive noises for a full range of signal-to-noise ratio (snr) variances. We have studied the effect of varying snr on I_g and I_ng under several signal/noise scenarios.
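For reference, the Gaussian term of this decomposition has a simple closed form for a bivariate Gaussian with correlation coefficient ρ: I_g = −½ ln(1 − ρ²), in nats. A small sketch evaluating it:

```python
import math

def gaussian_mi(rho):
    """Mutual information (nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * math.log(1.0 - rho * rho)

for rho in (0.0, 0.5, 0.9):
    print(rho, round(gaussian_mi(rho), 4))
# I_g vanishes at rho = 0 and grows without bound as |rho| -> 1, mirroring the
# boundary behaviour of the compact moment set described above.
```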
Potential distribution of Xylella fastidiosa in Italy: a maximum entropy model
Luciano BOSSO
2016-05-01
Full Text Available Species distribution models may provide realistic scenarios to explain the influence of bioclimatic variables in the context of emerging plant pathogens. Xylella fastidiosa is a xylem-limited Gram-negative bacterium causing severe diseases in many plant species. We developed a maximum entropy model for X. fastidiosa in Italy. Our objectives were to carry out a preliminary analysis of the species’ potential geographical distribution and determine which eco-geographical variables may favour its presence in other Italian regions besides Apulia. The analysis of single-variable contributions showed that precipitation of the driest (40.3%) and wettest (30.4%) months were the main factors influencing model performance. Altitude, precipitation of warmest quarter, mean temperature of coldest quarter, and land cover provided a total contribution of 19.5%. Based on the model predictions, X. fastidiosa has a high probability (> 0.8) of colonizing areas characterized by: (i) low altitude (0–150 m a.s.l.); (ii) precipitation in the driest month < 10 mm, in the wettest month ranging between 80 and 110 mm, and during the warmest quarter < 60 mm; (iii) mean temperature of coldest quarter ≥ 8°C; (iv) agricultural areas comprising intensive agriculture, complex cultivation patterns, olive groves, annual crops associated with permanent crops, orchards and vineyards; forest (essentially oak woodland); and Mediterranean shrubland. Species distribution models showed a high probability of X. fastidiosa occurrence in the regions of Apulia, Calabria, Basilicata, Sicily, Sardinia and coastal areas of Campania, Lazio and southern Tuscany. Maxent models achieved excellent levels of predictive performance according to the area under the curve (AUC), true skill statistic (TSS) and minimum difference between training and testing AUC data (AUCdiff). Our study indicated that X. fastidiosa has the potential to overcome the current boundaries of distribution and affect areas of Italy outside Apulia.
Adom Giffin
2014-09-01
Full Text Available In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters, and it has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed-form solutions, but also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want not only to filter the noise from our measurements, but also to simultaneously infer the parameters of the dynamics of a nonlinear, non-equilibrium system. Although many assumptions were made throughout the paper to illustrate that the EKF and exponential smoothing are special cases of MrE, we are not “constrained” by these assumptions. In other words, MrE is completely general and can be used in broader ways.
A seqlet-based maximum entropy Markov approach for protein secondary structure prediction
DONG Qiwen; WANG Xiaolong; LIN Lei; GUAN Yi
2005-01-01
A novel method for predicting the secondary structures of proteins from the amino acid sequence is presented. Protein secondary structure seqlets, analogous to words in natural language, are extracted. These seqlets capture the relationship between the amino acid sequence and the secondary structures of proteins, and together form a protein secondary structure dictionary. More precisely, the dictionary is organism-specific. Protein secondary structure prediction is formulated as an integrated word segmentation and part-of-speech tagging problem. A word lattice is used to represent the results of the word segmentation, and a maximum entropy model is used to calculate the probability of a seqlet being tagged with a given secondary structure type. The method is Markovian in the seqlets, permitting efficient exact calculation of the posterior probability distribution over all possible word segmentations and their tags by the Viterbi algorithm. The optimal segmentations and their tags are computed as the results of protein secondary structure prediction. The method is applied to predict the secondary structures of proteins of four organisms and is compared with the PHD method. The results show that the performance of this method exceeds that of PHD by about 3.9% in Q3 accuracy and 4.6% in SOV accuracy. Combining it with locally similar protein sequences obtained by BLAST gives even better predictions. The method was also tested on the 50 CASP5 target proteins, with a Q3 accuracy of 78.9% and an SOV accuracy of 77.1%. A web server for protein secondary structure prediction has been constructed and is available at http://www.insun.hit.edu.cn:81/demos/biology/index.html.
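The decoding step in the abstract above can be illustrated with a toy Viterbi pass over a fixed seqlet segmentation. This is only a sketch: the three states (helix/strand/coil), the transition scores and the emission scores are invented for illustration and are not the paper's trained maximum entropy model.

```python
import math

STATES = ["H", "E", "C"]  # helix, strand, coil

# Toy log-probabilities (illustrative only, not trained values).
trans = {s: {t: math.log(0.8 if s == t else 0.1) for t in STATES} for s in STATES}
emit = {
    "H": {"AL": math.log(0.7), "VI": math.log(0.2), "GP": math.log(0.1)},
    "E": {"AL": math.log(0.2), "VI": math.log(0.7), "GP": math.log(0.1)},
    "C": {"AL": math.log(0.2), "VI": math.log(0.2), "GP": math.log(0.6)},
}

def viterbi(seqlets):
    """Most probable label sequence for a seqlet sequence under the toy model."""
    v = [{s: emit[s][seqlets[0]] for s in STATES}]  # log-scores at position 0
    back = []
    for obs in seqlets[1:]:
        scores, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: v[-1][p] + trans[p][s])
            scores[s] = v[-1][best] + trans[best][s] + emit[s][obs]
            ptr[s] = best
        v.append(scores)
        back.append(ptr)
    last = max(STATES, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):  # follow back-pointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(["AL", "AL", "GP", "VI", "VI"]))
```

In the paper the lattice additionally ranges over all segmentations; the same dynamic-programming recursion applies, with lattice edges in place of fixed positions.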
Kai Yan
2015-01-01
Full Text Available A predictive model for the droplet size and velocity distributions of a pressure swirl atomizer is proposed based on the maximum entropy formalism (MEF). The constraint conditions of the MEF model include the conservation laws of mass, momentum and energy. The effects of liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio on the droplet size and velocity distributions of a pressure swirl atomizer are investigated. Results show that the model based on the maximum entropy formalism predicts droplet size and velocity distributions well under different spray conditions. Liquid swirling strength, Weber number, gas-to-liquid axial velocity ratio and gas-to-liquid density ratio each have different effects on the droplet size and velocity distributions of a pressure swirl atomizer.
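The core MEF step, maximizing Shannon entropy subject to conservation constraints, can be sketched in one dimension with a single mean-diameter constraint: the maximizer has the Boltzmann-like form p_i ∝ exp(-λ d_i), with λ fixed by the constraint. The bin values and target mean below are illustrative, not the paper's multi-constraint model.

```python
import math

def maxent_pmf(values, target_mean, lo=-1.0, hi=1.0, iters=200):
    """Maximum-entropy pmf p_i ∝ exp(-lam * v_i) whose mean equals target_mean,
    found by bisection on the Lagrange multiplier lam."""
    v0 = values[0]  # shift for numerical stability; a constant factor cancels

    def mean_for(lam):
        w = [math.exp(-lam * (v - v0)) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    for _ in range(iters):  # mean_for decreases as lam grows, so bisect
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * (v - v0)) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Illustrative droplet-diameter bins (micrometres) with a prescribed mean of 40.
bins = [10, 20, 30, 40, 50, 60, 70, 80]
p = maxent_pmf(bins, 40.0)
print(round(sum(v * pi for v, pi in zip(bins, p)), 2))  # mean of the fitted pmf
```

With several constraints (mass, momentum, energy) one multiplier per constraint is needed and the scalar bisection becomes a multidimensional root-finding problem, but the exponential-family form of the solution is the same.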
Murata, T; Sato, T; Nakamura, S X
2016-01-01
The maximum entropy method is examined as a new tool for solving the ill-posed inversion problem involved in the Lorentz integral transform (LIT) method. As an example, we apply the method to the spin-dipole strength function of 4He. We show that the method can be successfully used for the inversion of the LIT, provided the LIT function is available with sufficient accuracy.
Neves, L A [Universidade Estadual Paulista, IGCE, DEMAC, Rio Claro, SP (Brazil); Oliveira, F R; Peres, F A [Faculdade de Tecnologia de Sao Jose do Rio Preto, Sao Jose do Rio Preto, SP (Brazil); Moreira, R D; Moriel, A R; De Godoy, M F [Faculdade de Medicina de Sao Jose do Rio Preto, FAMERP, Sao Jose do Rio Preto, SP (Brazil); Murta Junior, L O, E-mail: laneves@rc.unesp.br [Universidade de Sao Paulo, FFCLRP, Depto Computacao e Matematica, Ribeirao Preto (Brazil)
2011-03-01
This paper presents a method for the quantification of cellular rejection in endomyocardial biopsies of patients who underwent heart transplantation. The model is based on automatic multilevel thresholding, which employs histogram quantification techniques, histogram slope percentage analysis and the calculation of maximum entropy. The structures were quantified with the aid of the multi-scale fractal dimension and lacunarity to identify patterns of myocardial cellular rejection and thereby determine the most adequate treatment for each case.
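Maximum-entropy thresholding of the kind mentioned above is commonly done in the style of Kapur's method: pick the threshold that maximizes the summed entropies of the two resulting grey-level classes. A single-level sketch (the paper uses a multilevel variant; the toy histogram is invented):

```python
import math

def kapur_threshold(hist):
    """Single-level maximum-entropy (Kapur-style) threshold for a grayscale
    histogram: choose t maximizing the entropies of the two classes [0, t)
    and [t, end)."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])          # class probabilities
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram over 8 grey levels: dark structures vs bright background.
hist = [30, 40, 25, 2, 1, 20, 45, 35]
print(kapur_threshold(hist))
```

A multilevel version repeats the search over pairs or triples of thresholds, maximizing the sum of the per-class entropies.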
Reginatto, M.; Goldhagen, P.
1998-06-01
The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
Ashford, Oliver S; Davies, Andrew J.; Jones, Daniel O. B.
2014-01-01
Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of X...
Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in the Korean Peninsula
Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.
2015-12-01
The number of forest fires, and the accompanying human injuries and physical damage, has increased with frequent droughts. In this study, forest fire danger zones in Korea are estimated in order to predict and prepare for future forest fire hazard regions. The MaxEnt (Maximum Entropy) model, which estimates a probability distribution of occurrence, is used to map the forest fire hazard regions. The MaxEnt model was developed primarily for the analysis of species distributions, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS over the past five years (2010-2014) are used as occurrence data for the model, and meteorological, topographic and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, precipitation in the dry season, annual effective humidity, effective humidity in the dry season, and aridity index. The result was valid based on the AUC (Area Under the Curve) value (0.805), which is used to measure predictive accuracy in the MaxEnt model. The predicted forest fire locations also corresponded closely with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as the TWI (Topographic Wetness Index) and slope also contributed to forest fire occurrence. As a result, the east coast and the southern part of the Korean peninsula were predicted to have a high risk of forest fire. In contrast, high-altitude mountain areas and the west coast appeared to be relatively safe. These results are similar to those of earlier studies, indicating high forest fire risk in accessible areas and reflecting the climatic characteristics of the east and south in the dry season. To sum up, we estimated forest fire hazard zones from existing forest fire locations and environmental variables and had
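The AUC validation quoted in the abstract above can be computed without any library via the Mann-Whitney rank statistic: the probability that a randomly chosen positive site outscores a randomly chosen negative one. The scores below are made up purely for illustration.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical MaxEnt suitability scores at fire (positive) and no-fire
# (negative) sites.
print(auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.4, 0.3, 0.2]))  # → 0.9375
```

An AUC of 0.5 corresponds to a model no better than chance; the study's 0.805 indicates good discrimination between fire and no-fire locations.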
The Maximum Entropy Approach to Record Abbreviation for Optimal Record Control.
Goyal, P.
1983-01-01
Tests performed on 6,260 titles from 3 machine-readable British National Bibliography files using an entropy based technique for abbreviation of text strings for use as a control code found that more than 94 percent of the titles generated a unique seven character code. Six references and an illustrative example are appended. (EJS)
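The abstract above does not spell out the abbreviation rule, so the following is a hypothetical sketch of one entropy-guided variant: for each title, keep the seven characters that carry the most information (i.e., are rarest in the corpus), in their original order. Function names and the tiny corpus are invented.

```python
import math
from collections import Counter

def make_codes(titles, length=7):
    """Abbreviate each title to its `length` highest-information characters
    (the rarest in the corpus), kept in original order."""
    freq = Counter(c for t in titles for c in t.lower() if c.isalpha())
    total = sum(freq.values())
    info = {c: -math.log2(n / total) for c, n in freq.items()}  # bits per char
    codes = []
    for t in titles:
        chars = [(i, c) for i, c in enumerate(t.lower()) if c.isalpha()]
        # pick the `length` rarest characters, then restore original order
        picked = sorted(sorted(chars, key=lambda ic: -info[ic[1]])[:length])
        codes.append("".join(c for _, c in picked))
    return codes

titles = ["maximum entropy methods", "quantum field theory", "stochastic processes"]
print(make_codes(titles))
```

The intuition matches the paper's finding: characters with high information content discriminate between titles, so short codes built from them are very likely to be unique.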
Chavanis, Pierre-Henri, E-mail: chavanis@irsamc.ups-tlse.fr [Laboratoire de Physique Théorique, Université Paul Sabatier, 118 route de Narbonne, F-31062 Toulouse (France)
2014-12-01
In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs and the energy, and that monotonically increases the mixing entropy (H-theorem). Furthermore, the equation for the coarse-grained vorticity monotonically dissipates all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion term and a drift term, has a nice physical interpretation in terms of a selective decay principle. This sheds new light on both the MEPP and the anticipated vorticity method. (paper)
Kleidon, A
2010-05-12
The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.
A Method of LSB substitution based on image blocks and maximum entropy
Mohamed Radouane
2013-01-01
Full Text Available In this paper we introduce a digital watermarking algorithm based on embedding the watermark into sub-images with the LSB technique. The watermark is embedded into specific blocks of the host image, and the blocks are selected according to their entropy values. Simulation results show that the visual quality of both the watermarked image and the extracted watermark is good, as demonstrated by a high PSNR value.
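A minimal sketch of the block-selection idea from the abstract above, assuming grayscale pixels in a flat list, fixed-size blocks and selection of the single highest-entropy block (the paper's actual block size and traversal are not specified here):

```python
import math

def block_entropy(block):
    """Shannon entropy (bits) of the pixel-value histogram of a block."""
    hist = {}
    for v in block:
        hist[v] = hist.get(v, 0) + 1
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def embed_lsb(pixels, block_size, bits):
    """Embed `bits` into the LSBs of the block with the highest entropy;
    return the modified pixels and the chosen block index."""
    blocks = [pixels[i:i + block_size] for i in range(0, len(pixels), block_size)]
    k = max(range(len(blocks)), key=lambda i: block_entropy(blocks[i]))
    for j, b in enumerate(bits):
        blocks[k][j] = (blocks[k][j] & ~1) | b  # overwrite least significant bit
    return [v for blk in blocks for v in blk], k

# Flat 'image': a uniform block (zero entropy) followed by a varied one.
img = [128] * 8 + [10, 200, 55, 90, 240, 5, 130, 77]
stego, chosen = embed_lsb(img, 8, [1, 0, 1, 1])
print(chosen)  # → 1 (the higher-entropy block)
```

Embedding in high-entropy (busy) blocks is the usual rationale: LSB changes there are least perceptible, which supports the high PSNR reported.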
Oseen vortex as a maximum entropy state of a two dimensional fluid
Montgomery, D. C.; Matthaeus, W. H.
2011-07-01
During the last four decades, a considerable number of investigations have been carried out into the evolution of turbulence in two dimensional Navier-Stokes flows. Much of the information has come from numerical solution of the (otherwise insoluble) dynamical equations and thus has necessarily required some kind of boundary conditions: spatially periodic, no-slip, stress-free, or free-slip. The theoretical framework that has proved to be of the most predictive value has been one employing an entropy functional (sometimes called the Boltzmann entropy) whose maximization has been correlated well in several cases with the late-time configurations into which the computed turbulence has relaxed. More recently, flow in the unbounded domain has been addressed by Gallay and Wayne, who have shown a late-time relaxation to the classical Oseen vortex (also sometimes called the Lamb-Oseen vortex) for situations involving a finite net circulation or non-zero total integrated vorticity. Their proof involves powerful but difficult mathematics that might be thought to be beyond the preparation of many practicing fluid dynamicists. The purpose of the present paper is to remark that relaxation to the Oseen vortex can also be predicted in the more intuitive framework that has previously proved useful in predicting computational results with boundary conditions: that of an appropriate entropy maximization. The results make no assumption about the size of the Reynolds numbers, as long as they are finite, and the viscosity is treated as finite throughout.
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day-night temperature contrast on the extrasolar planet HD 189733b.
Cavalli, Andrea; Camilloni, Carlo; Vendruscolo, Michele
2013-03-07
In order to characterise the dynamics of proteins, a well-established method is to incorporate experimental parameters as replica-averaged structural restraints into molecular dynamics simulations. Here, we justify this approach in the case of interproton distance information provided by nuclear Overhauser effects by showing that it generates ensembles of conformations according to the maximum entropy principle. These results indicate that the use of replica-averaged structural restraints in molecular dynamics simulations, given a force field and a set of experimental data, can provide an accurate approximation of the unknown Boltzmann distribution of a system.
McDonald, James G.; Groth, Clinton P. T.
2013-09-01
The ability to predict continuum and transition-regime flows by hyperbolic moment methods offers the promise of several advantages over traditional techniques. These methods offer an extended range of physical validity as compared with the Navier-Stokes equations and can be used for the prediction of many non-equilibrium flows with a lower expense than particle-based methods. Also, the hyperbolic first-order nature of the resulting partial differential equations leads to mathematical and numerical advantages. Moment equations generated through an entropy-maximization principle are particularly attractive due to their apparent robustness; however, their application to practical situations involving viscous, heat-conducting gases has been hampered by several issues. Firstly, the lack of closed-form expressions for closing fluxes leads to numerical expense as many integrals of distribution functions must be computed numerically during the course of a flow computation. Secondly, it has been shown that there exist physically realizable moment states for which the entropy-maximizing problem on which the method is based cannot be solved. Following a review of the theory surrounding maximum-entropy moment closures, this paper shows that both of these problems can be addressed in practice, at least for a simplified one-dimensional gas, and that the resulting flow predictions can be surprisingly good. The numerical results described provide significant motivations for the extension of these ideas to the fully three-dimensional case.
Seyed Mostafa Hosseinalipour; Hadiseh Karimaei; Ehsan Movahednejad
2016-01-01
The maximum entropy principle (MEP) is one of the first methods used to predict the droplet size and velocity distributions of liquid sprays. This method needs a mean droplet diameter as an input to predict the droplet size distribution. This paper presents a new sub-model, based on the deterministic aspects of the liquid atomization process and independent of experimental data, to provide the mean droplet diameter for use in the maximum entropy formulation (MEF). For this purpose, a theoretical model based on the energy conservation law, called the energy-based model (EBM), is presented. In this approach, atomization occurs due to kinetic energy loss. The prediction of the combined model (MEF/EBM) is in good agreement with the available experimental data. The energy-based model can be used as a fast and sufficiently reliable model to obtain a good estimate of the mean droplet diameter of a spray, and the combined model (MEF/EBM) can well predict the droplet size distribution at the primary breakup.
Ke, Jau-Chuan; Lin, Chuen-Horng
2008-11-01
We consider the M[x]/G/1 queueing system, in which the server operates under an N policy with a single vacation. As soon as the system becomes empty, the server leaves for a vacation of random length V. When he returns from the vacation and the system size is greater than or equal to a threshold value N, he starts to serve the waiting customers. If he finds fewer customers than N, he waits in the system until the system size reaches or exceeds N. The server is subject to breakdowns according to a Poisson process, and his repair time obeys an arbitrary distribution. We use the maximum entropy principle to derive approximate formulas for the steady-state probability distributions of the queue length. We perform a comparative analysis between the approximate results and established exact results for various batch size, vacation time, service time and repair time distributions. We demonstrate that the maximum entropy approach is efficient enough for practical purposes and is a feasible method for approximating the solution of complex queueing systems.
Maćkowiak, Mariusz; Kątowski, Piotr
1996-06-01
Two-dimensional zero-field nutation NQR spectroscopy has been used to determine the full quadrupolar tensor of spin-3/2 nuclei in several molecular crystals containing the 35Cl and 75As nuclei. The problems of reconstructing 2D nutation NQR spectra using conventional methods and the advantages of an implementation of the maximum entropy method (MEM) are analyzed. It is shown that replacing the conventional Fourier transform with MEM data processing in 2D NQR spectroscopy leads to improved sensitivity, reduction of instrumental artefacts and truncation errors, shortened data acquisition times and suppression of noise, while at the same time increasing the resolution. The effects of off-resonance irradiation in nutation experiments are demonstrated both experimentally and theoretically. It is shown that off-resonance nutation spectroscopy is a useful extension of the conventional on-resonance experiments, facilitating the determination of asymmetry parameters in multiple spectra. A theoretical description of the off-resonance effects in 2D nutation NQR spectroscopy is given, and general exact formulas for the asymmetry parameter are obtained. Under off-resonance conditions, the resolution of the nutation NQR spectrum decreases with the spectrometer offset. However, enhanced resolution can be achieved by using the maximum entropy method in the 2D data reconstruction.
Maximum Joint Entropy and Information-Based Collaboration of Automated Learning Machines
Malakar, N K; Lary, D J
2011-01-01
We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention. To accomplish this, an intelligent agent is viewed as a question-asking machine, which is designed by coupling the processes of inference and inquiry to form a model-based learning unit. In order to select maximally-informative queries, the intelligent agent needs to be able to compute the relevance of a question. This is accomplished by employing the inquiry calculus, which is dual to the probability calculus, and extends information theory by explicitly requiring context. Here, we consider the interaction between two question-asking intelligent agents, and note that there is a potential information redundancy with respect to the two questions that the agents may choose to pose. We show that the information redundancy is minimized by maximizing the joint entropy of the questions, which simultaneously maximizes the relevance of each question while minimizing the mutual informat...
Power-law distribution functions derived from maximum entropy and a symmetry relationship
Peterson, G J
2011-01-01
Power-law distributions are common, particularly in social physics. Here, we explore whether power-laws might arise as a consequence of a general variational principle for stochastic processes. We describe communities of 'social particles', where the cost of adding a particle to the community is shared equally between the particle joining the cluster and the particles that are already members of the cluster. Power-law probability distributions of community sizes arise as a natural consequence of the maximization of entropy, subject to this 'equal cost sharing' rule. We also explore a generalization in which there is unequal sharing of the costs of joining a community. Distributions change smoothly from exponential to power-law as a function of a sharing-inequality quantity. This work gives an interpretation of power-law distributions in terms of shared costs.
Mind the edge! The role of adjacency matrix degeneration in maximum entropy weighted network models
Sagarra, Oleguer; Díaz-Guilera, Albert
2015-01-01
Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer valued adjacency matrices constructed from aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three datasets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring networ...
Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn
2016-06-01
Forest is a very important ecosystem and natural resource for living things. Based on forest inventories, government is able to make decisions to converse, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially surveying in remote mountainous regions. A reliable forest inventory can give us a more accurate and timely information to develop new and efficient approaches of forest management. The remote sensing technology has been recently used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species are unavoidably required to be considered. In this study the aim is to classify forest tree species in Erdenebulgan County, Huwsgul province in Mongolia, using Maximum Entropy method. The study area is covered by a dense forest which is almost 70% of total territorial extension of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth to perform the accuracy assessment of the tree species classification. Landsat images and DEM were processed for maximum entropy modeling, and this study applied the model with two experiments. The first one is to use Landsat surface reflectance for tree species classification; and the second experiment incorporates terrain variables in addition to the Landsat surface reflectance to perform the tree species classification. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second one which uses Landsat surface reflectance coupled
Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Matthews, Q [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Hilts, M [Medical Physics, BC Cancer Agency-Vancouver Island Centre, Victoria BC V8R 6V5 (Canada); Schulze, G [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Blades, M W [Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Turner, R F B [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC V6T 1Z4 (Canada)
2006-05-21
This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.
Hwang, J; Carbotte, J P
2014-04-23
We use maximum entropy techniques to extract an electron-phonon density from optical data for the normal state at T = 45 K of MgB2. Limiting the analysis to a range of phonon energies below 110 meV, which is sufficient for capturing all phonon structures, we find a spectral function that is in good agreement with that calculated for the quasi-two-dimensional σ-band. Extending the analysis to higher energies, up to 160 meV, we find no evidence for any additional contributions to the fluctuation spectrum, but find that the data can only be understood if the density of states is taken to decrease with increasing energy.
Uchiyama, Takanori; Minamitani, Haruyuki; Sakata, Makoto
1990-01-01
The complex maximum entropy method (MEM) and complex autoregressive model fitting with the singular value decomposition method (SVD) were applied to free induction decay signals obtained with a Fourier transform nuclear magnetic resonance spectrometer to estimate superresolved NMR spectra. Practical estimation of superresolved spectra is demonstrated on phosphorus-31 nuclear magnetic resonance data. These methods provide sharper peaks and a higher signal-to-noise ratio than the conventional fast Fourier transform. The SVD method was more suitable than the MEM for estimating superresolved NMR spectra because it allowed high-order estimation without spurious peaks, and the order and rank were easy to determine.
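Autoregressive superresolution of this kind can be sketched in a few lines: an AR model is fitted to the signal by SVD-based least squares (which is what `numpy.linalg.lstsq` does internally), and a sharp spectrum is read off the prediction-error filter. This is a hedged toy illustration, not the authors' full complex-MEM/SVD procedure; the signal, model order, and frequency grid are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 128, 4                     # signal length and AR model order (assumed)
n = np.arange(N)
x = np.cos(2 * np.pi * 0.25 * n) + 0.01 * rng.standard_normal(N)

# Fit AR coefficients a: x[t] ~ sum_k a[k] * x[t-1-k], solved via SVD (lstsq)
A = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
b = x[p:]
a, *_ = np.linalg.lstsq(A, b, rcond=None)

# AR power spectrum from the prediction-error filter 1 - sum_k a[k] z^{-(k+1)}
freqs = np.linspace(0.0, 0.5, 512)
z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, p + 1)))
P = 1.0 / np.abs(1.0 - z @ a) ** 2

peak = freqs[np.argmax(P)]        # sharp line near the true frequency 0.25
```

Unlike the FFT periodogram, the AR spectrum concentrates the line into a very narrow peak, which is the "superresolution" effect the abstract refers to.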
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however, even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between the two force fields. This spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm as open-source Python code.
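The core of such maximum-entropy re-weighting can be sketched independently of the RDC details: given an ensemble whose members each predict some observable, find the minimally perturbed weights whose average matches the experimental value. The exponential form of the solution and the one-dimensional root solve below are standard maxent results; the function name and the single-observable restriction are simplifications of the paper's method, not its actual code.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_reweight(obs, target, w0=None):
    """Weights w_i ∝ w0_i * exp(lam * obs_i) whose weighted mean of obs hits target."""
    obs = np.asarray(obs, float)
    if w0 is None:
        w0 = np.full(obs.size, 1.0 / obs.size)   # uniform prior weights

    def weights(lam):
        z = lam * obs
        w = w0 * np.exp(z - z.max())             # shift exponent for stability
        return w / w.sum()

    # The weighted mean is monotone in lam, so bracket and root-find
    lam = brentq(lambda l: weights(l) @ obs - target, -50.0, 50.0)
    return weights(lam)

# Toy ensemble: four conformations with predicted couplings 1..4 (units arbitrary)
w = maxent_reweight([1.0, 2.0, 3.0, 4.0], target=3.0)
```

Because the Kullback-Leibler divergence from the prior is minimized, conformations predicting values near the target are up-weighted as gently as the constraint allows.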
Jorge Pereira
2015-12-01
Full Text Available Biological invasion by exotic organisms has become a key issue because of the deep impacts such processes have on several domains. A better understanding of these processes, the identification of the most susceptible areas, and the definition of preventive or mitigation measures are critical to reducing the associated impacts. Species distribution modelling can help identify the areas most susceptible to invasion. This paper presents preliminary results on assessing the susceptibility of the Ceira river basin to invasion by the exotic species Acacia dealbata Mill. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with the best predictive performance. Models validated on independent data sets show better performance; here the evaluation is based on the AUC of the ROC accuracy measure.
A Maximum-Entropy Compound Distribution Model for Extreme Wave Heights of Typhoon-Affected Sea Areas
WANG Li-ping; SUN Xiao-guang; LU Ke-bo; XU De-lun
2012-01-01
A new compound distribution model for extreme wave heights of typhoon-affected sea areas is proposed on the basis of the maximum-entropy principle. The new model is formed by nesting a discrete distribution in a continuous one, having eight parameters which can be determined in terms of observed data of typhoon occurrence-frequency and extreme wave heights by numerically solving two sets of equations derived in this paper. The model is examined by using it to predict the N-year return-period wave height at two hydrology stations in the Yellow Sea, and the predicted results are compared with those predicted by some other compound distribution models. Examinations and comparisons show that the model has some advantages for predicting the N-year return-period wave height in typhoon-affected sea areas.
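The N-year return-period calculation such models feed can be sketched generically: with λ typhoon events per year, the N-year wave height is the quantile whose per-event exceedance probability is 1/(λN). Below, a plain two-parameter Weibull stands in for the paper's eight-parameter compound model, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import weibull_min

def return_level(N, lam, shape, scale):
    """Wave height exceeded on average once every N years, given lam
    typhoon events per year and a fitted Weibull height distribution."""
    p_exceed = 1.0 / (lam * N)                  # per-event exceedance probability
    return weibull_min.ppf(1.0 - p_exceed, shape, scale=scale)

h50 = return_level(50, lam=2.0, shape=1.8, scale=3.5)    # 50-year height (m, toy)
h100 = return_level(100, lam=2.0, shape=1.8, scale=3.5)  # 100-year height (m, toy)
```

For shape = 1 the Weibull reduces to an exponential, and the return level collapses to the familiar scale·ln(λN), a handy sanity check on the quantile arithmetic.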
Raghavan, Ram K; Goodin, Douglas G; Hanzlicek, Gregg A; Zolnerowich, Gregory; Dryden, Michael W; Anderson, Gary A; Ganta, Roman R
2016-03-01
The potential distribution of Amblyomma americanum ticks in Kansas was modeled using maximum entropy (MaxEnt) approaches based on museum and field-collected species occurrence data. Various bioclimatic variables were used in the model as potentially influential factors affecting the A. americanum niche. Following reduction of dimensionality among predictor variables using principal components analysis, which revealed that the first two principal axes explain over 87% of the variance, the model indicated that suitable conditions for this medically important tick species cover a larger area in Kansas than currently believed. Soil moisture, temperature, and precipitation were highly correlated with the first two principal components and were influential factors in the A. americanum ecological niche. Assuming that the niche estimated in this study covers the occupied distribution, which needs to be further confirmed by systematic surveys, human exposure to this known disease vector may be considerably under-appreciated in the state.
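The dimensionality-reduction step described here (PCA over correlated bioclimatic predictors before niche modeling) can be sketched with a plain SVD. The synthetic site-by-variable matrix below is invented, with a built-in two-dimensional structure mimicking the finding that two axes dominate the variance.

```python
import numpy as np

rng = np.random.default_rng(5)
# 200 sites x 6 correlated "bioclimatic" predictors driven by 2 latent factors
latent = rng.standard_normal((200, 2))
loadings = rng.standard_normal((2, 6))
X = latent @ loadings + 0.1 * rng.standard_normal((200, 6))

Xc = X - X.mean(axis=0)                   # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)           # variance fraction per principal axis
scores = Xc @ Vt[:2].T                    # site scores on the first two axes
```

The `scores` matrix is what would be passed on to the niche model in place of the six correlated raw predictors.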
On the 'fake' inferred entanglement associated with the maximum entropy inference of quantum states
Batle, J.; Casas, M. [Departament de Fisica, Universitat de les Illes Balears, Palma de Mallorca (Spain); Plastino, A.R. [Departament de Fisica, Universitat de les Illes Balears, Palma de Mallorca (Spain); Faculty of Astronomy and Geophysics, National University La Plata, La Plata (Argentina); National Research Council, CONICET (AR)); Plastino, A. [National Research Council (CONICET) (Argentina); Department of Physics, National University La Plata, La Plata (Argentina)
2001-08-24
The inference of entangled quantum states by recourse to the maximum entropy (MaxEnt) principle is considered in connection with the recently pointed out problem of fake inferred entanglement (Horodecki R et al 1999 Phys. Rev. A 59 1799). We show that there are operators Â, both diagonal and non-diagonal in the Bell basis, such that, when the expectation value
Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E
2016-07-15
The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability of T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas.
Lu Lin
2009-10-01
Full Text Available The Estimation of Distribution Algorithm (EDA) is a new kind of population-based evolutionary algorithm: by collecting statistical information from the best individuals of the current population, an EDA constructs a probability distribution model and then samples the model to produce the next generation. To solve the NP-hard problem of EDA searching for an optimum network structure, a new Maximum Entropy Estimation of Distribution Algorithm (MEEDA) is provided. The algorithm takes Jaynes' principle as its basis, making use of the maximum entropy of random variables to estimate their minimum-bias probability distribution, which then serves as the evolution model of the algorithm and produces optimal or near-optimal solutions. This paper then presents a rough programming model for job shop scheduling under uncertain information. The method overcomes the defects of traditional methods, which need pre-set authorized characteristics or amounts of described attributes, designs a multi-objective optimization mechanism, and expands the application space of rough sets in job shop scheduling under uncertain information environments. Due to the complexity of the proposed model, traditional algorithms have low capability in producing a feasible solution, so we use MEEDA to enable the definition of a solution within a reasonable amount of time. We assume machine flexibility in processing operations to decrease the complexity of the proposed model. Muth and Thompson's benchmark problems are used to verify and validate the proposed rough programming model and its algorithm. The computational results obtained by MEEDA are compared with those of a GA, and the comparison proves the effectiveness of MEEDA on the job shop scheduling problem under uncertain information environments.
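The EDA loop described above — estimate a distribution from the best individuals, then sample the next generation — can be sketched with the simplest maximum-entropy model consistent with per-bit marginals, namely independent Bernoulli factors. The onemax objective, learning rate, and probability clipping below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def eda_onemax(n_bits=20, pop=60, elite=15, gens=40, seed=1):
    """Univariate-marginal EDA maximizing the number of ones in a bit string."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                          # initial maxent model: uniform
    best_fit = 0
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int)   # sample population
        fit = X.sum(axis=1)                               # onemax fitness
        best_fit = max(best_fit, int(fit.max()))
        elite_X = X[np.argsort(fit)[-elite:]]             # select the best
        p = 0.7 * p + 0.3 * elite_X.mean(axis=0)          # re-estimate marginals
        p = p.clip(0.05, 0.95)                            # keep some exploration
    return best_fit

best = eda_onemax()
```

The smoothing and clipping steps are the usual guards against premature convergence of the probability vector; dependence structure between variables, which MEEDA's maxent model addresses, is deliberately ignored in this minimal sketch.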
Analysis of the Application of the Maximum Entropy Algorithm in Meteorological Rainfall Prediction
王海燕
2014-01-01
The calculation method based on the maximum entropy principle is applied to meteorological rainfall prediction, and effective simulation experiments demonstrate the feasibility of the maximum entropy method for rainfall prediction.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
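The marginalization step — collapsing the inferred joint distribution over total photons and apparent FRET efficiency into its two one-dimensional distributions — is just an axis sum weighted by bin widths. A toy numpy version with invented burst statistics (a histogram stands in for the Bayesian joint inference):

```python
import numpy as np

rng = np.random.default_rng(2)
photons = rng.poisson(30, size=5000)          # total photons per burst (toy)
eff = rng.beta(8, 4, size=5000)               # apparent FRET efficiency (toy)

# Joint histogram as a stand-in for the inferred joint probability density
H, n_edges, e_edges = np.histogram2d(photons, eff, bins=[20, 20], density=True)

# Marginalize: integrate out the other variable (sum over bins times bin width)
p_photons = (H * np.diff(e_edges)).sum(axis=1)            # photon-count marginal
p_eff = (H * np.diff(n_edges)[:, None]).sum(axis=0)       # FRET efficiency marginal
```

Each marginal integrates to one by construction, which is a useful check that the bin-width bookkeeping is right.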
Maximum entropy modeling risk of anthrax in the Republic of Kazakhstan.
Abdrakhmanov, S K; Mukhanbetkaliyev, Y Y; Korennoy, F I; Sultanov, A A; Kadyrov, A S; Kushubaev, D B; Bakishev, T G
2017-09-01
The objective of this study was to zone the territory of the Republic of Kazakhstan (RK) into risk categories according to the probability of anthrax emergence in farm animals as stipulated by the re-activation of preserved natural foci. We used historical data on anthrax morbidity in farm animals during the period 1933-2014, collected by the veterinary service of the RK. The database covers the entire territory of the RK and contains 4058 anthrax outbreaks tied to 1798 unique locations. Considering the strongly pronounced natural focality of anthrax, we employed environmental niche modeling (Maxent) to reveal patterns in the outbreaks' linkages to specific combinations of environmental factors. The set of bioclimatic factors BIOCLIM, derived from remote sensing data, the altitude above sea level, the land cover type, the maximum green vegetation fraction (MGVF) and the soil type were examined as explanatory variables. The model demonstrated good predictive ability, while the MGVF, the bioclimatic variables reflecting precipitation level and humidity, and the soil type were found to contribute most significantly to the model. A continuous probability surface was obtained that reflects the suitability of the study area for the emergence of anthrax outbreaks. The surface was turned into a categorical risk map by averaging the probabilities within the administrative divisions at the 2nd level and putting them into four categories of risk, namely: low, medium, high and very high risk zones, where very high risk refers to more than 50% suitability for disease re-emergence and low risk refers to less than 10% suitability. The map indicated increased risk of anthrax re-emergence in the districts along the northern, eastern and south-eastern borders of the country. It was recommended that the national veterinary service use the risk map for the development of contra-epizootic measures aimed at the prevention of anthrax re-emergence in historically affected regions of
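The final categorization step — turning district-averaged suitability into the four risk classes — is a simple binning operation. Only the >50% (very high) and <10% (low) cut-offs are stated in the abstract; the 30% boundary between medium and high below is an assumed value for illustration.

```python
import numpy as np

district_suitability = np.array([0.05, 0.12, 0.33, 0.61, 0.88])  # toy averages
labels = np.array(['low', 'medium', 'high', 'very high'])
cuts = [0.10, 0.30, 0.50]        # low <10%, very high >50%; 30% is an assumption
risk = labels[np.digitize(district_suitability, cuts)]
```

`np.digitize` returns the index of the interval each value falls into, so the label array must have exactly one more entry than the cut list.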
Song, F.; Monsen, A.; Li, Z. S.; Choi, E. -M.; MacManus-Driscoll, J. L.; Xiong, J.; Jia, Q. X.; Wahlstrom, E.; Wells, J. W.
2012-01-01
The surface and near-surface chemical composition of BiFe0.5Mn0.5O3 has been studied using a combination of low photon energy synchrotron photoemission spectroscopy and a newly developed maximum entropy finite element model from which it is possible to extract the depth-dependent chemical composition.
Ozawa, Hisashi; Shimokawa, Shinya; Sakuma, Hirofumi
Turbulence is ubiquitous in nature, yet remains an enigma in many respects. Here we investigate dissipative properties of turbulence so as to find out a statistical "law" of turbulence. Two general expressions are derived for a rate of entropy increase due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is found with these equations that phenomenological properties of turbulence such as Malkus's suggestion on maximum heat transport in thermal convection as well as Busse's suggestion on maximum momentum transport in shear turbulence can rigorously be explained by a unique state in which the rate of entropy increase due to the turbulent dissipation is at a maximum (dS/dt = Max.). It is also shown that the same state corresponds to the maximum entropy climate suggested by Paltridge. The tendency to increase the rate of entropy increase has also been confirmed by our recent GCM experiments. These results suggest the existence of a universal law that manifests itself in the long-term statistics of turbulent fluid systems from laboratory-scale turbulence to planetary-scale circulations. Ref.: Ozawa, H., Shimokawa, S., and Sakuma, H., Phys. Rev. E 64, 026303, 2001.
Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc
2015-09-01
This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
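The similarity-entropy family used here builds on sample entropy, whose pattern size m is exactly the parameter the authors propose to vary and optimize. A compact (if O(N²)-memory) sketch, with the common r = 0.2·SD tolerance as an assumed setting rather than a value from the paper:

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r): -log of the ratio of length-(m+1) to length-m template matches."""
    x = np.asarray(x, float)

    def matches(length):
        t = np.lib.stride_tricks.sliding_window_view(x, length)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=-1)  # Chebyshev distance
        off_diag = ~np.eye(t.shape[0], dtype=bool)              # exclude self-matches
        return np.count_nonzero(d[off_diag] < r)

    a, b = matches(m + 1), matches(m)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(7)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))   # highly regular signal
noisy = rng.standard_normal(300)                    # white noise
se_reg = sample_entropy(regular, m=2, r=0.2 * regular.std())
se_noise = sample_entropy(noisy, m=2, r=0.2 * noisy.std())
```

A regular signal yields far lower entropy than noise, which is the separation the fetal heart rate discrimination task relies on; scanning m (instead of r) for the value maximizing the entropy is the paper's first proposal.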
Jungemann, C.; Pham, A. T.; Meinerzhagen, B.; Ringhofer, C.; Bollhöfer, M.
2006-07-01
The Boltzmann equation for transport in semiconductors is projected onto spherical harmonics in such a way that the resultant balance equations for the coefficients of the distribution function times the generalized density of states can be discretized over energy and real spaces by box integration. This ensures exact current continuity for the discrete equations. Spurious oscillations of the distribution function are suppressed by stabilization based on a maximum entropy dissipation principle avoiding the H transformation. The derived formulation can be used on arbitrary grids as long as box integration is possible. The approach works not only with analytical bands but also with full band structures in the case of holes. Results are presented for holes in bulk silicon based on a full band structure and electrons in a Si NPN bipolar junction transistor. The convergence of the spherical harmonics expansion is shown for a device, and it is found that the quasiballistic transport in nanoscale devices requires an expansion of considerably higher order than the usual first one. The stability of the discretization is demonstrated for a range of grid spacings in the real space and bias points which produce huge gradients in the electron density and electric field. It is shown that the resultant large linear system of equations can be solved in a memory efficient way by the numerically robust package ILUPACK.
Zhao, Min; Chen, Yanming; Qu, Dacheng; Qu, Hong
2015-01-01
The substrates of a transporter are not only useful for inferring the function of the transporter, but also important for discovering compound-compound interactions and reconstructing metabolic pathways. Though plenty of data has been accumulated with the development of new technologies such as in vitro transporter assays, the search for substrates of transporters is far from complete. In this article, we introduce METSP, a maximum-entropy classifier devoted to retrieving transporter-substrate pairs (TSPs) from semistructured text. Based on the high-quality annotation from UniProt, METSP achieves high precision and recall in cross-validation experiments. When METSP is applied to 182,829 human transporter annotation sentences in UniProt, it identifies 3942 sentences with transporter and compound information. Finally, 1547 high-confidence human TSPs are identified for further manual curation, of which 58.37% are pairs with novel substrates not annotated in public transporter databases. METSP is the first efficient tool to extract TSPs from semistructured annotation text in UniProt. This tool can help to determine the precise substrates and drugs of transporters, thus facilitating drug-target prediction, metabolic network reconstruction, and literature classification.
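A maximum-entropy classifier of the kind METSP uses is, in its conditional form, multinomial logistic regression. A minimal from-scratch version on an invented toy bag-of-words problem (the features and labels below are made up for illustration, not UniProt data):

```python
import numpy as np

def train_maxent(X, y, n_classes=2, lr=0.5, epochs=300):
    """Multinomial logistic regression (conditional maxent) via gradient ascent."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        Z = X @ W
        Z -= Z.max(axis=1, keepdims=True)         # stabilize softmax
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)
        W += lr * X.T @ (Y - P) / X.shape[0]      # log-likelihood gradient step
    return W

# Toy "sentence" features: [mentions_transporter, mentions_substrate, other_term]
X = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1],
              [1, 1, 1], [0, 0, 1], [1, 0, 0]], float)
y = np.array([1, 0, 1, 1, 0, 0])                  # 1 = transporter-substrate pair
W = train_maxent(X, y)
pred = (X @ W).argmax(axis=1)
```

The maxent view and the logistic-regression view coincide: the learned conditional distribution is the maximum-entropy one whose feature expectations match the training data.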
Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te
2015-01-01
Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. The results showed that the maxent model can be highly robust (99.8%) in predicting the relative abundance of macrophyte species when observed community-weighted mean (CWM) traits are used as the constraints, but relatively weak (about 30%) when CWM traits fitted from the water depth gradient are used as the constraints. The measured traits differed notably in their importance for predicting species abundances, lowest for the perennial growth form and highest for leaf dry mass content. For tuber and leaf nitrogen content, there were significant shifts in their effects on species relative abundance from positive in shallow water to negative in deep water. This result suggests that macrophyte species with a tuber organ and greater leaf nitrogen content become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology.
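The species-abundance prediction can be sketched as the standard maxent problem: maximize the entropy of relative abundances subject to reproducing the community-weighted mean traits. Below it is solved with a generic SLSQP constrained optimizer on invented trait values; real trait-based maxent models use dedicated solvers and many traits.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_abundances(traits, cwm):
    """Relative abundances p maximizing entropy s.t. p @ traits == cwm, sum(p) == 1."""
    S = traits.shape[0]

    def neg_entropy(p):
        q = np.clip(p, 1e-12, 1.0)       # keep log finite at the boundary
        return np.sum(q * np.log(q))

    cons = [{'type': 'eq', 'fun': lambda p: p.sum() - 1.0}]
    for j in range(traits.shape[1]):     # one CWM constraint per trait
        cons.append({'type': 'eq',
                     'fun': (lambda col, m: lambda p: p @ col - m)(traits[:, j], cwm[j])})

    res = minimize(neg_entropy, np.full(S, 1.0 / S), method='SLSQP',
                   bounds=[(0.0, 1.0)] * S, constraints=cons)
    return res.x

# Three hypothetical species with one trait (e.g. leaf dry mass content)
traits = np.array([[1.0], [2.0], [3.0]])
p = maxent_abundances(traits, cwm=np.array([2.5]))
```

Because the constrained CWM (2.5) lies above the trait midpoint, the maxent solution tilts abundance toward the high-trait species, the same qualitative behavior the abstract reports for trait-abundance relationships.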
Silver, R.N.; Gubernatis, J.E.; Sivia, D.S. (Los Alamos National Lab., NM (USA)); Jarrell, M. (Ohio State Univ., Columbus, OH (USA). Dept. of Physics)
1990-01-01
In this article we describe the results of a new method for calculating the dynamical properties of the Anderson model. Quantum Monte Carlo (QMC) simulation generates data about the Matsubara Green's functions in imaginary time. To obtain dynamical properties, one must analytically continue these data to real time. This is an extremely ill-posed inverse problem similar to the inversion of a Laplace transform from incomplete and noisy data. Our method is a general one, applicable to the calculation of dynamical properties from a wide variety of quantum simulations. We use Bayesian methods of statistical inference to determine the dynamical properties based on both the QMC data and any prior information we may have, such as sum rules, symmetry, high frequency limits, etc. This provides a natural means of combining perturbation theory and numerical simulations in order to understand dynamical many-body problems. Specifically, we use the well-established maximum entropy (ME) method for image reconstruction. We obtain the spectral density and transport coefficients over the entire range of model parameters accessible by QMC, with data having much larger statistical error than required by other proposed analytic continuation methods.
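The numerical heart of the ME method is the regularized inversion: minimize χ²/2 − αS, where S is the Shannon-Jaynes entropy of the spectrum relative to a default model m. A toy version on a Laplace-like kernel (the grids, default model, and fixed α are all invented; the full method also infers α rather than fixing it):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
w = np.linspace(0.1, 5.0, 40)                     # real-frequency grid
t = np.linspace(0.1, 3.0, 25)                     # imaginary-time grid
K = np.exp(-np.outer(t, w)) * (w[1] - w[0])       # ill-posed Laplace-like kernel
A_true = np.exp(-0.5 * ((w - 2.0) / 0.3) ** 2)    # "spectral density" to recover
sigma = 1e-3
d = K @ A_true + sigma * rng.standard_normal(t.size)   # noisy "QMC" data

m = np.full_like(w, A_true.mean())                # flat default model
alpha = 1e-3                                      # fixed regularization weight

def chi2(A):
    return 0.5 * np.sum(((K @ A - d) / sigma) ** 2)

def objective(A):
    S = np.sum(A - m - A * np.log(A / m))         # Shannon-Jaynes entropy (<= 0)
    return chi2(A) - alpha * S

def gradient(A):
    return K.T @ (K @ A - d) / sigma**2 + alpha * np.log(A / m)

res = minimize(objective, m, jac=gradient, method='L-BFGS-B',
               bounds=[(1e-10, None)] * w.size)
A_mem = res.x
```

The entropy term is what keeps the solution positive and pinned to the default model wherever the data do not constrain it, which is why MEM tolerates far noisier data than unregularized inversion.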
de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L
2010-08-01
States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
R Saravanan; K S Syed Ali; S Israel
2008-04-01
The local, average and electronic structure of the semiconducting materials Si and Ge has been studied using multipole, maximum entropy method (MEM) and pair distribution function (PDF) analyses, using X-ray powder data. The covalent nature of bonding and the interaction between the atoms are clearly revealed by the two-dimensional MEM maps plotted on (1 0 0) and (1 1 0) planes and one-dimensional density along [1 0 0], [1 1 0] and [1 1 1] directions. The mid-bond electron densities between the atoms are 0.554 e/Å3 and 0.187 e/Å3 for Si and Ge respectively. In this work, the local structural information has also been obtained by analyzing the atomic pair distribution function. An attempt has been made in the present work to utilize the X-ray powder data sets to refine the structure and electron density distribution using the currently available versatile methods, MEM, multipole analysis and determination of pair distribution function for these two systems.
Mesfin Dema
2014-05-01
Full Text Available We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations) and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.
周良明; 郭佩芳; 王强; 杜伊
2004-01-01
Based on the maximum entropy principle, a probability density function (PDF) is derived for the distribution of wave heights in a random wave field, without any further hypothesis. The present PDF, being a non-Rayleigh form, involves two parameters: the average wave height H and the state parameter γ. The role of γ in the distribution of wave heights is examined. It is found that γ may be a certain measure of sea state. A least squares method for determining γ from measured data is proposed. By means of this method, the values of γ are determined for three sea states from data measured in the East China Sea. The present PDF is compared with the well-known Rayleigh PDF of wave height and is shown to fit the data much better than the Rayleigh PDF. It is expected that the present PDF would also fit some other wave variables, since its derivation is not restricted to the wave height.
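A least-squares determination of a distribution parameter from measured wave heights, of the kind proposed for γ, can be sketched on a Weibull plot, where empirical exceedance probabilities become linear in the parameters. The Rayleigh distribution is the shape = 2 special case; the synthetic "measurements" below are generated, not East China Sea data, and the Weibull form stands in for the paper's maxent PDF.

```python
import numpy as np

rng = np.random.default_rng(4)
h = 1.5 * rng.weibull(2.0, 4000)         # synthetic wave heights (Rayleigh-like)

hs = np.sort(h)
F = (np.arange(1, hs.size + 1) - 0.5) / hs.size     # Hazen plotting positions
x = np.log(hs)
y = np.log(-np.log(1.0 - F))             # Weibull plot ordinate: y = k*ln(h) - k*ln(c)

shape, intercept = np.polyfit(x, y, 1)   # slope = shape parameter (2 for Rayleigh)
scale = np.exp(-intercept / shape)
```

A fitted shape clearly different from 2 would signal a non-Rayleigh sea state, which is the kind of deviation the state parameter γ is meant to capture.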
Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi
2016-11-01
Precipitation interpolation has been a hot area of research for many years and is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous region), Mainland China was taken into consideration for spatial interpolation. The multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to get a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method, incorporating hard and soft data and probability density functions, was precise. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.
Wang, J.; Parolari, A.; Huang, S. Y.
2014-12-01
The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations of surface energy fluxes from net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve MEP model predictions of ET under water stress conditions, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress functions also provide a means to infer ecosystem influence on the land surface state. Challenges associated with sampling model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.
Electron density distribution and bonding in ZnSe and PbSe using maximum entropy method (MEM)
K S Syed Ali; R Saravanan; S Israel; R K Rajaram
2006-04-01
The study of electronic structure of materials and bonding is an important part of material characterization. The maximum entropy method (MEM) is a powerful tool for deriving accurate electron density distribution in crystalline materials using experimental data. In this paper, the attention is focused on producing electron density distribution of ZnSe and PbSe using JCPDS X-ray powder diffraction data. The covalent/ionic nature of the bonding and the interaction between the atoms are clearly revealed by the MEM maps. The mid bond electron densities between atoms in these systems are found to be 0.544 e/Å3 and 0.261 e/Å3, respectively for ZnSe and PbSe. The bonding in these two systems has been studied using two-dimensional MEM electron density maps on the (100) and (110) planes, and the one-dimensional electron density profiles along [100], [110] and [111] directions. The thermal parameters of the individual atoms have also been reported in this work. The algorithm of the MEM procedure has been presented.
Almog, Assaf
2014-01-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of time series of activity of their fundamental elements (such as stocks or neurons respectively). While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relationships between binary and non-binary properties of financial time series. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to replicate the observed binary/non-binary relations very well, and to mathematically...
Andersen, Casper Welzel; Bremholm, Martin; Vennestrøm, Peter Nicolai Ravnborg; Blichfeld, Anders Bank; Lundegaard, Lars Fahl; Iversen, Bo Brummerstedt
2014-11-01
Accurate structural models of reaction centres in zeolite catalysts are a prerequisite for mechanistic studies and further improvements to the catalytic performance. The Rietveld/maximum entropy method is applied to synchrotron powder X-ray diffraction data on fully dehydrated CHA-type zeolites with and without loading of catalytically active Cu(2+) for the selective catalytic reduction of NOx with NH3. The method identifies the known Cu(2+) sites in the six-membered ring and a not previously observed site in the eight-membered ring. The sum of the refined Cu occupancies for these two sites matches the chemical analysis and thus all the Cu is accounted for. It is furthermore shown that approximately 80% of the Cu(2+) is located in the new 8-ring site for an industrially relevant CHA zeolite with Si/Al = 15.5 and Cu/Al = 0.45. Density functional theory calculations are used to corroborate the positions and identity of the two Cu sites, leading to the most complete structural description of dehydrated silicoaluminate CHA loaded with catalytically active Cu(2+) cations.
Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang
2014-05-01
Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and its composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, namely weekly minimum temperature and maximum 24-hour rainfall with a continuous 15-week lag, associated with dengue case variation under conditions of uncertainty. Subsequently, the combination of the nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing potential outbreak spatio-temporal predictions of the dengue fever distribution. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography
Gürsoy, Doǧa; Bicer, Tekin; Almer, Jonathan D.; Kettimuthu, Rajkumar; Stock, Stuart; De Carlo, Francesco
2015-06-13
A maximum a posteriori approach is proposed for X-ray diffraction tomography for reconstructing the three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density, which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source, at Argonne National Laboratory. The reconstruction results show significant improvement in the reduction of aliasing and streaking artefacts, and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times, and significantly improve beamtime efficiency.
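The objective minimized in such a MAP reconstruction (the negative log posterior) can be sketched as follows, here with a 1D pixel ordering for brevity; the quadratic neighbour penalty stands in for whichever smoothness prior the actual reconstruction uses:

```python
import numpy as np

def neg_log_posterior(x, A, y, beta):
    """Negative log posterior for a MAP reconstruction with a Poisson
    log-likelihood and a quadratic smoothness prior on neighbouring
    pixels (1D neighbours here for brevity). A is the projection
    matrix, y the measured counts, beta the regularization weight;
    the exact prior used in the paper may differ."""
    ax = A @ x
    # Poisson negative log-likelihood (up to the constant log(y!) term)
    nll = np.sum(ax - y * np.log(ax + 1e-12))
    # Smoothness prior penalizing differences between neighbours
    prior = beta * np.sum(np.diff(x) ** 2)
    return nll + prior
```

Minimizing this objective (e.g. by a gradient-based or expectation-maximization scheme) trades data fidelity against smoothness via beta, which is what suppresses the streaking artefacts of purely analytical inversion.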
Westhoff, Martijn; Zehe, Erwin; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Dewals, Benjamin
2015-04-01
The Maximum Entropy Production (MEP) principle is a conjecture assuming that a medium is organized in such a way that maximum power is extracted from a gradient driving a flux (with power being a flux times its driving gradient). This maximum power is also known as the Carnot limit. It has already been shown that the atmosphere operates close to this Carnot limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state close to the Carnot limit, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells (e.g. wind). The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts itself in such a way that it operates close to the Carnot limit. The key difference between the atmosphere and the soil is the way each adapts its resistance. The soil's hydraulic conductivity is either changed by weathering processes, which is a very slow process, or by creation of preferential flow paths. In this study the latter process is simulated in a lab experiment, where we focus on the preferential flow paths created by piping. Piping is the process of backwards erosion of sand particles subject to a large pressure gradient. Since this is a relatively fast process, it is suitable for being tested in the lab. In the lab setup a horizontal sand bed connects two reservoirs that both drain freely at a level high enough to keep the sand bed always saturated. By adding water to only one reservoir, a horizontal pressure gradient is maintained. If the flow resistance is large, a large gradient develops, leading to the effect of piping. When pipes are being formed, the effective flow resistance decreases; the flow through the sand bed increases and the pressure gradient decreases. At a certain point, the flow velocity is small enough to stop the pipes from growing any further. In this steady state, the effective flow resistance of
Slater, Hannah; Michael, Edwin
2012-01-01
Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk of LF may range between 543 and 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future depending on the climate scenario used and the thresholds applied to signify infection presence.
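Stripped of the regularization, feature classes, and background sampling of the Maxent niche-modelling software, the core of a maximum entropy niche model is a Gibbs distribution over grid cells fitted so that the model's expected environmental features match their mean values at the presence records. The following toy fit is a sketch of that core idea only, not the pipeline used in the study:

```python
import numpy as np

def fit_maxent(features, presence_idx, n_iter=2000, lr=0.1):
    """Minimal sketch of the Gibbs form behind MaxEnt niche models:
    find weights w so that p(cell) proportional to exp(w . f(cell))
    matches the mean feature values at the presence cells.
    features: (n_cells, n_features) array; presence_idx: indices of
    cells with occurrence records. Regularization and background
    sampling, essential in real use, are omitted here."""
    target = features[presence_idx].mean(axis=0)   # empirical feature means
    w = np.zeros(features.shape[1])
    for _ in range(n_iter):
        z = features @ w
        p = np.exp(z - z.max())
        p /= p.sum()                               # Gibbs distribution over cells
        w += lr * (target - p @ features)          # move model means toward target
    return w, p
```

The fitted p can then be read as relative habitat suitability per cell, which is how the niche maps in studies like this one are produced.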
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points, yet it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into the spatial estimation of soil respiration. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method, using auxiliary information, could reduce the required number of sampling points for studying the spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
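BME operates on full probability densities, but its central idea, letting soft auxiliary data (here, a soil-temperature-based prediction of respiration) sharpen an estimate from sparse hard measurements, can be caricatured with simple precision weighting. This toy function illustrates the principle only and is not the BME formalism itself:

```python
def fuse_estimates(hard_mean, hard_var, soft_mean, soft_var):
    """Toy illustration of combining a hard-data interpolation with a
    soft auxiliary-information estimate by precision weighting. Full
    BME operates on whole pdfs, not just means and variances."""
    w_hard = 1.0 / hard_var
    w_soft = 1.0 / soft_var
    mean = (w_hard * hard_mean + w_soft * soft_mean) / (w_hard + w_soft)
    var = 1.0 / (w_hard + w_soft)
    return mean, var
```

The fused variance is always smaller than either input variance, which is the sense in which auxiliary soil-temperature data can substitute for additional respiration sampling points.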
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
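When the only constraints are the mean values of the individual binary entries, the maximum-entropy ensemble factorizes into independent Bernoulli variables, the non-interacting limit of the spin models discussed above; richer constraints (e.g. on aggregate returns) introduce the couplings that produce the paper's phase diagram. A minimal sketch of the non-interacting case (function names are ours):

```python
import numpy as np

def maxent_binary_entropy(p):
    """Given the empirical mean of each binary entry (e.g. the fraction
    of time steps on which each stock moved up), the maximum-entropy
    ensemble reproducing those means treats entries as independent
    Bernoulli variables; its Shannon entropy is the sum of the
    per-entry entropies."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return -np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))

def sample(p, t, rng=None):
    """Draw t independent binary configurations from the ensemble."""
    rng = rng or np.random.default_rng(0)
    return (rng.random((t, len(p))) < p).astype(int)
```

Comparing the entropy of this factorized ensemble with that of constrained ensembles quantifies how much information a given binary property carries, which is the program the paper carries out.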
Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.
2014-12-01
Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method. 2013 John Wiley & Sons, Ltd.
Xiaokang Kou
2016-01-01
Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs retrieved from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further study.
Guoqing Li
2014-11-01
Black locust (Robinia pseudoacacia L.) is a tree species of high economic and ecological value, but is also considered to be highly invasive. Understanding the global potential distribution and ecological characteristics of this species is a prerequisite for its practical exploitation as a resource. Here, a maximum entropy model (MaxEnt) was used to simulate the potential distribution of this species around the world, and the dominant climatic factors affecting its distribution were selected by using a jackknife test and the regularized gain change during each iteration of the training algorithm. The results show that the MaxEnt model performs better than random, with an average test AUC value of 0.9165 (±0.0088). The coldness index, annual mean temperature and warmth index were the most important climatic factors affecting the species distribution, explaining 65.79% of the variability in the geographical distribution. Species response curves showed unimodal relationships with the annual mean temperature and warmth index, whereas there was a linear relationship with the coldness index. The dominant climatic conditions in the core of the black locust distribution are a coldness index of −9.8 °C–0 °C, an annual mean temperature of 5.8 °C–14.5 °C, a warmth index of 66 °C–168 °C and an annual precipitation of 508–1867 mm. The potential distribution of black locust is located mainly in the United States, the United Kingdom, Germany, France, the Netherlands, Belgium, Italy, Switzerland, Australia, New Zealand, China, Japan, South Korea, South Africa, Chile and Argentina. The predictive map of black locust, the climatic thresholds and the species response curves can provide globally applicable guidelines and valuable information for policymakers and planners involved in the introduction, planting and invasion control of this species around the world.
Larecki, Wieslaw; Banach, Zbigniew
2014-01-01
This paper analyzes the propagation of the waves of weak discontinuity in a phonon gas described by the four-moment maximum entropy phonon hydrodynamics involving a nonlinear isotropic phonon dispersion relation. For the considered hyperbolic equations of phonon gas hydrodynamics, the eigenvalue problem is analyzed and the condition of genuine nonlinearity is discussed. The speed of the wave front propagating into the region in thermal equilibrium is first determined in terms of the integral formula dependent on the phonon dispersion relation and subsequently explicitly calculated for the Dubey dispersion-relation model: |k| = ωc⁻¹(1 + bω²). The specification of the parameters c and b for sodium fluoride (NaF) and semimetallic bismuth (Bi) then makes it possible to compare the calculated dependence of the wave-front speed on the sample's temperature with the empirical relations of Coleman and Newman (1988) describing for NaF and Bi the variation of the second-sound speed with temperature. It is demonstrated that the calculated temperature dependence of the wave-front speed resembles the empirical relation and that the parameters c and b obtained from fitting respectively the empirical relation and the original material parameters of Dubey (1973) are of the same order of magnitude, the difference being in the values of the numerical factors. It is also shown that the calculated temperature dependence is in good agreement with the predictions of Hardy and Jaswal's theory (Hardy and Jaswal, 1971) on second-sound propagation. This suggests that the nonlinearity of a phonon dispersion relation should be taken into account in the theories aiming at the description of the wave-type phonon heat transport and that the Dubey nonlinear isotropic dispersion-relation model can be very useful for this purpose.
Campbell, Cara; Hilderbrand, Robert H.
2017-01-01
Species distribution modelling can be useful for the conservation of rare and endangered species. Freshwater mussel declines have thinned species ranges, producing spatially fragmented distributions across large areas. Spatial fragmentation in combination with a complex life history and heterogeneous environment makes predictive modelling difficult. A machine learning approach (maximum entropy) was used to model occurrences and suitable habitat for the federally endangered dwarf wedgemussel, Alasmidonta heterodon, in Maryland's Coastal Plain catchments. Landscape-scale predictors (e.g. land cover, land use, soil characteristics, geology, flow characteristics, and climate) were used to predict the suitability of individual stream segments for A. heterodon. The best model contained variables at three scales: minimum elevation (segment scale), percentage Tertiary deposits, low intensity development, and woody wetlands (sub-catchment), and percentage low intensity development, pasture/hay agriculture, and average depth to the water table (catchment). Despite a very small sample size owing to the rarity of A. heterodon, cross-validated prediction accuracy was 91%. Most predicted suitable segments occur in catchments not known to contain A. heterodon, which provides opportunities for new discoveries or population restoration. These model predictions can guide surveys toward the streams with the best chance of containing the species or, alternatively, away from those streams with little chance of containing A. heterodon. Developed reaches had low predicted suitability for A. heterodon in the Coastal Plain. Urban and exurban sprawl continues to modify stream ecosystems in the region, underscoring the need to preserve existing populations and to discover and protect new populations.
Ngo, Chuong; Leonhardt, Steffen; Zhang, Tony; Lüken, Markus; Misgeld, Berno; Vollmer, Thomas; Tenbrock, Klaus; Lehmann, Sylvia
2017-01-01
Electrical impedance tomography (EIT) provides global and regional information about ventilation by means of relative changes in electrical impedance measured with electrodes placed around the thorax. In combination with lung function tests, e.g. spirometry and body plethysmography, regional information about lung ventilation can be achieved. Impedance changes strictly correlate with lung volume during tidal breathing and mechanical ventilation. Initial studies presumed a correlation also during forced expiration maneuvers. To quantify the validity of this correlation in extreme lung volume changes during forced breathing, a measurement system was set up and applied on seven lung-healthy volunteers. Simultaneous measurements of changes in lung volume using EIT imaging and pneumotachography were obtained with different breathing patterns. Data was divided into a synchronizing phase (spontaneous breathing) and a test phase (maximum effort breathing and forced maneuvers). The EIT impedance changes correlate strictly with spirometric data during slow breathing with increasing and maximum effort ([Formula: see text]) and during forced expiration maneuvers ([Formula: see text]). Strong correlations in spirometric volume parameters [Formula: see text] ([Formula: see text]), [Formula: see text]/FVC ([Formula: see text]), and flow parameters PEF, [Formula: see text], [Formula: see text], [Formula: see text] ([Formula: see text]) were observed. According to the linearity during forced expiration maneuvers, EIT can be used during pulmonary function testing in combination with spirometry for visualisation of regional lung ventilation.
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.
Livingston, Richard A.; Jin, Shuang
2005-05-01
Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
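The basic maximum entropy step, before any fractal-interpolation or probabilistic-scaling enhancement, can be sketched for a single mean constraint on a discretized support: the entropy-maximizing pmf is an exponential family whose Lagrange multiplier is found iteratively. The step size, grid, and iteration count below are arbitrary choices, and the paper's tail refinement is not reproduced:

```python
import numpy as np

def maxent_pmf(x, mean_target, n_iter=200):
    """Discrete sketch of maximum entropy density estimation: over
    support points x, the entropy-maximizing pmf with a fixed mean is
    p_i proportional to exp(lam * x_i); the multiplier lam is found by
    a simple gradient iteration on the mean-matching condition."""
    lam = 0.0
    for _ in range(n_iter):
        w = np.exp(lam * (x - x.mean()))   # centering for numerical stability
        p = w / w.sum()
        lam += 0.5 * (mean_target - p @ x)  # move model mean toward target
    return p
```

With more moment constraints (variance, higher moments) the same scheme yields richer exponential-family shapes; the paper's contribution is the extra inference about fine-scale fluctuations that sharpens the tail of such an estimate.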
Tsai Richard
2007-09-01
Abstract Background Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE) systems that incorporate natural language processing (NLP) techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL) is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME) machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. Results To evaluate the performance of BIOSMILE, we conducted two experiments to (1) compare the performance of SRL systems trained on newswire and biomedical corpora; and (2) examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. Conclusion We demonstrate the necessity of using a biomedical proposition bank for training
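The "maximum entropy machine-learning model" here is, in its simplest form, multinomial logistic regression. The sketch below uses dense numeric features for brevity, whereas an SRL system like BIOSMILE would use large sparse sets of indicator features over predicate-argument contexts; it is an illustration of the model class, not of the BIOSMILE implementation:

```python
import numpy as np

def train_maxent_classifier(X, y, n_classes, n_iter=500, lr=0.5):
    """Minimal multinomial logistic regression, i.e. a 'maximum
    entropy' classifier: p(c|x) proportional to exp(w_c . x), trained
    by batch gradient ascent on the log-likelihood."""
    w = np.zeros((n_classes, X.shape[1]))
    for _ in range(n_iter):
        z = X @ w.T
        z -= z.max(axis=1, keepdims=True)     # stabilize the softmax
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        g = np.zeros_like(p)
        g[np.arange(len(y)), y] = 1.0         # one-hot targets
        w += lr * (g - p).T @ X / len(y)      # gradient of log-likelihood
    return w

def predict(w, X):
    return np.argmax(X @ w.T, axis=1)
```

In an SRL setting each candidate constituent would be featurized (headword, path to predicate, voice, etc.) and the classes would be the semantic roles such as Arg0-Arg2 and the ArgM modifiers.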
Wang, L.; Kerr, L. A.; Bridger, E.
2016-12-01
Changes in species distributions have been widely associated with climate change. Understanding how ocean conditions influence marine fish distributions is critical for elucidating the role of climate in ecosystem change and for forecasting how fish may be distributed in the future. Species distribution models (SDMs) enable estimation of the likelihood of encountering species in space or time as a function of environmental conditions. Traditional SDMs are applied to scientific-survey data that include both presences and absences. Maximum entropy (MaxEnt) models are promising tools as they can be applied to presence-only data, such as those collected from fisheries or citizen science programs. We used MaxEnt to relate occurrence records of marine fish species (e.g. Atlantic herring, Atlantic mackerel, and butterfish) from the NOAA Northeast Fisheries Observer Program to environmental conditions. Environmental variables from earth system data, such as sea surface temperature (SST), sea bottom temperature (SBT), chlorophyll-a, bathymetry, the North Atlantic Oscillation (NAO), and the Atlantic Multidecadal Oscillation (AMO), were matched with species occurrences for MaxEnt modeling of the fish distributions in the Northeast Shelf area. We developed habitat suitability maps for these species and assessed the relative influence of environmental factors on their distributions. Overall, SST and chlorophyll-a had the greatest influence on monthly distributions, with bathymetry and SBT having moderate influence and the climate indices (NAO and AMO) having little influence. Across months, Atlantic herring distribution was most related to the SST 10th percentile, and Atlantic mackerel and butterfish distributions were most related to the previous month's SST. The fish distributions were most affected by the previous month's chlorophyll-a in summer months, which may indirectly reflect the cumulative impact of primary productivity. Results highlighted the importance of spatial and temporal scales when using
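At its core, MaxEnt fits a Gibbs distribution over grid cells whose feature expectations match those observed at the presence sites. A minimal numpy sketch on synthetic data (the SST values, the species' preference curve, and the quadratic feature set are invented for illustration and are not the NOAA data or the MaxEnt software):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "environment": SST values for 200 grid cells (hypothetical).
sst = rng.uniform(5, 25, size=200)

# Presence cells: an imagined species preferring SST near 12 degrees C.
pref = np.exp(-0.5 * ((sst - 12) / 2.0) ** 2)
presence = rng.random(200) < pref / pref.max() * 0.8

# Features per cell: SST and SST^2 (gives a quadratic response curve).
F = np.column_stack([sst, sst ** 2])
F = (F - F.mean(0)) / F.std(0)          # standardize for stable fitting

target = F[presence].mean(axis=0)        # empirical feature means at presences

# MaxEnt core: find lambda so that the Gibbs distribution over cells,
# p(x) proportional to exp(lambda . f(x)), matches the presence feature means.
lam = np.zeros(2)
for _ in range(2000):
    w = np.exp(F @ lam)
    p = w / w.sum()
    grad = target - p @ F                # feature-expectation gap
    lam += 0.1 * grad                    # simple gradient ascent

p = np.exp(F @ lam)
p /= p.sum()
# The fitted suitability should peak near the assumed preferred SST.
best = sst[np.argmax(p)]
```

The gradient ascent converges because the MaxEnt log-likelihood is concave; production tools add regularization and many more feature classes.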
Tanabe, Yuki; Kido, Teruhito; Kurata, Akira; Sawada, Shun; Suekuni, Hiroshi; Kido, Tomoyuki; Yokoi, Takahiro; Miyagawa, Masao; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon City, Ehime (Japan); Uetani, Teruyoshi; Inoue, Katsuji [Ehime University Graduate School of Medicine, Department of Cardiology, Pulmonology, Hypertension and Nephrology, Toon City, Ehime (Japan)
2017-04-15
To evaluate the feasibility of three-dimensional (3D) maximum principal strain (MP-strain) derived from cardiac computed tomography (CT) for detecting myocardial infarction (MI). Forty-three patients who underwent cardiac CT and magnetic resonance imaging (MRI) were retrospectively selected. Using the voxel tracking of motion coherence algorithm, the peak CT MP-strain was measured using the 16-segment model. Based on the trans-mural extent of late gadolinium enhancement (LGE) and the distance from the MI, all segments were classified into four groups (infarcted, border, adjacent, and remote segments); infarcted and border segments were defined as MI with LGE positive. Diagnostic performance of MP-strain for detecting MI was compared with per cent systolic wall thickening (%SWT) assessed by MRI using receiver-operating characteristic curve analysis at a segment level. Of the 672 segments remaining after excluding 16 segments influenced by artefacts, 193 were diagnosed as MI. Sensitivity and specificity of peak MP-strain to identify MI were 81 % [95 % confidence interval (95 % CI): 74-88 %] and 86 % (81-92 %), compared with %SWT: 76 % (60-95 %) and 68 % (48-84 %), respectively. The area under the curve of peak MP-strain was superior to that of %SWT [0.90 (0.87-0.93) vs. 0.80 (0.76-0.83), p < 0.05]. CT MP-strain has the potential to provide incremental value to coronary CT angiography for detecting MI. (orig.)
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Application of a maximum likelihood algorithm to ultrasound modulated optical tomography.
Huynh, Nam T; He, Diwei; Hayes-Gill, Barrie R; Crowe, John A; Walker, John G; Mather, Melissa L; Rose, Felicity R A J; Parker, Nicholas G; Povey, Malcolm J W; Morgan, Stephen P
2012-02-01
In pulsed ultrasound modulated optical tomography (USMOT), an ultrasound (US) pulse performs as a scanning probe within the sample as it propagates, modulating the scattered light spatially distributed along its propagation axis. Detecting and processing the modulated signal can provide a 1-dimensional image along the US axis. A simple model is developed wherein the detected signal is modelled as a convolution of the US pulse and the properties (ultrasonic/optical) of the medium along the US axis. Based upon this model, a maximum likelihood (ML) method for image reconstruction is established. For the first time to our knowledge, the ML technique for an USMOT signal is investigated both theoretically and experimentally. The ML method inverts the data to retrieve the spatially varying properties of the sample along the US axis, and a signal proportional to the optical properties can be acquired. Simulated results show that the ML method can serve as a useful reconstruction tool for a pulsed USMOT signal even when the signal-to-noise ratio (SNR) is close to unity. Experimental data using 5 cm thick tissue phantoms (scattering coefficient μ(s) = 6.5 cm(-1), anisotropy factor g=0.93) demonstrate that the axial resolution is 160 μm and the lateral resolution is 600 μm using a 10 MHz transducer.
C. Gray
2014-01-01
Full Text Available Introduction. Maximum diameter of an abdominal aortic aneurysm (AAA) is the main indication for surgery. This study compared colour duplex ultrasound (CDU) and computed tomography (CT) in assessing AAA diameter. Patients and Methods. Patients were included if they had both scans performed within 90 days. Pearson's correlation coefficient, paired t-test, and limits of agreement (LOA) were calculated for the whole group. Subgroup analysis of small, medium, and large aneurysms was performed. A P value of <0.05 was considered statistically significant. Results. 389 patients were included, giving 130 pairs of tests for comparison. Excellent correlation was found in the whole group (r = 0.95) and in the subgroups (r = 0.94, 0.69, and 0.96, respectively). Small LOA between the two imaging modalities were found in all subgroups. Conclusion. Small aneurysms can be accurately measured using CDU. CDU is preferable for small AAAs, but cannot supplant CT for planning aortic intervention.
Bruce T. Milne
2017-05-01
Full Text Available Stream networks are branched structures wherein water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal optimum entropy coincident with high biodiversity. Basins on either side of optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to Lagrange multipliers used in the extremum function leading to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and inter-annual variation in water balance that challenge vegetation to match expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land use changes that move ecosystems away from optimal entropy with concomitant loss of productivity and biodiversity.
Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
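The claim above, that maximizing Shannon entropy subject only to a mean constraint yields an exponential distribution, can be checked numerically on a discretized support; the grid, the target mean value, and the bisection bounds below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Discretized (nonnegative) velocity support, in arbitrary units.
x = np.linspace(0.0, 10.0, 2001)
target_mean = 1.0   # the single moment constraint: mean particle velocity

# Maximum entropy under a mean constraint gives p(x) proportional to
# exp(-lam * x); solve for lam by bisection so the discrete mean matches.
def mean_for(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return (p * x).sum()

lo, hi = 1e-6, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid          # mean too large: need faster exponential decay
    else:
        hi = mid
lam = 0.5 * (lo + hi)

p = np.exp(-lam * x)
p /= p.sum()
entropy = -(p * np.log(p)).sum()   # the maximal entropy under this constraint
```

With unit mean the recovered rate is lam close to 1, i.e. the exponential density e^(-x), matching the distribution the abstract derives for particle velocities and travel times.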
Physical entropy, information entropy and their evolution equations
Anonymous
2001-01-01
Inspired by the evolution equation of nonequilibrium statistical physics entropy and the concise statistical formula of the entropy production rate, we develop a theory of dynamic information entropy and build a nonlinear evolution equation of the information entropy density changing in time and state-variable space. Its mathematical form and physical meaning are similar to those of the evolution equation of the physical entropy: the time rate of change of information entropy density originates jointly from drift, diffusion, and production. The concise statistical formula of the information entropy production rate is likewise similar to that of physical entropy. Furthermore, we study the similarities and differences between physical entropy and information entropy and the possible unification of the two statistical entropies, and discuss the relationship among the principle of entropy increase, the principle of equilibrium maximum entropy, and the principle of maximum information entropy, as well as the connection between these principles and the entropy evolution equation.
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
Maximum Entropy Threshold Segmentation Algorithm Based on 2D-WLDH
邹小林
2012-01-01
The traditional 2D maximum entropy threshold segmentation algorithm rests on the insufficiently justified assumption that the probabilities in the main-diagonal region of the 2D histogram sum to approximately one, and the algorithm is time-consuming. To address this problem, a new maximum entropy segmentation algorithm is proposed in this paper. Based on gray levels and Weber Local Descriptors (WLD), it constructs a 2D WLD Histogram (2D-WLDH) and applies it to maximum entropy threshold segmentation. To further improve the speed of the proposed algorithm, a fast recursive algorithm is derived. Experimental results show that, compared with existing algorithms, the proposed algorithm reduces running time and achieves better segmentation quality.
Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.
2016-08-01
We study the average energy (or particle) density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample, by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.
Ferreira,Amanda de Freitas; Henriques,João César Guimarães; Almeida,Guilherme de Araújo; Machado,Asbel Rodrigues; Machado, Naila Aparecida de Godoi; Fernandes Neto,Alfredo Júlio
2009-01-01
This research consisted of a quantitative assessment, and aimed to measure the possible discrepancies between the maxillomandibular positions for centric relation (CR) and maximum intercuspation (MI), using volumetric cone beam computed tomography (cone beam method). The sample of the study consisted of 10 asymptomatic young adult patients divided into two types of standard occlusion: normal occlusion and Angle Class I occlusion. In order to obtain the centric relation, a JIG device and mandible manipulation were used to deprogram the habitual conditions of the jaw.
N. Ranjbar
2016-09-01
Full Text Available Knowledge of species' habitat needs is considered one of the requirements of wildlife management. We studied seasonal habitat suitability and habitat associations of the wild goat (Capra aegagrus) in Kolah-Qazi National Park, one of its typical habitats in central Asia, using the Maximum Entropy approach. The study area was confined to mountainous areas as the potential habitat of the wild goat. Elevation, distance to water sources, distance to human settlements, and distance to guard patrol roads were recognised as the most important variables determining habitat suitability of the species. The extent of suitable habitats was greatest in spring (3882.25 ha) and least in summer (1362.5 ha). The AUC values of MaxEnt revealed acceptable to good efficiency (AUC ≥ 0.7). The obtained results may have implications for conservation of the wild goat in similar habitats across its distribution range.
The Maximum Entropy Method for Reliability Analysis of Slope Engineering
王宇; 张慧; 贾志刚
2012-01-01
The maximum entropy method for reliability analysis of slope engineering maximizes entropy using the partial information contained in available samples, making full use of the higher-order moment information of the random variables: the probability density function of the slope reliability performance function is inferred from the sample moments, and the failure probability of the slope is then computed. The method places no special requirements on the distributions of the basic random variables, and it avoids the defect of conventional methods, which approximate non-normal random variables by equivalent normal ones at the iteration points during computation. Since the true probability density function of the performance function is usually difficult, or even impossible, to obtain, the Pearson curve family is introduced to solve for the higher-order moments of the geotechnical parameter random variables, from which the higher-order central moments of the performance function are easily obtained. The maximum entropy density function of the performance function is then fitted on the basis of the maximum entropy principle, and the interval truncation method and Gauss-Kronrod numerical integration are used to determine, respectively, the Lagrange multipliers of the maximum entropy density and the failure probability of the slope. Example analysis shows that the method is computationally efficient and reliable, overcoming the complexity and low accuracy of traditional solution procedures; applied to the reliability analysis of engineering slopes, it has considerable potential for development and practical value.
Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio
2015-12-21
A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The developed algorithm allows the analysis of complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays both of multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy greater than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies that are lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of the flavin adenine dinucleotide (FAD).
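The outer loop of such a scheme, a 1D Brent-type search over the zero-time shift with an inner least-squares fit at each candidate shift, can be sketched as follows. The single-exponential model, fixed known lifetime, sampling grid, and noise level are simplifying assumptions for illustration, not the paper's full MEM analysis:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Simulated single-exponential decay (tau = 400 ps) with a zero-time
# shift t0 = 50 ps and additive Gaussian noise; values are illustrative.
t = np.arange(0.0, 2000.0, 4.0)          # time axis in ps
tau_true, t0_true = 400.0, 50.0
signal = np.where(t >= t0_true, np.exp(-(t - t0_true) / tau_true), 0.0)
y = signal + rng.normal(0.0, 0.01, t.size)

def chi2(shift):
    """Reduced chi-squared after the best linear amplitude fit at this shift."""
    model = np.where(t >= shift, np.exp(-(t - shift) / tau_true), 0.0)
    amp = (model @ y) / (model @ model)   # closed-form least-squares amplitude
    resid = y - amp * model
    return (resid @ resid) / (t.size - 2)

# Outer bounded Brent-type search over the zero-time shift, standing in for
# the Brent step of the paper's iterative MEM scheme.
res = minimize_scalar(chi2, bounds=(0.0, 200.0), method='bounded')
shift_hat = res.x
```

In the paper the inner step is an entropy-maximizing fit of a whole lifetime distribution rather than a single amplitude, but the nesting, with the shift as the one scalar handled by Brent's method, is the same idea.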
Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad
2014-09-01
α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN-Hα and HN-HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield best agreement to the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20-30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20-40%) than seen in the database. A generally lower population of the αR region (10-20%) is found. Analysis of 1H-1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein.
Dewar, Roderick [Unité de Bioclimatologie, INRA Centre de Bordeaux, BP 81, 33883 Villenave d'Ornon (France)]
2003-01-24
Jaynes' information theory formalism of statistical mechanics is applied to the stationary states of open, non-equilibrium systems. First, it is shown that the probability distribution p_γ of the underlying microscopic phase-space trajectories γ over a time interval of length τ satisfies p_γ ∝ exp(τσ_γ/2k_B), where σ_γ is the time-averaged rate of entropy production of γ. Three consequences of this result are then derived: (1) the fluctuation theorem, which describes the exponentially declining probability of deviations from the second law of thermodynamics as τ → ∞; (2) the selection principle of maximum entropy production for non-equilibrium stationary states, empirical support for which has been found in studies of phenomena as diverse as the Earth's climate and crystal growth morphology; and (3) the emergence of self-organized criticality for flux-driven systems in the slowly-driven limit. The explanation of these results on general information theoretic grounds underlines their relevance to a broad class of stationary, non-equilibrium systems. In turn, the accumulating empirical evidence for these results lends support to Jaynes' formalism as a common predictive framework for equilibrium and non-equilibrium statistical mechanics.
Haijing Niu; Ping Guo; Xiaodong Song; Tianzi Jiang
2008-01-01
The sensitivity of diffuse optical tomography (DOT) imaging decreases exponentially with increasing photon penetration depth, which leads to poor depth resolution for DOT. In this letter, an exponential adjustment method (EAM) based on the maximum singular value of the layered sensitivity is proposed. Optimal depth resolution can be achieved by compensating for the reduced sensitivity in the deep medium. Simulations are performed using a semi-infinite model, and the results show that EAM can substantially improve the depth resolution of deeply embedded objects in the medium. Consequently, the image quality and the reconstruction accuracy for these objects are largely improved.
Moua, Yi; Roux, Emmanuel; Girod, Romain; Dusfour, Isabelle; de Thoisy, Benoit; Seyler, Frédérique; Briolant, Sébastien
2016-12-22
Malaria is an important health issue in French Guiana. Its principal mosquito vector in this region is Anopheles darlingi Root. Knowledge of the spatial distribution of this species is still very incomplete due to the extent of French Guiana and the difficulty of accessing most of the territory. Species distribution modeling based on the maximal entropy procedure was used to predict the spatial distribution of An. darlingi using 39 presence sites. The resulting model provided significantly high prediction performances (mean 10-fold cross-validated partial area under the curve and continuous Boyce index equal to 1.11, with an omission error level of 20%, and 0.42, respectively). The model also provided a habitat suitability map and environmental response curves in accordance with the known entomological situation. Several environmental characteristics that had a positive correlation with the presence of An. darlingi were highlighted: non-permanent anthropogenic changes of the natural environment, the presence of roads and tracks, and opening of the forest. Some geomorphological landforms and high-altitude landscapes appear to be unsuitable for An. darlingi. The species distribution modeling was able to reliably predict the distribution of suitable habitats for An. darlingi in French Guiana. The results complete the knowledge of the spatial distribution of the principal malaria vector in this Amazonian region and identify the main factors that favor its presence. They should contribute to the definition of a targeted vector control strategy in a malaria pre-elimination stage, and allow extrapolation of the acquired knowledge to other Amazonian or malaria-endemic contexts.
Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.
2010-05-01
One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures, and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives.
Rong Jiang
2014-09-01
Full Text Available As the early design-stage decision structure, a software architecture plays a key role in the quality of the final software product and of the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making scientific and reasonable decisions on the architecture, which are necessary for the construction of highly trustworthy software. Given the lack of trustworthiness evaluation and measurement studies for software architecture, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes using the Principle of Maximum Entropy (POME) and the Grey Decision-making Method (GDMM) as the trustworthiness evaluation method for a software architecture, demonstrates the scientific soundness and rationality of this method, and verifies its feasibility through case analysis.
Shigemitsu, Yoshiki; Ikeya, Teppei; Yamamoto, Akihiro; Tsuchie, Yuusuke; Mishima, Masaki; Smith, Brian O; Güntert, Peter; Ito, Yutaka
2015-02-06
Despite their advantages in analysis, 4D NMR experiments are still infrequently used as a routine tool in protein NMR projects due to the long duration of the measurement and limited digital resolution. Recently, new acquisition techniques for speeding up multidimensional NMR experiments, such as nonlinear sampling, in combination with non-Fourier transform data processing methods have been proposed to be beneficial for 4D NMR experiments. Maximum entropy (MaxEnt) methods have been utilised for reconstructing nonlinearly sampled multi-dimensional NMR data. However, the artefacts arising from MaxEnt processing, particularly in NOESY spectra, have not yet been clearly assessed in comparison with other methods, such as quantitative maximum entropy, multidimensional decomposition, and compressed sensing. We compared MaxEnt with other methods in reconstructing 3D NOESY data acquired with variously reduced sparse sampling schedules and found that MaxEnt is robust, quick and competitive with other methods. Next, nonlinear sampling and MaxEnt processing were applied to 4D NOESY experiments, and the effect of the artefacts of MaxEnt was evaluated by calculating 3D structures from the NOE-derived distance restraints. Our results demonstrated that sufficiently converged and accurate structures (RMSD of 0.91 Å to the mean and 1.36 Å to the reference structures) were obtained even with NOESY spectra reconstructed from 1.6% randomly selected sampling points for indirect dimensions. This suggests that 3D MaxEnt processing in combination with nonlinear sampling schedules is still a useful and advantageous option for rapid acquisition of high-resolution 4D NOESY spectra of proteins.
Maximum Entropy Method of Image Segmentation Based on Genetic Algorithm
王文渊; 王芳梅
2011-01-01
The traditional entropy-based thresholding method has theoretical shortcomings and high computational complexity, making image segmentation time-consuming and inefficient. To improve the efficiency and accuracy of image segmentation, a method is proposed that combines an improved genetic algorithm with the maximum entropy algorithm. First, a two-dimensional histogram based on the image gray-value information is used to extract features; then the three genetic operations of selection, crossover, and mutation search for the optimal segmentation threshold. Simulation results show that, compared with the traditional maximum entropy image segmentation algorithm, the improved algorithm increases segmentation efficiency, greatly improves segmentation accuracy, and speeds up segmentation.
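The selection/crossover/mutation loop described above can be sketched with a tiny elitist GA over 8-bit threshold chromosomes. The Kapur-style 1D entropy fitness and all GA parameters here are illustrative stand-ins for the paper's 2D-histogram criterion:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic gray-level histogram with two modes (stand-in for a real image).
img = np.concatenate([rng.normal(70, 10, 5000),
                      rng.normal(170, 12, 5000)]).clip(0, 255).astype(int)
p = np.bincount(img, minlength=256) / img.size

def entropy_fitness(t):
    """Kapur-style fitness: summed entropies of the two classes split at t."""
    p0, p1 = p[:t].sum(), p[t:].sum()
    if p0 == 0 or p1 == 0:
        return 0.0
    q0, q1 = p[:t] / p0, p[t:] / p1
    ent = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return ent(q0) + ent(q1)

# Tiny elitist GA: 8-bit chromosomes directly encode thresholds 1..255.
pop = rng.integers(1, 255, size=20)
for _ in range(40):
    fit = np.array([entropy_fitness(t) for t in pop])
    pop = pop[np.argsort(fit)[-10:]]                  # selection: keep best half
    pa, pb = rng.choice(pop, 10), rng.choice(pop, 10)
    kids = (pa & 0b11110000) | (pb & 0b00001111)      # crossover: mix bit halves
    kids = kids ^ (1 << rng.integers(0, 8, size=10))  # mutation: flip one bit
    pop = np.concatenate([pop, np.clip(kids, 1, 255)])
best = int(pop[np.argmax([entropy_fitness(t) for t in pop])])
```

Because the best individuals always survive, the GA never loses its incumbent threshold, which is the property that lets it beat exhaustive search on larger (e.g. 2D) threshold spaces.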
Ferreira, Amanda de Freitas; Henriques, João César Guimarães; Almeida, Guilherme Araújo; Machado, Asbel Rodrigues; Machado, Naila Aparecida de Godoi; Fernandes Neto, Alfredo Júlio
2009-01-01
This research consisted of a quantitative assessment, and aimed to measure the possible discrepancies between the maxillomandibular positions for centric relation (CR) and maximum intercuspation (MI), using computed tomography volumetric cone beam (cone beam method). The sample of the study consisted of 10 asymptomatic young adult patients divided into two types of standard occlusion: normal occlusion and Angle Class I occlusion. In order to obtain the centric relation, a JIG device and mandible manipulation were used to deprogram the habitual conditions of the jaw. The evaluations were conducted in both frontal and lateral tomographic images, showing the condyle/articular fossa relation. The images were processed in the software included in the NewTom 3G device (QR NNT software version 2.00), and 8 tomographic images were obtained per patient, four laterally and four frontally exhibiting the TMA's (in CR and MI, on both sides, right and left). By means of tools included in another software, linear and angular measurements were performed and statistically analyzed by student t test. According to the methodology and the analysis performed in asymptomatic patients, it was not possible to detect statistically significant differences between the positions of centric relation and maximum intercuspation. However, the resources of cone beam tomography are of extreme relevance to the completion of further studies that use heterogeneous groups of samples in order to compare the results.
Escalante Sandoval, Carlos A.; Dominguez Esquivel, Jose Y. [Universidad Nacional Autonoma de Mexico (Mexico)
2001-09-01
The principle of maximum entropy (POME) is used to derive an alternative method of parameter estimation for the bivariate extreme-value distribution with Gumbel marginals. A simple algorithm for this parameter estimation technique is presented. The method is applied to the analysis of maximum 24-hour precipitation in a region of Mexico, and the design events are compared with those obtained by the maximum likelihood procedure. According to the results, the proposed technique is a suitable option to be considered when performing frequency analysis of precipitation with small samples.
Drought frequency analysis using stochastic simulation with a maximum entropy model
张明; 金菊良; 王国庆; 周润娟
2013-01-01
To improve the reliability of drought frequency analysis, a method based on stochastic simulation with a maximum entropy distribution is proposed. The model first removes the dependent components of the annual runoff series with an autoregressive model, computes the sample moments of each order for the residual series, and solves for the maximum entropy probability density function of the residuals with an accelerated genetic algorithm, yielding a maximum entropy stochastic model of the annual runoff series of the study region. Monte Carlo simulation is then used to generate long and short synthetic annual runoff series, and after comparing their simulated statistics, run-length analysis is applied to obtain the drought frequency of the region. A regional case study shows that the simulated statistics of the maximum entropy distribution are close to those of the P-III distribution, confirming the accuracy of the maximum entropy simulation. Run-length analysis of a simulated 10 000-year annual runoff series at the Wenjiachuan station in the Kuye river basin indicates that the probability of a twelve-year sequence of severe drought is 2.6%, a return period of 203 years. Because the maximum entropy distribution of the residual series does not assume a theoretical distribution type in advance, the approach is broadly applicable to the simulation and analysis of precipitation, runoff and other variables in water resource systems.
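The core step above, solving for a maximum entropy density under sample-moment constraints, can be sketched as follows. This toy version matches only the first two moments on a discrete grid and finds the Lagrange multipliers by plain gradient descent on the convex dual (the paper instead uses an accelerated genetic algorithm and a runoff residual series); all names and grid choices are illustrative:

```python
import numpy as np

def maxent_from_moments(x, targets, iters=5000, lr=0.1):
    """Discrete maximum-entropy pmf on grid x matching E[x] and E[x^2].

    p_i is proportional to exp(l1*x_i + l2*x_i**2); the multipliers are
    found by gradient descent on the dual, whose gradient is
    (targets - model moments).
    """
    feats = np.stack([x, x ** 2])   # one row per constraint function
    lam = np.zeros(2)
    for _ in range(iters):
        w = np.exp(lam @ feats)
        p = w / w.sum()
        lam += lr * (targets - feats @ p)
    w = np.exp(lam @ feats)
    return w / w.sum()

x = np.linspace(-5.0, 5.0, 201)
p = maxent_from_moments(x, targets=np.array([0.0, 1.0]))
mean = float(x @ p)
second = float((x ** 2) @ p)
```

With mean 0 and second moment 1, the result is a discretized Gaussian, the classic maximum entropy solution; higher-order moment constraints are added by appending rows to `feats`.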
Image Segmentation Based on Spatial-Pattern Clustering and the Maximum Entropy Method
陈秋红; 沈云琴
2012-01-01
The paper studies the image segmentation optimization problem. Because of computational complexity and other factors, many image segmentation algorithms suffer from low resolution and poor clarity, and when an image contains a large amount of information, segmentation is very time-consuming. To segment images effectively, a method combining spatial-pattern clustering with the maximum entropy principle is proposed. First, the maximum entropy algorithm is used to segment the image, and a feature vector is defined for each entropy region. From these features, the Euclidean and spatial distances between similar regions are computed to determine the distances between cluster-center pixels. The segmented regions are then merged by spatial-pattern clustering, and the image is binarized. Simulations show that, compared with traditional image segmentation, the method improves segmentation efficiency and produces clear region edges, demonstrating its feasibility and effectiveness.
董新峰; 李郝林; 余慧杰
2013-01-01
The maximum entropy principle and the discrimination information method were applied to analyze degradation of the workpiece spindle of an M1432B grinding machine in the X2 direction. The maximum entropy principle was used to obtain the maximum entropy probability density distribution of the spindle vibration signals recorded from April to October; discrimination information was then computed for the changes in this distribution, and the change in discrimination information was used to judge the state of the spindle system. The results show that the workpiece spindle of the M1432B grinding machine undergoes tiny degradation in the X2 direction.
李红; 雷志勇
2011-01-01
Maximum entropy spectral estimation combined with an LMS adaptive algorithm is proposed to extract the reflected echo signal of a laser ranging system. The signal-detection principle of maximum entropy spectral estimation is studied, the Burg algorithm is used to obtain the parameters of the AR model, and an LMS adaptive filter is designed to extract the useful signal from the weak echo. The application of Burg maximum entropy spectral estimation to echo-signal detection in laser ranging systems is analyzed. Simulation shows that the combination of maximum entropy spectral estimation and the LMS adaptive algorithm can effectively extract the laser echo signal from background noise.
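A minimal sketch of the Burg step named above, assuming a synthetic echo (a noisy sinusoid) rather than real laser-ranging data; the AR order, signal frequency, and function names are all hypothetical choices:

```python
import numpy as np

def burg(x, order):
    """Burg's method: AR coefficients for maximum entropy spectral estimation."""
    x = np.asarray(x, float)
    f = x.copy()           # forward prediction errors
    b = x.copy()           # backward prediction errors
    a = np.array([1.0])    # AR polynomial, a[0] = 1
    e = float(x @ x) / len(x)
    for _ in range(order):
        fk, bk = f[1:], b[:-1]
        k = -2.0 * (fk @ bk) / (fk @ fk + bk @ bk)  # reflection coefficient
        f, b = fk + k * bk, bk + k * fk
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k * k
    return a, e

def ar_spectrum(a, e, freqs):
    """Power spectrum of the fitted AR model at normalized frequencies."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return e / np.abs(z @ a) ** 2

# sinusoid at normalized frequency 0.1 buried in a little noise
rng = np.random.default_rng(1)
n = np.arange(512)
sig = np.sin(2 * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(512)
a, e = burg(sig, order=8)
freqs = np.linspace(0.0, 0.5, 501)
spec = ar_spectrum(a, e, freqs)
peak = freqs[np.argmax(spec)]
```

The AR spectrum e/|A(f)|^2 is the maximum entropy spectral estimate for the fitted order; the peak recovers the echo's frequency.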
Kiatgamolchai, S
2000-01-01
gamma has a bowl shape with the minimum at x approx 0.25-0.3. These characteristics suggest a possible influence of alloy disorder scattering. The mobilities and activation energies of the carriers in the boron-doped cap vary between samples, and this is believed to be due to a boron spike near the Si/Si-substrate interface in some samples. The source of the electron-like carrier is presently unknown. Magnetotransport properties of modulation-doped p-type Si1-xGex/Si and Si1-xGex/Si1-yGey heterostructures were studied in the magnetic field range 0-12 T and in the temperature range 0.35-300 K. The experimental data within the classical regime have been analysed by mobility spectrum analysis in order to separate the influences of different parallel conduction paths. A new method of mobility spectrum analysis, based on the concept of maximum entropy, has been developed, and this computation has been shown to overcome several drawbacks or limita...
Huang, S.-Y.; Wang, J.
2016-07-01
A coupled force-restore model of surface soil temperature and moisture (FRMEP) is formulated by incorporating the maximum entropy production model of surface heat fluxes and including the gravitational drainage term. The FRMEP model, driven by surface net radiation and precipitation, is independent of near-surface atmospheric variables and has reduced sensitivity to the uncertainties of model input and parameters compared with the classical force-restore models (FRM). The FRMEP model was evaluated using observations from two field experiments with contrasting soil moisture conditions. The modeling errors of the FRMEP-predicted surface temperature and soil moisture are lower than those of the classical FRMs forced by observed or bulk-formula-based surface heat fluxes (bias 1~2°C versus ~4°C, 0.02 m3 m-3 versus 0.05 m3 m-3). The diurnal variations of surface temperature, soil moisture, and surface heat fluxes are well captured by the FRMEP model, as indicated by the high correlations between the model predictions and observations (r ≥ 0.84). Our analysis suggests that the drainage term cannot be neglected under wet soil conditions. A 1 year simulation indicates that the FRMEP model captures the seasonal variation of surface temperature and soil moisture with bias less than 2°C and 0.01 m3 m-3 and correlation coefficients of 0.93 and 0.9 with observations, respectively.
Hsia, Wei Shen
1989-01-01
A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved and tested on a test problem, producing a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed in order to obtain the required projection matrix, the program was successful up to the finding of the Linear Quadratic Gaussian solution but not beyond. Numerical results along with computer programs which employed ORACLS are presented.
S. Y. Ma
Full Text Available In this paper, case studies of the propagation characteristics of two TIDs are presented, induced by atmospheric gravity waves in the auroral F-region on a magnetically quiet day. By means of maximum entropy cross-spectral analysis of EISCAT CP2 data, apparent full wave-number vectors of the TIDs are obtained as a function of height. The analysis results show that the two events considered can be classified as a moderately large-scale TID and a medium-scale TID, respectively. One exhibits a dominant period of about 72 min, a mean horizontal phase speed of about 180 m/s (corresponding to a horizontal wavelength of about 780 km) directed south-eastwards, and a vertical phase speed of 55 m/s at a height of about 300 km. The other shows a dominant period of 44 min, a mean horizontal phase velocity of about 160 m/s (corresponding to a horizontal wavelength of about 420 km) directed south-westwards, and a vertical phase velocity of about 50 m/s at 250 km altitude.
Key words. Ionosphere · Auroral ionosphere · Ionosphere-atmosphere interactions · Wave propagation
Pakdad, Kamran; Hanafi-Bojd, Ahmad Ali; Vatandoost, Hassan; Sedaghat, Mohammad Mehdi; Raeisi, Ahmad; Moghaddam, Abdolreza Salahi; Foroushani, Abbas Rahimi
2017-05-01
Malaria is considered a major public health problem in southern areas of Iran. The goal of this study was to predict the best ecological niches of three main malaria vectors of Iran: Anopheles stephensi, Anopheles culicifacies s.l. and Anopheles fluviatilis s.l. A databank was created that included all published data about Anopheles species of Iran from 1961 to 2015. The suitable environmental niches for the three above-mentioned Anopheles species were predicted using the maximum entropy model (MaxEnt). AUC (area under the ROC curve) values were 0.943, 0.974 and 0.956 for An. stephensi, An. culicifacies s.l. and An. fluviatilis s.l., respectively, indicating the high predictive power of the model for species niches. The biggest bioclimatic contributor for An. stephensi and An. fluviatilis s.l. was bio 15 (precipitation seasonality), at 25.5% and 36.1% respectively, followed by bio 1 (annual mean temperature) at 20.8% for An. stephensi; bio 4 (temperature seasonality) contributed 49.4% for An. culicifacies s.l. This is the first step in mapping the country's malaria vectors; future climate conditions can change the dispersal maps of Anopheles. Iran is in the elimination phase of malaria, so such spatio-temporal studies are essential and could provide guidelines for decision makers on IVM strategies in problematic areas. Copyright © 2017 Elsevier B.V. All rights reserved.
Application of the Maximum Entropy Method to Combined Option Pricing
董莹; 季鑫
2012-01-01
On the basis of European options, the maximum entropy method is used to obtain an unbiased probability distribution with which combined options are priced. The self-financing, no-arbitrage market principle serves as the basis of the valuation, and, with a risk-free asset simultaneously present, prices are solved through the combined application of a penalty function method and the BFGS algorithm, making the pricing of combined options more accurate.
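The idea above, choosing the least-biased (maximum entropy) risk-neutral distribution consistent with quoted prices, can be sketched on a discrete price grid. This version enforces a forward constraint and one call quote by gradient descent on the convex dual, rather than the paper's penalty function with BFGS; the grid, strike and quoted prices are made up for illustration, and a zero interest rate is assumed so no discounting appears:

```python
import numpy as np

def maxent_risk_neutral(s, feats, targets, iters=50000, lr=0.5):
    """Least-biased pmf over terminal prices s matching quoted payoffs.

    feats rows are payoff functions on the grid, targets their observed
    prices. The multipliers are found by gradient descent on the dual.
    """
    f = np.asarray(feats, float)
    scale = np.abs(f).max(axis=1, keepdims=True)
    f = f / scale                    # crude preconditioning
    t = np.asarray(targets, float) / scale[:, 0]
    lam = np.zeros(len(f))
    for _ in range(iters):
        z = lam @ f
        w = np.exp(z - z.max())      # stabilized exponential weights
        p = w / w.sum()
        lam += lr * (t - f @ p)
    return p

s = np.linspace(50.0, 200.0, 301)            # terminal-price grid
payoffs = [s, np.maximum(s - 110.0, 0.0)]    # forward and a 110-strike call
quotes = [100.0, 4.0]                        # spot 100, call quoted at 4
p = maxent_risk_neutral(s, payoffs, quotes)
fwd = float(s @ p)
call = float(np.maximum(s - 110.0, 0.0) @ p)
```

Once fitted, any combined option is priced by taking its payoff's expectation under `p`.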
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
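A minimal illustration of the steepest-descent dynamics this paper starts from: fitting a pairwise maximum entropy model (here over three 0/1 units, small enough that the model statistics are exact rather than Gibbs-sampled) by moving the parameters along data-minus-model statistics, which is the log-likelihood gradient. The rectification of parameter space is not reproduced; system size, learning rate and parameter scales are illustrative:

```python
import numpy as np
from itertools import product

states = np.array(list(product([0, 1], repeat=3)), float)  # all 2^3 patterns

def model_stats(h, J):
    """Exact means and pairwise correlations of a 3-unit pairwise model."""
    energy = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    mean = states.T @ p
    corr = np.einsum('s,si,sj->ij', p, states, states)
    return mean, corr

rng = np.random.default_rng(2)
h_true = rng.normal(0.0, 0.5, 3)
J_true = np.triu(rng.normal(0.0, 0.5, (3, 3)), 1)  # couplings, i < j
m_data, c_data = model_stats(h_true, J_true)       # stand-in for data stats

# steepest ascent on the log-likelihood: gradient = data stats - model stats
h, J = np.zeros(3), np.zeros((3, 3))
for _ in range(20000):
    m, c = model_stats(h, J)
    h += 0.5 * (m_data - m)
    J += 0.5 * np.triu(c_data - c, 1)
```

Because the log-likelihood is strictly concave in these parameters, the plain dynamics recovers the generating parameters; the paper's point is that its convergence can be very slow along ill-conditioned directions for real data.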
Ghammraoui, Bahaa; Badal, Andreu; Popescu, Lucretiu M
2016-04-21
Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter cross section of the investigated object revealing structural information of tissue under investigation. In the original CSCT proposals the reconstruction of images from coherently scattered x-rays is done at each scattering angle separately using analytic reconstruction. In this work we develop a maximum likelihood estimation of scatter components algorithm (ML-ESCA) that iteratively reconstructs images using a few material component basis functions from coherent scatter projection data. The proposed algorithm combines the measured scatter data at different angles into one reconstruction equation with only a few component images. Also, it accounts for data acquisition statistics and physics, modeling effects such as polychromatic energy spectrum and detector response function. We test the algorithm with simulated projection data obtained with a pencil beam setup using a new version of MC-GPU code, a Graphical Processing Unit version of PENELOPE Monte Carlo particle transport simulation code, that incorporates an improved model of x-ray coherent scattering using experimentally measured molecular interference functions. The results obtained for breast imaging phantoms using adipose and glandular tissue cross sections show that the new algorithm can separate imaging data into basic adipose and water components at radiation doses comparable with Breast Computed Tomography. Simulation results also show the potential for imaging microcalcifications. Overall, the component images obtained with ML-ESCA algorithm have a less noisy appearance than the images obtained with the conventional filtered back projection algorithm for each individual scattering angle. An optimization study for x-ray energy range selection for breast CSCT is also presented.
Prathapa, Siriyara Jagannatha; Mondal, Swastik; van Smaalen, Sander
2013-04-01
Dynamic model densities according to Mondal et al. [(2012), Acta Cryst. A68, 568-581] are presented for independent atom models (IAM), IAMs after high-order refinements (IAM-HO), invariom (INV) models and multipole (MP) models of α-glycine, DL-serine, L-alanine and Ala-Tyr-Ala at T ≃ 20 K. Each dynamic model density is used as prior in the calculation of electron density according to the maximum entropy method (MEM). We show that at the bond-critical points (BCPs) of covalent C-C and C-N bonds the IAM-HO and INV priors produce reliable MEM density maps, including reliable values for the density and its Laplacian. The agreement between these MEM density maps and dynamic MP density maps is less good for polar C-O bonds, which is explained by the large spread of values of topological descriptors of C-O bonds in static MP densities. The density and Laplacian at BCPs of hydrogen bonds have similar values in MEM density maps obtained with all four kinds of prior densities. This feature is related to the smaller spatial variation of the densities in these regions, as expressed by small magnitudes of the Laplacians and the densities. It is concluded that the use of the IAM-HO prior instead of the IAM prior leads to improved MEM density maps. This observation shows interesting parallels to MP refinements, where the use of the IAM-HO as an initial model is the accepted procedure for solving MP parameters. A deconvolution of thermal motion and static density that is better than the deconvolution of the IAM appears to be necessary in order to arrive at the best MP models as well as at the best MEM densities.
Nagarajan, Rajakumar; Iqbal, Zohaib; Burns, Brian; Wilson, Neil E; Sarma, Manoj K; Margolis, Daniel A; Reiter, Robert E; Raman, Steven S; Thomas, M Albert
2015-11-01
The overlap of metabolites is a major limitation in one-dimensional (1D) spectral-based single-voxel MRS and multivoxel-based MRSI. By combining echo planar spectroscopic imaging (EPSI) with a two-dimensional (2D) J-resolved spectroscopic (JPRESS) sequence, 2D spectra can be recorded in multiple locations in a single slice of prostate using four-dimensional (4D) echo planar J-resolved spectroscopic imaging (EP-JRESI). The goal of the present work was to validate two different non-linear reconstruction methods independently using compressed sensing-based 4D EP-JRESI in prostate cancer (PCa): maximum entropy (MaxEnt) and total variation (TV). Twenty-two patients with PCa with a mean age of 63.8 years (range, 46-79 years) were investigated in this study. A 4D non-uniformly undersampled (NUS) EP-JRESI sequence was implemented on a Siemens 3-T MRI scanner. The NUS data were reconstructed using two non-linear reconstruction methods, namely MaxEnt and TV. Using both TV and MaxEnt reconstruction methods, the following observations were made in cancerous compared with non-cancerous locations: (i) higher mean (choline + creatine)/citrate metabolite ratios; (ii) increased levels of (choline + creatine)/spermine and (choline + creatine)/myo-inositol; and (iii) decreased levels of (choline + creatine)/(glutamine + glutamate). We have shown that it is possible to accelerate the 4D EP-JRESI sequence by four times and that the data can be reliably reconstructed using the TV and MaxEnt methods. The total acquisition duration was less than 13 min and we were able to detect and quantify several metabolites.
Feng, Jinian
2017-01-01
Climate change will markedly impact the biology, population ecology, and spatial distribution patterns of insect pests because of the influence of future greenhouse effects on insect development and population dynamics. Onion maggot, Delia antiqua, larvae are subterranean pests with limited mobility that feed directly on bulbs of Allium sp. and render them completely unmarketable. Modeling the spatial distribution of such a widespread and damaging pest is crucial not only to identify currently suitable climatic areas but also to predict where the pest is likely to spread in the future, so that appropriate monitoring and management programs can be developed. In this study, Maximum Entropy Niche Modeling was used to estimate the current potential distribution of D. antiqua and to predict the future distribution of this species in 2030, 2050, 2070 and 2080 under emission scenario A2 with 7 climate variables. The results of this study show that highly suitable habitats for D. antiqua currently occur throughout most of East Asia, some regions of North America, Western Europe, and Western Asian countries near the Caspian Sea and Black Sea. In the future, we predict an even broader distribution, with this pest spreading more extensively throughout Asia, North America and Europe, particularly in most European countries, the central United States and much of East Asia. Our present-day and future predictions can enhance strategic planning of agricultural organizations by identifying regions that will need to develop Integrated Pest Management programs to manage the onion maggot. The distribution forecasts will also help governments optimize economic investments in management programs for this pest by identifying regions that are, or will become, less suitable for current and future infestations. PMID:28158259
Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A
2013-01-01
Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.
Samy, Ali; Dinnebier, Robert E; van Smaalen, Sander; Jansen, Martin
2010-04-01
In a systematic approach, the ability of the Maximum Entropy Method (MEM) to reconstruct the most probable electron density of highly disordered crystal structures from X-ray powder diffraction data was evaluated. As a case study, the ambient temperature crystal structures of disordered α-Rb2[C2O4] and α-Rb2[CO3] and ordered δ-K2[C2O4] were investigated in detail with the aim of revealing the 'true' nature of the apparent disorder. Different combinations of F (based on phased structure factors) and G constraints (based on structure-factor amplitudes) from different sources were applied in MEM calculations. In particular, a new combination of the MEM with the recently developed charge-flipping algorithm with histogram matching for powder diffraction data (pCF) was successfully introduced to avoid the inevitable bias of the phases of the structure-factor amplitudes by the Rietveld model. Completely ab initio electron-density distributions have been obtained with the MEM applied to a combination of structure-factor amplitudes from Le Bail fits with phases derived from pCF. All features of the crystal structures, in particular the disorder of the oxalate and carbonate anions, and the displacements of the cations, are clearly obtained. This approach bears the potential of a fast method of electron-density determination, even for highly disordered materials. All the MEM maps obtained in this work were compared with the MEM map derived from the best Rietveld refined model. In general, the phased observed structure factors obtained from Rietveld refinement (applying F and G constraints) were found to give the closest description of the experimental data and thus lead to the most accurate image of the actual disorder.
Lombardo, L.
2016-07-18
This study aims at evaluating the performance of the maximum entropy method in assessing landslide susceptibility, exploiting topographic and multispectral remote sensing predictors. We selected the catchment of the Giampilieri stream, located in the north-eastern sector of Sicily (southern Italy), as the test site. On 1 October 2009, storm rainfall triggered hundreds of debris flow/avalanche phenomena in this area, causing extensive economic damage and loss of life. Within this area a presence-only statistical method was applied to obtain susceptibility models capable of distinguishing future activation sites of debris flows and debris slides, which were the main source failure mechanisms for flow- or avalanche-type propagation. The set of predictors used in this experiment comprised primary and secondary topographic attributes derived by processing a high-resolution digital elevation model, CORINE land cover data, and a set of vegetation and mineral indices obtained by processing multispectral ASTER images. All the selected data sources predate the disaster. A spatially random partition technique was adopted for validation, generating fifty replicates for each of the two considered movement typologies in order to assess the accuracy, precision and reliability of the models. The debris slide and debris flow susceptibility models produced high performances, with the former being the better fitted. The evaluation of the probability estimates around the mean value for each mapped pixel shows an inverted relation, with the most robust models corresponding to the debris flows. With respect to the role of each predictor within the modelling phase, debris flows appeared to be primarily controlled by topographic attributes, whilst the debris slides were better explained by remotely sensed indices, particularly the occurrence of previous wildfires across the slope. The overall excellent performances of the two models suggest promising perspectives for
Matthew W Breece
Full Text Available Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom-and-bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high-quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.
Yu Hua Tong
Full Text Available In this study, we develop a microdensitometry method using full width at half maximum (FWHM) analysis of the retinal vascular structure in a spectral-domain optical coherence tomography (SD-OCT) image and present the application of this method in the morphometry of arteriolar changes during hypertension. Two raters measured retinal vessel outer and lumen diameters in SD-OCT images using manual and FWHM methods. Inter-rater reproducibility was measured using coefficients of variation (CV), the intraclass correlation coefficient and a Bland-Altman plot. OCT images from forty-three eyes of 43 hypertensive patients and 40 eyes of 40 controls were analyzed using the FWHM approach; wall thickness, wall cross-sectional area (WCSA) and wall-to-lumen ratio (WLR) were subsequently calculated. The mean difference in inter-rater agreement ranged from -2.713 to 2.658 μm for the manual method, and from -0.008 to 0.131 μm for the FWHM approach. The inter-rater CVs were significantly smaller for the FWHM approach than for the manual method (P < 0.05). Compared with controls, the wall thickness, WCSA and WLR of retinal arterioles were increased in the hypertensive patients, particularly in diabetic hypertensive patients. The microdensitometry method using an FWHM algorithm markedly improved the inter-rater reproducibility of arteriolar morphometric analysis, and SD-OCT may represent a promising noninvasive method for in vivo arteriolar morphometry.
Boroomand, A.; Shafiee, M. J.; Wong, A.; Bizheva, K.
2015-03-01
The lateral resolution of a Spectral Domain Optical Coherence Tomography (SD-OCT) image is limited by the focusing properties of the OCT imaging probe optics, the wavelength range at which the SD-OCT system operates, spherical and chromatic aberrations induced by the imaging optics, the optical properties of the imaged object, and, in the special case of in-vivo retinal imaging, by the optics of the eye. This limitation often results in challenges with resolving fine details and structures of the imaged sample outside of the Depth-Of-Focus (DOF) range. We propose a novel technique for generating Laterally Resolved OCT (LR-OCT) images using OCT measurements acquired with intentional imbrications. The proposed, novel method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model to compensate for artifacts and noise when reconstructing an LR-OCT image from imbricated OCT measurements. The proposed lateral resolution enhancement method was tested on synthetic OCT measurements as well as on a human cornea SD-OCT image to evaluate its usefulness in lateral resolution enhancement. Experimental results show that applying this method to OCT images noticeably improves the sharpness of morphological features in the lateral direction, demonstrating better delineation of fine dot-shaped details in the synthetic OCT test, as well as better delineation of the keratocyte cells in the human corneal OCT test image.
On Joint and Conditional Entropies
D. V. Gokhale
1999-05-01
Full Text Available Abstract: It is shown that if the conditional densities of a bivariate random variable have maximum entropies, subject to certain constraints, then the bivariate density also maximizes entropy, subject to appropriate constraints. Some examples are discussed.
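The joint/conditional entropy bookkeeping behind such results rests on the chain rule H(X,Y) = H(X) + H(Y|X), which is easy to verify numerically; a minimal Python sketch (the joint distribution is invented for illustration):

```python
import math

def H(probs):
    """Shannon entropy (bits) of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented joint distribution p(x, y) of a bivariate discrete variable.
joint = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.30, (1, 1): 0.20}

# Marginal p(x), then conditional entropy H(Y|X) = sum_x p(x) H(Y|X=x).
px = {}
for (x, _), p in joint.items():
    px[x] = px.get(x, 0.0) + p
H_cond = sum(px[x] * H([joint[(x, y)] / px[x] for y in (0, 1)]) for x in (0, 1))

H_joint = H(list(joint.values()))
# Chain rule: the joint entropy decomposes as H(X) + H(Y|X).
assert abs(H_joint - (H(list(px.values())) + H_cond)) < 1e-12
```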
Marsh, T R
2000-01-01
I review the method of Doppler tomography which translates binary-star line profiles taken at a series of orbital phases into a distribution of emission over the binary. I begin with a discussion of the basic principles behind Doppler tomography, including a comparison of the relative merits of maximum entropy regularisation versus filtered back-projection for implementing the inversion. Following this I discuss the issue of noise in Doppler images and possible methods for coping with it. Then I move on to look at the results of Doppler Tomography applied to cataclysmic variable stars. Outstanding successes to date are the discovery of two-arm spiral shocks in cataclysmic variable accretion discs and the probing of the stream/magnetospheric interaction in magnetic cataclysmic variable stars. Doppler tomography has also told us much about the stream/disc interaction in non-magnetic systems and the irradiation of the secondary star in all systems. The latter indirectly reveals such effects as shadowing by the a...
The principle of maximum entropy and its applications in ecology
邢丁亮; 郝占庆
2011-01-01
The principle of maximum entropy (MaxEnt) was originally studied in information theory and statistical mechanics, and has been widely employed in a variety of contexts. MaxEnt provides a statistical inference of unknown distributions on the basis of partial knowledge, without assuming anything about the unknown information. Recently there has been growing interest in the use of MaxEnt in ecology. In this review, to provide an intuitive understanding of the principle, we first employ an example of dice throwing to demonstrate its underlying basis, and list the steps one should take when applying it. We then focus on its applications in ecology and biodiversity, including predicting species relative abundances using community aggregated traits (CATs), the MaxEnt niche model of biogeography based on environmental factors, the study of macroecological patterns such as the species abundance distribution (SAD) and species-area relationship (SAR), inference of species interactions using species abundance matrices or mere occurrence (presence/absence) data, and predicting food web degree distributions. We also highlight the main debates about these applications and some recent tests of these models' strengths and limitations. We conclude with a discussion of caveats ecologists should keep in mind when using MaxEnt.
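The dice example mentioned above can be made concrete: maximize entropy over the six faces subject to an observed mean, which gives the exponential-family solution p_i ∝ exp(λi). A minimal Python sketch (the bisection bracket is an arbitrary choice, not taken from the review):

```python
import math

def maxent_die(target_mean, faces=tuple(range(1, 7)), tol=1e-12):
    """Maximum entropy distribution over die faces with a fixed mean.
    The solution has the form p_i ∝ exp(lam * i); the multiplier lam is
    found by bisection, since the model mean is increasing in lam."""
    def model_mean(lam):
        w = [math.exp(lam * f) for f in faces]
        z = sum(w)
        return sum(f * wi for f, wi in zip(faces, w)) / z

    lo, hi = -10.0, 10.0          # arbitrary bracket, wide enough for lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if model_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    w = [math.exp(lo * f) for f in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)               # Jaynes' classic "observed mean 4.5" die
assert abs(sum(f * pi for f, pi in zip(range(1, 7), p)) - 4.5) < 1e-9
```

With a mean above 3.5 the multiplier is positive, so the higher faces receive monotonically more probability.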
Ritter, André; Durst, Jürgen; Gödel, Karl; Haas, Wilhelm; Michel, Thilo; Rieger, Jens; Weber, Thomas; Wucherer, Lukas; Anton, Gisela
2013-01-01
Phase-wrapping artifacts, statistical image noise and the need for a minimum number of phase steps per projection limit the practicability of x-ray grating based phase-contrast tomography when using filtered back projection reconstruction. For conventional x-ray computed tomography, the use of statistical iterative reconstruction algorithms has successfully reduced artifacts and statistical issues. In this work, an iterative reconstruction method for grating based phase-contrast tomography is presented. The method avoids the intermediate retrieval of absorption, differential phase and dark field projections. It directly reconstructs tomographic cross sections from phase stepping projections by the use of a forward projecting imaging model and an appropriate likelihood function. The likelihood function is then maximized with an iterative algorithm. The presented method is tested with tomographic data obtained through a wave field simulation of grating based phase-contrast tomography. The reconstruction result...
Säwén, Elin; Massad, Tariq; Landersjö, Clas; Damberg, Peter; Widmalm, Göran
2010-08-21
The conformational space available to the flexible molecule α-D-Manp-(1-->2)-α-D-Manp-OMe, a model for the α-(1-->2)-linked mannose disaccharide in N- or O-linked glycoproteins, is determined using experimental data and molecular simulation combined with a maximum entropy approach that leads to a converged population distribution utilizing different input information. A database survey of the Protein Data Bank, in which structures containing the constituent disaccharide were retrieved, resulted in an ensemble of >200 structures. Subsequent filtering removed erroneous structures and gave the database (DB) ensemble, comprising three classes of mannose-containing compounds, viz., N- and O-linked structures and ligands to proteins. A molecular dynamics (MD) simulation of the disaccharide revealed a two-state equilibrium with a major and a minor conformational state, i.e., the MD ensemble. These two conformational ensembles of the disaccharide were compared to spectroscopic data measured for the molecule in water solution. However, neither of the two populations was compatible with experimental data from optical rotation, NMR (1)H,(1)H cross-relaxation rates, or homo- and heteronuclear (3)J couplings. The conformational distributions were subsequently used as background information to generate priors that were used in a maximum entropy analysis. The resulting posteriors, i.e., the population distributions after the application of the maximum entropy analysis, still showed notable deviations that were not anticipated based on the prior information. Therefore, reparameterization of the homo- and heteronuclear Karplus relationships for the glycosidic torsion angles Φ and Ψ was carried out, in which the importance of electronegative substituents on the coupling pathway was deemed essential, resulting in four derived equations, two (3)J(COCC) and two (3)J(COCH), differing for the Φ and Ψ torsions, respectively. These Karplus relationships are denoted
Research of Text Categorization Based on an Improved Maximum Entropy Algorithm
李学相
2012-01-01
This paper discusses problems of text categorization accuracy. In traditional text classification algorithms, different feature words have the same effect on the classification result, so classification accuracy is low and the time complexity of the algorithm increases. Because the maximum entropy model can integrate various relevant or irrelevant pieces of probabilistic knowledge from observations, it can achieve good results on many problems. To address the above problems, this paper proposes an improved maximum entropy text classification method that combines the advantages of c-means clustering and the maximum entropy algorithm. The algorithm first takes the Shannon entropy as the objective function of the maximum entropy model, simplifying the classifier's expression form, and then uses the c-means algorithm to cluster the optimal features. Simulation results show that, compared with traditional text classification, the proposed method can quickly obtain the optimal subset of classification features, greatly shorten running time, and improve text classification accuracy.
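As a hedged illustration of the general idea (not the paper's algorithm, which couples c-means clustering with the maximum entropy model), a maximum entropy classifier over bag-of-words features is equivalent to logistic regression trained by likelihood maximization; a self-contained toy sketch with invented documents and learning rate:

```python
import math

# Toy maximum entropy (logistic regression) text classifier trained by
# batch gradient ascent on the log-likelihood. Documents, labels and the
# learning rate are all invented for illustration.
docs = [("cheap pills offer", 1), ("meeting agenda notes", 0),
        ("offer cheap offer", 1), ("project meeting notes", 0)]
vocab = sorted({w for text, _ in docs for w in text.split()})

def features(text):
    """Bag-of-words count vector over the toy vocabulary."""
    return [text.split().count(w) for w in vocab]

def prob(wts, x):
    """P(y = 1 | x) under the maxent model."""
    z = sum(w * xi for w, xi in zip(wts, x))
    return 1.0 / (1.0 + math.exp(-z))

wts = [0.0] * len(vocab)
for _ in range(2000):                  # batch gradient ascent
    grad = [0.0] * len(vocab)
    for text, y in docs:
        x = features(text)
        err = y - prob(wts, x)
        grad = [g + err * xi for g, xi in zip(grad, x)]
    wts = [w + 0.3 * g for w, g in zip(wts, grad)]

assert prob(wts, features("cheap offer")) > 0.9    # spam-like words
assert prob(wts, features("meeting notes")) < 0.1  # work-like words
```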
Advisor-advisee relationship identification based on a maximum entropy model
李勇军; 刘尊; 于会
2013-01-01
Research collaboration networks have become an essential part of our academic activities: within them we maintain and develop collaborations with other researchers and share research results. Different relationships have essentially different influences on collaboration, and the advisor-advisee relationship plays a particularly important role, so identifying it accurately matters for promoting academic exchange and collaboration and for tasks such as reviewer conflict-of-interest avoidance. In this paper, we conduct a systematic investigation of the problem of identifying social relationship types from publication networks and propose an easily computed and effective solution. Based on the common knowledge that a graduate student usually co-authors papers with his or her advisor, and not vice versa, our study starts with an analysis of the publication network and retrieves features that can represent the advisor-advisee relationship. Using these features, we propose an advisor-advisee relationship identification algorithm based on the maximum entropy model. Validation on DBLP publication data from 1990-2011 shows that: 1) the accuracy of relationship type identification exceeds 95%; and 2) the mean error of the estimated end time of an advisor-advisee relationship is 1.39 years. The method avoids the constraint that features be mutually independent, its accuracy is better than that of comparable identification algorithms, and the modelling approach is also instructive for identifying other relationship types in social networks.
Boyaval, S.
2000-06-15
This PhD thesis presents a study of a series of high-pressure swirl atomizers dedicated to Gasoline Direct Injection (GDI). Measurements are performed in stationary and pulsed working conditions. A central aspect of this thesis is the development of an original experimental set-up to correct the multiple light scattering that biases drop-size distribution measurements obtained with a laser diffraction technique (Malvern 2600D). This technique allows one to study drop-size characteristics near the injector tip. Correction factors on the drop-size characteristics and on the diffracted intensities are defined from the developed procedure. Another contribution is the application of the Maximum Entropy Formalism (MEF) to calculate drop-size distributions. Comparisons between experimental distributions corrected with the correction factors and the calculated distributions show good agreement. This work points out that the mean diameter D{sub 43}, which is also the mean of the volume drop-size distribution, and the relative volume span factor {delta}{sub v} are important characteristics of volume drop-size distributions. The end of the thesis proposes to determine local drop-size characteristics from a new development of a deconvolution technique for line-of-sight scattering measurements. The first results show reliable behaviour of the radial evolution of local characteristics. In the GDI application, we note that the critical point is the opening stage of the injection. This study clearly shows the effects of injection pressure and nozzle internal geometry on the working characteristics of these injectors, in particular the influence of the pre-spray. This work points out important behaviours that improvements of the GDI principle ought to take into account. (author)
2-D Maximum Entropy Method of Image Segmentation Based on a Genetic Algorithm
欧萍; 贺电
2011-01-01
In maximum entropy image segmentation, the selection of the optimal threshold for extracting the target features requested by the user is the key technique. Traditional 2-D maximum entropy algorithms find the optimal threshold by exhaustive search, which is time-consuming, inefficient, and prone to false segmentation. To improve the accuracy and efficiency of image segmentation, this paper proposes a genetic algorithm for 2-D maximum entropy image segmentation. The method first performs a gray-level transform of the original image and computes its 2-D histogram. According to the 2-D histogram, appropriate gray values are selected to initialize the population of the genetic algorithm, with each individual represented as a two-dimensional vector. Through selection, crossover and mutation operators, the optimal thresholds are searched for, and the best one is taken as the threshold for image segmentation. Experimental results show that, compared with the traditional 2-D maximum entropy segmentation algorithm, this method improves computation speed, segmentation efficiency, and segmentation accuracy.
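For orientation, the 1-D ancestor of this method is Kapur's maximum entropy threshold, which the 2-D variant extends with a neighborhood-average axis and a GA search instead of the exhaustive scan; a minimal 1-D sketch with a made-up bimodal histogram:

```python
import math

def kapur_threshold(hist):
    """1-D maximum entropy (Kapur) threshold: choose t maximizing the sum
    of the entropies of the below- and above-threshold gray-level
    distributions. (The paper's 2-D variant adds a neighborhood-average
    axis and replaces this exhaustive search with a genetic algorithm.)"""
    total = sum(hist)
    p = [h / total for h in hist]

    def ent(lo, hi):
        """Entropy of the renormalized slice p[lo:hi]."""
        w = sum(p[lo:hi])
        if w == 0:
            return 0.0
        return -sum(q / w * math.log(q / w) for q in p[lo:hi] if q > 0)

    return max(range(1, len(p)), key=lambda t: ent(0, t) + ent(t, len(p)))

# Made-up bimodal histogram over 8 gray levels (dark peak at 1, bright at 6).
t = kapur_threshold([10, 40, 12, 2, 2, 12, 40, 10])
assert 3 <= t <= 5   # the threshold lands in the valley between the modes
```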
Massad, Tariq; Jarvet, Jueri [Stockholm University, Department of Biochemistry and Biophysics (Sweden); Tanner, Risto [National Institute of Chemical Physics and Biophysics (Estonia); Tomson, Katrin; Smirnova, Julia; Palumaa, Peep [Tallinn Technical University, Inst. of Gene Technology (Estonia); Sugai, Mariko; Kohno, Toshiyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS) (Japan); Vanatalu, Kalju [Tallinn Technical University, Inst. of Gene Technology (Estonia); Damberg, Peter [Stockholm University, Department of Biochemistry and Biophysics (Sweden)], E-mail: peter.damberg@dbb.su.se
2007-06-15
In this paper, we present a new method for structure determination of flexible 'random-coil' peptides. A numerical method is described in which the experimentally measured ³J(HN,Hα) and ³J(Hα,N(i+1)) couplings, which depend on the φ and ψ dihedral angles, are analyzed jointly with the information from a coil library through a maximum entropy approach. The coil library is the distribution of dihedral angles found outside the elements of secondary structure in high-resolution protein structures. The method results in residue-specific joint φ,ψ-distribution functions, which are in agreement with the experimental J-couplings and minimally committal to the information in the coil library. The 22-residue human peptide hormone motilin, uniformly ¹⁵N-labeled, was studied. The ³J(Hα,N(i+1)) couplings were measured from the E.COSY pattern in the sequential NOESY cross-peaks. By employing homodecoupling and an in-phase/anti-phase filter, sharp Hα resonances (about 5 Hz) were obtained, enabling accurate determination of the couplings with minimal spectral overlap. Clear trends in the resulting φ,ψ-distribution functions along the sequence are observed, with a nascent helical structure in the central part of the peptide and more extended conformations of the receptor-binding N-terminus as the most prominent characteristics. From the φ,ψ-distribution functions, the contribution from each residue to the thermodynamic entropy, i.e., the segmental entropies, are calculated and compared to segmental entropies estimated from ¹⁵N-relaxation data. Remarkable agreement between the relaxation- and J-coupling-based methods is found. Residues belonging to the nascent helix and the C-terminus show segmental entropies of approximately -20 J K⁻¹ mol⁻¹ and -12 J K⁻¹ mol⁻¹, respectively, in both series. The agreement
Massad, Tariq; Jarvet, Jüri; Tanner, Risto; Tomson, Katrin; Smirnova, Julia; Palumaa, Peep; Sugai, Mariko; Kohno, Toshiyuki; Vanatalu, Kalju; Damberg, Peter
2007-06-01
In this paper, we present a new method for structure determination of flexible "random-coil" peptides. A numerical method is described, where the experimentally measured 3J(HN,H(alpha)) and 3J(H(alpha),N(i+1)) couplings, which depend on the phi and psi dihedral angles, are analyzed jointly with the information from a coil-library through a maximum entropy approach. The coil-library is the distribution of dihedral angles found outside the elements of the secondary structure in the high-resolution protein structures. The method results in residue specific joint phi,psi-distribution functions, which are in agreement with the experimental J-couplings and minimally committal to the information in the coil-library. The 22-residue human peptide hormone motilin, uniformly 15N-labeled, was studied. The 3J(H(alpha)-N(i+1)) were measured from the E.COSY pattern in the sequential NOESY cross-peaks. By employing homodecoupling and an in-phase/anti-phase filter, sharp H(alpha)-resonances (about 5 Hz) were obtained enabling accurate determination of the coupling with minimal spectral overlap. Clear trends in the resulting phi,psi-distribution functions along the sequence are observed, with a nascent helical structure in the central part of the peptide and more extended conformations of the receptor binding N-terminus as the most prominent characteristics. From the phi,psi-distribution functions, the contribution from each residue to the thermodynamic entropy, i.e., the segmental entropies, are calculated and compared to segmental entropies estimated from 15N-relaxation data. Remarkable agreement between the relaxation and J-couplings based methods is found. Residues belonging to the nascent helix and the C-terminus show segmental entropies, of approximately -20 J K(-1) mol(-1) and -12 J K(-1) mol(-1), respectively, in both series. The agreement between the two estimates of the segmental entropy, the agreement with the observed J-couplings, the agreement with the CD experiments
Bekenstein Entropy is String Entropy
Halyo, Edi
2009-01-01
We argue that Bekenstein entropy can be interpreted as the entropy of an effective string with a rescaled tension. Using the AdS/CFT correspondence we show that the Bekenstein entropy on the boundary CFT is given by the entropy of a string at the stretched horizon of the AdS black hole in the bulk. The gravitationally redshifted tension and energy of the string match those required to reproduce Bekenstein entropy.
Applications of Entropy in Finance: A Review
Guanqun Tong
2013-11-01
Full Text Available Although the concept of entropy originated in thermodynamics, its principles, especially the principles of maximum entropy and minimum cross-entropy, have been extensively applied in finance. In this paper, we review the concept and principles of entropy, as well as their applications in the field of finance, especially in portfolio selection and asset pricing. Furthermore, we review the effects of these applications and compare them with other traditional and newer methods.
何洋; 纪昌明; 田开华; 张验科; 李传刚
2016-01-01
To better study the distribution law of runoff forecast errors, the maximum entropy principle is applied and a maximum entropy model for the distribution of runoff forecast errors is established in this paper. Taking the runoff forecast series of Guandi Reservoir as an example, we calculate the probability density function and distribution curve of the runoff forecast error for different lead times, and compare these curves with the theoretical normal distribution curves and the sample histograms. The results show that the error distribution obtained by the maximum entropy method better describes the distribution characteristics of runoff forecast errors. Considering the intra-annual wet-dry variation of basin runoff, the runoff series is divided into dry, flood and transition seasons; the error distribution of each period is analyzed separately, and the confidence levels of the forecast error at different confidence intervals are given, thus better characterizing the distribution of runoff forecast errors and providing a new way to improve runoff forecast accuracy.
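The connection exploited here is that, with only mean and variance constrained, the maximum entropy density is the normal distribution, so deviations of an empirical error distribution from normality signal extra structure worth modelling. A quick numerical check of this entropy ordering (grid and parameters are arbitrary):

```python
import math

# With mean and variance fixed, the maximum entropy density is Gaussian:
# h = 0.5 * ln(2*pi*e*sigma^2). Compare against a Laplace density of the
# same variance by numerical integration.
sigma = 1.0

def diff_entropy(pdf, lo=-20.0, hi=20.0, n=200001):
    """Differential entropy -integral of p ln p via a uniform-grid sum."""
    dx = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        p = pdf(lo + i * dx)
        if p > 0:
            s -= p * math.log(p) * dx
    return s

def gauss(x):
    return math.exp(-x * x / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

b = sigma / math.sqrt(2)          # Laplace scale giving variance sigma^2
def laplace(x):
    return math.exp(-abs(x) / b) / (2 * b)

h_gauss = diff_entropy(gauss)
assert abs(h_gauss - 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)) < 1e-6
assert h_gauss > diff_entropy(laplace)   # Gaussian wins at equal variance
```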
刘忠宝; 王士同
2011-01-01
In order to circumvent the deficiencies of the Support Vector Machine (SVM) and its improved algorithms, this paper presents a Maximum-margin Learning Machine based on the Entropy concept and Kernel density estimation (MLMEK). In MLMEK, the data distribution of the samples is represented by kernel density estimation and classification uncertainty is represented by entropy. MLMEK takes both the boundary data between classes and the inner data of each class seriously, so it performs better than traditional SVM. MLMEK can handle both two-class and one-class pattern classification. Experimental results on UCI data sets verify that the proposed algorithm is effective and competitive.
Generalized Entropy Concentration for Counts
Oikonomou, Kostas N
2016-01-01
We consider the phenomenon of entropy concentration under linear constraints in a discrete setting, using the "balls and bins" paradigm, but without the assumption that the number of balls allocated to the bins is known. Therefore, instead of frequency vectors and ordinary entropy, we have count vectors with unknown sum and a certain generalized entropy. We show that if the constraints bound the allowable sums, this suffices for concentration to occur even in this setting. The concentration can be either in terms of deviation from the maximum generalized entropy value, or in terms of the norm of the difference from the maximum generalized entropy vector. Without any asymptotic considerations, we quantify the concentration in terms of various parameters, notably a tolerance on the constraints which ensures that they are always satisfied by an integral vector. Generalized entropy maximization is not only compatible with ordinary MaxEnt, but can also be considered an extension of it, as it allows us to address...
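The concentration phenomenon is easy to observe by exact enumeration in the classical fixed-sum setting (a small illustrative case, not the paper's generalized-entropy machinery):

```python
import math

def H(counts):
    """Entropy (nats) of the frequency vector of a count vector."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

# Drop n labeled balls into 3 bins and ask: what fraction of the 3**n
# allocations has a frequency vector whose entropy is within eps of the
# maximum log(3)? (n and eps are arbitrary illustrative choices.)
n, bins, eps = 60, 3, 0.1
near = 0
for a in range(n + 1):
    for b in range(n - a + 1):
        c = n - a - b
        micro = math.comb(n, a) * math.comb(n - a, b)  # multinomial count
        if H((a, b, c)) > math.log(bins) - eps:
            near += micro
frac = near / bins ** n
assert 0.85 < frac < 1.0   # most allocations concentrate near max entropy
```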
Caldarelli, Stefano; Catalano, Donata; Di Bari, Lorenzo; Lumetti, Marco; Ciofalo, Maurizio; Alberto Veracini, Carlo
1994-07-01
The dipolar couplings observed by NMR spectroscopy of solutes in nematic solvents (LX-NMR) are used to build up the maximum entropy (ME) probability distribution function of the variables describing the orientational and internal motion of the molecule. The ME conformational distributions of 2,2'- and 3,3'-dithiophene and 2,2':5',2″-terthiophene (α-terthienyl) thus obtained are compared with the results of previous studies. The 2,2'- and 3,3'-dithiophene molecules exhibit equilibria among cisoid and transoid forms; the probability maxima correspond to planar and twisted conformers for 2,2'- and 3,3'-dithiophene, respectively. 2,2':5',2″-Terthiophene has two internal degrees of freedom; the ME approach indicates that the trans,trans and cis,trans planar conformations are the most probable. The correlation between the two intramolecular rotations is also discussed.
Entropy of international trades
Oh, Chang-Young; Lee, D.-S.
2017-05-01
The organization of international trade is highly complex under the collective efforts towards economic profit of participating countries, given inhomogeneous resources for production. Considering the trade flux as the probability of exporting a product from one country to another, we evaluate the entropy of world trade in the period 1950-2000. The trade entropy has increased with time, and we show that this is mainly due to the extension of trade partnership. For a given number of trade partners, the mean trade entropy is about 60% of the maximum possible entropy, independent of time, which can be regarded as a characteristic of the heterogeneity of the trade fluxes and is shown to derive from the scaling and functional behaviors of the universal trade-flux distribution. The correlation and time evolution of the individual countries' gross domestic products and numbers of trade partners show that most countries achieved their economic growth partly by extending their trade relationships.
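Treating normalized trade fluxes as probabilities, the entropy computation itself is a one-liner; a toy sketch with invented flux values:

```python
import math

# Invented "trade flux" values: flux[(i, j)] is the export flow i -> j.
flux = {("A", "B"): 40.0, ("A", "C"): 10.0, ("B", "C"): 30.0,
        ("C", "A"): 15.0, ("B", "A"): 5.0}
total = sum(flux.values())
p = [v / total for v in flux.values()]           # fluxes as probabilities

H = -sum(pi * math.log(pi) for pi in p)          # trade entropy
H_max = math.log(len(flux))                      # attained by even fluxes
assert 0.0 < H < H_max   # heterogeneous fluxes stay below the maximum
```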
Checa-Garcia, Ramiro
2013-01-01
The main challenges of measuring precipitation are related to the spatio-temporal variability of the drop-size distribution, to the uncertainties that condition the modeling of that distribution, and to the instrumental errors present in in situ estimations. This PhD dissertation proposes advances on all these questions. The relevance of the spatial variability of the drop-size distribution for remote sensing measurements and hydro-meteorological field studies is asserted by analyzing the measurements of a set of disdrometers deployed on a network of 5 square kilometers. This study comprises the spatial variability of integral rainfall parameters, the Z-R relationships, and the variations within the one-moment scaling method. The modeling of the drop-size distribution is analyzed by applying the MaxEnt method and comparing it with the method of moments and maximum likelihood. The instrumental errors are analyzed with a comprehensive comparison of the sampling and binning uncertainties that affect actual device...
R Saravanan
2006-06-01
A study of the electronic structure of the three sulphides, SrS, BaS and PuS, has been carried out in this work, using the powder X-ray intensity data from the JCPDS powder diffraction data base. The statistical approach MEM (maximum entropy method) is used for the analysis of the data for the electron density distribution in these materials, and an attempt has been made to understand the bonding between the metal atom and the sulphur atom. The mid-bond electron density is found to be maximum for PuS among these three sulphides, being 0.584 e/Å3 at 2.397 Å. SrS is found to have the lowest electron density at the mid-bond (0.003 e/Å3) at 2.118 Å from the origin, leaving it more ionic than the other two sulphides studied in this work. The two-dimensional electron density maps on the (1 0 0) and (1 1 0) planes and the one-dimensional profiles along the bonding direction [1 1 1] are used for these analyses. The overall and individual Debye-Waller factors of atoms in these systems have also been studied and analyzed. The refinements of the observed X-ray data were carried out using standard software and also a routine written by the author.
Entropy production by simple electrical circuits
Miranda, E N
2012-01-01
The entropy production of simple electrical circuits (R, RC, RL) is analyzed. It turns out that the entropy production is a minimum, in agreement with a well-known theorem due to Prigogine. This contradicts a recent result by Zupanovic, Juretic and Botric (Physical Review E 70, 056198), who claimed that the entropy production in simple electrical circuits is a maximum.
Inversion methods for fast-ion velocity-space tomography in fusion plasmas
Jacobsen, Asger Schou; Stagner, L.; Salewski, Mirko
2016-01-01
Velocity-space tomography has been used to infer 2D fast-ion velocity distribution functions. Here we compare the performance of five different tomographic inversion methods: truncated singular value decomposition, maximum entropy, minimum Fisher information and zeroth and first-order Tikhonov re...
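As a minimal sketch of the inversion family compared above, zeroth- and first-order Tikhonov regularization can be posed as a single augmented least-squares problem. The names W (transfer matrix), b (measurements) and lam (regularization weight) below are illustrative, not the paper's notation:

```python
import numpy as np

def tikhonov_inversion(W, b, lam, order=0):
    """Solve min ||W x - b||^2 + lam * ||L x||^2 for x.

    order=0 uses L = I (zeroth-order Tikhonov); order=1 penalizes
    first differences, favouring smooth reconstructions.
    """
    n = W.shape[1]
    if order == 0:
        L = np.eye(n)
    else:
        L = np.diff(np.eye(n), axis=0)  # first-difference operator
    # stacked least-squares form of the regularized problem
    A = np.vstack([W, np.sqrt(lam) * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x

# toy ill-posed problem: Gaussian blur of a spiky "distribution function"
n = 50
x_true = np.zeros(n)
x_true[20:25] = 1.0
idx = np.arange(n)
W = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
b = W @ x_true + 1e-3 * np.random.default_rng(0).normal(size=n)
x_hat = tikhonov_inversion(W, b, lam=1e-2, order=1)
```

The truncated-SVD and maximum-entropy inversions mentioned in the abstract replace the quadratic penalty with, respectively, a spectral cutoff and an entropy functional.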
Entropy: From Thermodynamics to Hydrology
Demetris Koutsoyiannis
2014-02-01
Full Text Available Some known results from statistical thermophysics as well as from hydrology are revisited from a different perspective, trying: (a) to unify the notion of entropy in thermodynamic and statistical/stochastic approaches to complex hydrological systems; and (b) to show the power of entropy and the principle of maximum entropy in inference, both deductive and inductive. The capability for deductive reasoning is illustrated by deriving the law of phase change of water (Clausius-Clapeyron) from scratch by maximizing entropy in a formal probabilistic frame. However, such deductive reasoning cannot work in more complex hydrological systems with diverse elements, yet the entropy maximization framework can help in inductive inference, necessarily based on data. Several examples of this type are provided in an attempt to link statistical thermophysics with hydrology with a unifying view of entropy.
Iskender, Ilker; Kadioglu, Salih Zeki; Kosar, Altug; Atasalihi, Ali; Kir, Altan
2011-06-01
The maximum standardized uptake value (SUV(max)) varies among positron emission tomography-integrated computed tomography (PET/CT) centers in the staging of non-small cell lung cancer. We evaluated the ratio of the optimum SUV(max) cut-off for the lymph nodes to the median SUV(max) of the primary tumor (ratioSUV(max)) to determine SUV(max) variations between PET/CT scanners. The previously described PET predictive ratio (PPR) was also evaluated. PET/CT and mediastinoscopy and/or thoracotomy were performed on 337 consecutive patients between September 2005 and March 2009. Thirty-six patients were excluded from the study. The pathological results were correlated with the PET/CT findings. Histopathological examination was performed on 1136 N2 lymph nodes from 10 different PET/CT centers. The majority of patients (group A: 240) used the same PET/CT scanner at four different centers. The other patients were categorized as group B. The ratioSUV(max) for groups A and B was 0.18 and 0.22, respectively. The same ratio for centers 1, 2, 3 and 4 was 0.2, 0.21, 0.21, and 0.23, respectively. The optimal cut-off value of the PPR to predict mediastinal lymph node malignancy was 0.49 (likelihood ratio +2.02; sensitivity 70%, specificity 65%). We conclude that the ratioSUV(max) was similar for different scanners. Thus, it is a valuable cut-off for comparing centers.
陈海涛; 黄鑫; 邱林; 王文川
2013-01-01
An evaluation index of drought degree that comprehensively considers the quantitative relationship between natural factors and the crop growing period is presented in this paper. The distribution density function of drought degree is established based on the maximum entropy principle, which avoids the arbitrariness of previously constructed probability distributions and achieves the purpose of quantitative evaluation of regional agricultural drought. First, the quantitative evaluation index of drought degree was established according to the yield reduction rate under deficit irrigation conditions. Second, a long series of rainfall data was generated by the Monte Carlo method and the drought degree index was calculated for each year. Finally, the probability density function of the agricultural drought degree distribution was constructed using the maximum entropy principle. As an example, results are presented for the distribution of agricultural drought degree in the Qucun irrigation area of Puyang City, Henan Province. The results show that the model is a good evaluation method, with a clear concept, a simple and practical approach, and reasonable outcomes.
Söderberg, Karin; Kubota, Yoshiki; Muroyama, Norihiro; Grüner, Daniel; Yoshimura, Arisa; Terasaki, Osamu
2008-08-01
Using short-wavelength X-rays from synchrotron radiation (SPring-8), high-resolution powder diffraction patterns were collected. In order to study both the structural relationship and the mechanism of stability in the CaAl2-xZnx system, among the Laves phases (MgCu2 and MgNi2 type) and KHg2-type structures, the charge density distribution of CaAl2-xZnx as a function of x was obtained from the diffraction data by Rietveld analysis combined with the maximum entropy method (MEM). In the MEM charge density maps, overlapping electron densities were clearly observed, especially in the Kagomé nets of the Laves phases. In order to clarify the charge redistribution in the system, the deformation charge densities relative to the densities formed by the constituent free atoms are discussed. In the ternary MgNi2-type phase, partial ordering of Al and Zn atoms is observed, a finding that is supported by ab-initio total energy calculations.
任永泰; 李丽
2011-01-01
利用基于极大熵准则赋权和基于实数加速遗传算法的投影寻踪方法相结合的组合附权法确定了各预警指标的权重；采用层次分析法计算水资源可持续利用复合系统中各子系统所占权重；利用综合评价模型计算出哈尔滨市水资源可持续发展指数；最终得到哈尔滨市水资源可持续利用预警结果.%The weights of each warning index are determined by combination enables law which is based on the maximum entropy criterion empowerment and projection pursuit method of real accelerating genetic algorithm; Using analytic hierarchy process to calculate the weights of each subsystem in composite system of water resources sustainable utilization; Sustainable development index of Harbin water resources is calculated by using comprehensive evaluation model; Warning results of Harbin water resources sustainable utilization are got eventually.
Karan, Belgin; Pourbagher, Aysin; Torun, Nese
2016-06-01
To evaluate the correlations between the apparent diffusion coefficient (ADC) value and the standardized uptake value (SUV) with prognostic factors in breast cancer. Seventy women with invasive breast cancer (56 cases of invasive ductal carcinoma, four of mixed ductal and lobular invasive carcinoma, three of lobular invasive carcinoma, two of micropapillary carcinoma, and one each of mixed ductal and mucinous carcinoma, mucinous carcinoma, medullary carcinoma, metaplastic carcinoma, and tubular carcinoma) were included in this study. All patients underwent presurgical breast magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) at 1.5T and whole-body 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT). For all invasive breast cancers and invasive ductal carcinomas, we assessed the relationships among ADC, SUV, and pathological prognostic factors. Both the median ADC value and maximum SUV (SUVmax) were significantly associated with vascular invasion (P = 0.008 and P = 0.026, respectively). SUVmax was also significantly correlated with tumor size (P = 0.001), histological grade (P = 0.001), lymph node status (P = 0.0015), estrogen receptor status (P = 0.010), and human epidermal growth factor receptor 2 status (P = 0.020), whereas ADC values were not. The correlation between the ADC and SUVmax was not significant (P = 0.356; R = -0.112). Mucinous carcinoma showed high ADC and relatively low SUVmax. Medullary carcinoma showed low ADC and high SUVmax. When we evaluated the relationships among ADC, SUVmax, and prognostic factors in the 56 invasive ductal carcinomas, our statistical results were not significantly changed, except that SUVmax was also significantly associated with progesterone receptor status (P = 0.034), but not lymph node status. SUVmax may be valuable for predicting the prognosis of breast cancer. Both ADC and SUVmax are useful to predict vascular invasion. J. Magn. Reson. Imaging 2016
Miura Takeshi
2010-12-01
Full Text Available Abstract Background In this era of molecular targeting therapy, when various systemic treatments can be selected, prognostic biomarkers are required for the purpose of risk-directed therapy selection. Numerous reports on various malignancies have revealed that 18-fluoro-2-deoxy-D-glucose (18F-FDG) accumulation, as evaluated by positron emission tomography, can be used to predict the prognosis of patients. The purpose of this study was to evaluate the impact of the maximum standardized uptake value (SUVmax) from 18F-FDG positron emission tomography/computed tomography (18F-FDG PET/CT) on survival for patients with advanced renal cell carcinoma (RCC). Methods A total of 26 patients with advanced or metastatic RCC were enrolled in this study. The FDG uptake of all RCC lesions diagnosed by conventional CT was evaluated by 18F-FDG PET/CT. The impact of SUVmax on patient survival was analyzed prospectively. Results FDG uptake was detected in 230 of 243 lesions (94.7%), excluding lung or liver metastases with diameters of less than 1 cm. The SUVmax of the 26 patients ranged between 1.4 and 16.6 (mean 8.8 ± 4.0). The patients with RCC tumors showing high SUVmax demonstrated poor prognosis (P = 0.005; hazard ratio 1.326, 95% CI 1.089-1.614). Survival differed significantly between patients with SUVmax greater than or equal to the mean value of 8.8 and patients with SUVmax less than 8.8 (P = 0.0012). This is the first report to evaluate the impact of SUVmax on advanced RCC patient survival. However, the number of patients and the follow-up period were still not extensive enough to settle this important question conclusively. Conclusions The survival of patients with advanced RCC can be predicted by evaluating their SUVmax using 18F-FDG PET/CT. 18F-FDG PET/CT has potential as an "imaging biomarker" to provide helpful information for clinical decision-making.
Maximum entropy approach to fuzzy control
Ramer, Arthur; Kreinovich, Vladik YA.
1992-01-01
For the same expert knowledge, if one uses different &- and V-operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low-quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized, and it is shown that it leads to the choice of min(a + b, 1) for V and min(a, b) for &. This choice was tried on a NASA Shuttle simulator; it leads to maximally stable control.
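A minimal sketch of the pair of operations the abstract singles out: the bounded sum min(a + b, 1) for the V-operation (fuzzy "or") and min(a, b) for the &-operation (fuzzy "and"). The function names are ours:

```python
def f_or(a, b):
    """Bounded sum: the V-operation min(a + b, 1) favoured in the paper."""
    return min(a + b, 1.0)

def f_and(a, b):
    """The &-operation min(a, b)."""
    return min(a, b)

# combining two rule activation degrees in [0, 1]
print(f_or(0.6, 0.7))   # 1.0 (the bounded sum saturates)
print(f_and(0.6, 0.7))  # 0.6
```

Note that the bounded sum, unlike max(a, b), lets several weakly fired rules jointly produce a strong conclusion, which is one way the choice preserves more candidate control strategies.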
Determining the Tsallis parameter via maximum entropy
Conroy, J. M.; Miller, H. G.
2015-05-01
The nonextensive entropic measure proposed by Tsallis [C. Tsallis, J. Stat. Phys. 52, 479 (1988), 10.1007/BF01016429] introduces a parameter, q, which is not defined but rather must be determined. The value of q is typically determined from a piece of data and then fixed over the range of interest. On the other hand, from a phenomenological viewpoint, there are instances in which q cannot be treated as a constant. We present two distinct approaches for determining q, depending on the form of the equations of constraint for the particular system. In the first case the equation of constraint for the operator $\hat{O}$ can be written as $\mathrm{Tr}(F^{q}\hat{O}) = C$, where C may be an explicit function of the distribution function F. We show that in this case one can solve an equivalent maxent problem which yields q as a function of the corresponding Lagrange multiplier. As an illustration, the exact solution of the static generalized Fokker-Planck equation (GFPE) is obtained from maxent with the Tsallis entropy. As in the case where C is a constant, if q is treated as a variable within the maxent framework the entropic measure is maximized trivially for all values of q. Therefore q must be determined from existing data. In the second case an additional equation of constraint exists which cannot be brought into the above form. In this case the additional equation of constraint may be used to determine the fixed value of q.
Curtis, Tyler E; Roeder, Ryan K
2017-07-06
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
曾杰; 张永兴; 靳晓光
2011-01-01
By analyzing rock burst prediction criteria at home and abroad, the mechanical, rock-mass integrity, energy-storage and brittleness conditions required for rock burst occurrence are selected as prediction indices. The concept of relative membership degree for rock burst prediction is introduced, and the fuzzy matrix of relative membership degrees and the weights of the prediction indices are calculated. Information entropy is used to describe and compare the uncertainty in rock burst evaluation, and a weighted generalized distance is defined to characterize the differences between rock bursts. Based on the maximum entropy principle, a fuzzy optimization model for rock burst prediction is established. Several underground rock engineering cases are analyzed, and the predictions agree well with the results of other methods as well as with the actual situations. Finally, the model is applied to rock burst prediction for the Putaoshan tunnel, and the predictions are consistent with the actual rock bursts.
Astuti, Valerio; Rovelli, Carlo
2016-01-01
Building on a technical result by Brunnemann and Rideout on the spectrum of the volume operator in Loop Quantum Gravity, we show that the space of quadrivalent states (with finite-volume individual nodes) describing a region with total volume smaller than $V$ has finite dimension, bounded by $V \log V$. This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2016-02-25
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
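A rough sketch of the KECA step that the brief builds on: rank the RBF-kernel eigenpairs by their contribution to the Renyi entropy estimate, (sqrt(lambda_i) * 1^T e_i)^2, rather than by eigenvalue, then project onto the top-ranked pairs. This is plain KECA under assumed conventions; OKECA's extra ICA-style rotation and the kernel length-scale selection rules discussed in the brief are omitted:

```python
import numpy as np

def keca_features(X, sigma, k):
    """Project data onto the k kernel eigen-directions that carry the
    most Renyi entropy (KECA ordering), not the largest variance."""
    # RBF (Gaussian) kernel matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    lam, E = np.linalg.eigh(K)              # ascending eigenvalues
    # entropy contribution of each eigenpair: (sqrt(lam_i) * 1^T e_i)^2
    contrib = lam * (E.sum(axis=0) ** 2)
    idx = np.argsort(contrib)[::-1][:k]     # top-k by entropy, not variance
    return E[:, idx] * np.sqrt(np.abs(lam[idx]))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
Z = keca_features(X, sigma=1.0, k=2)
```

The design point is the sorting criterion: for data with strong cluster structure, low-eigenvalue directions can carry most of the entropy estimate, so KECA and kernel PCA select different subspaces.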
Equivalent Relation between Normalized Spatial Entropy and Fractal Dimension
Chen, Yanguang
2016-01-01
Fractal dimension is defined on the basis of entropy, including macro state entropy and information entropy. The generalized dimension of multifractals is based on Renyi entropy. However, the mathematical transform from entropy to fractal dimension is not yet clear in both theory and practice. This paper is devoted to revealing the equivalence relation between spatial entropy and fractal dimension using the box-counting method. Based on varied regular fractals, the numerical relationship between spatial entropy and fractal dimension is examined. The results show that the ratio of actual entropy (Mq) to the maximum entropy (Mmax) equals the ratio of actual dimension (Dq) to the maximum dimension (Dmax), that is, Mq/Mmax=Dq/Dmax. For real systems, the spatial entropy and fractal dimension of complex spatial systems such as cities can be converted into one another by means of the functional box-counting method. The theoretical inference is verified by observational data of urban form. A conclusion is that normalized spat...
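The box-counting method invoked above can be illustrated on a regular fractal. The sketch below estimates the capacity dimension D0 of a chaos-game Sierpinski gasket as the slope of log N(s) against log(1/s); the theoretical value is log 3 / log 2 ≈ 1.585. Box sizes and sample counts are arbitrary choices of ours:

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Estimate the capacity dimension D0 of a 2D point set by box
    counting: slope of log N(s) versus log(1/s)."""
    counts = [len(np.unique(np.floor(points / s), axis=0)) for s in sizes]
    return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]

# Sierpinski gasket via the chaos game
rng = np.random.default_rng(1)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
pts = np.empty((20000, 2))
for i in range(20000):
    p = (p + verts[rng.integers(3)]) / 2  # jump halfway to a random vertex
    pts[i] = p
D0 = box_count_dimension(pts[100:], sizes=[1/4, 1/8, 1/16, 1/32])
```

Replacing the count N(s) by the Shannon entropy of the box-occupation frequencies at each scale gives the information dimension D1, which is the route by which the paper's entropy-dimension ratios are computed.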
杜金华; 王莎
2013-01-01
The authors first introduce three typical word posterior probability (WPP) features used for translation error detection and classification, namely fixed-position WPP, sliding-window WPP and alignment-based WPP, and analyze their impact on detection performance. Each WPP feature is then combined with three linguistic features (word, POS, and syntactic features extracted by the LG parser) over a maximum entropy classifier to predict translation errors, with experimental validation and comparison on Chinese-to-English NIST datasets. Experimental results show that the influence of the different WPP features on the classification error rate (CER) is significant, and that combining WPP with linguistic features can significantly reduce the CER and improve the prediction capability of the classifier.
张磊; 李珊; 彭舰; 陈黎; 黎红友
2014-01-01
In recent years, the classification of feature-opinion pairs in Chinese product reviews has become one of the most important research fields in Web data mining. In this paper, five types of Chinese dependency relationships for product reviews are derived from traditional English dependency grammar. The maximum entropy model is used to predict the opinion-relevant product feature relations. To train the model, a set of feature symbol combinations was designed by means of Chinese dependency relations. The experimental results show that the recall and F-score of our approach reach 78.68% and 75.36% respectively, which is clearly superior to Hu's adjacency-based method and Popescu's pattern-based method.
Entropy-based implied volatility and its information content
X. Xiao (Xiao); C. Zhou (Chen)
2016-01-01
This paper investigates the maximum entropy approach to estimating implied volatility. The entropy approach also allows one to measure option-implied skewness and kurtosis nonparametrically, and to construct confidence intervals. Simulations show that the entropy approach outperforms t
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i = 1, 2, \ldots, k$ the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
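Result (ii) can be checked numerically: with the single constraint E[X] = 2 on [0, ∞), the entropy maximizer should be the exponential density 0.5·exp(−x/2), i.e. f0 ∝ exp(c·x) with multiplier c = −1/2. The grid discretization and bisection bracket below are our own choices:

```python
import numpy as np

def maxent_density(grid, h, target, c_lo=-10.0, c_hi=0.0):
    """Maximum-entropy density on a uniform grid subject to E[h(X)] = target.
    By result (ii) the maximizer has the form f0(x) ∝ exp(c * h(x));
    the multiplier c is located by bisection on the moment constraint."""
    dx = grid[1] - grid[0]

    def moment(c):
        w = np.exp(c * h(grid))
        f = w / (w.sum() * dx)           # normalize to a density
        return (f * h(grid)).sum() * dx  # E[h(X)] under f

    for _ in range(200):                 # moment(c) is increasing in c
        c = 0.5 * (c_lo + c_hi)
        if moment(c) > target:
            c_hi = c
        else:
            c_lo = c
    w = np.exp(c * h(grid))
    return w / (w.sum() * dx), c

# mean constraint E[X] = 2 on [0, ∞), truncated to [0, 60] for the grid
grid = np.linspace(0.0, 60.0, 60001)
f0, c = maxent_density(grid, h=lambda x: x, target=2.0)
```

With several constraint functions h_i the same exponential-family form holds, but the multipliers must then be found jointly, e.g. by Newton's method on the vector of moments.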
Nonextensive random-matrix theory based on Kaniadakis entropy
Abul-Magd, A.Y. [Department of Mathematics, Faculty of Science, Zagazig University, Zagazig (Egypt)]. E-mail: a_y_abul_magd@hotmail.com
2007-02-12
The joint eigenvalue distributions of random-matrix ensembles are derived by applying the principle of maximum entropy to the Renyi, Abe and Kaniadakis entropies. While the Renyi entropy produces essentially the same matrix-element distributions as the expression previously obtained by using the Tsallis entropy, and the Abe entropy does not lead to a closed-form expression, the Kaniadakis entropy leads to a new generalized form of the Wigner surmise that describes a transition of the spacing distribution from chaos to order. This expression is compared with the corresponding expression obtained by assuming Tsallis' entropy, as well as with the results of a previous numerical experiment.
How objective is black hole entropy?
Lau, Y K
1994-01-01
The objectivity of black hole entropy is discussed in the particular case of a Schwarzschild black hole. Using Jaynes' maximum entropy formalism and the Euclidean path integral evaluation of the partition function, it is argued that in the semiclassical limit, when the fluctuation of the metric is neglected, the black hole entropy of a Schwarzschild black hole is equal to the maximal information entropy of an observer whose sole knowledge of the black hole is its mass. Black hole entropy becomes a measure of the number of its internal mass eigenstates, in accordance with the Boltzmann principle, only in the limit of negligible relative mass fluctuation. From the information theoretic perspective, the example of a Schwarzschild black hole seems to suggest that black hole entropy is no different from ordinary thermodynamic entropy. It is a property of the experimental data of a black hole, rather than being an intrinsic physical property of a black hole itself independent of any observer. However, it is still weakly objective in...
Chinese Word Segmentation via Word-position Tagging Based on Maximum Entropy Model
于江德; 王希杰; 樊孝忠
2011-01-01
The performance of Chinese word segmentation has been greatly improved by word-position-based approaches in recent years. This approach treats Chinese word segmentation as a word-position tagging problem over characters. With the help of powerful sequence tagging models, word-position-based methods have quickly risen to become a mainstream technique in this field, in which the selection of the feature template set and of the word-position tag set is crucial. This technique is studied here using different word-position tag sets with a maximum entropy model. Closed evaluations were performed on corpora from the Second International Chinese Word Segmentation Bakeoff (Bakeoff-2005), with comparative experiments on different tag sets and feature templates. Experimental results show that the feature template set TMPT-6 combined with a six-word-position tag set performs much better than the other configurations.
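The word-position view described above can be sketched with the common four-tag set (B/M/E/S); the paper's six-tag set further subdivides word-internal positions, and the exact TMPT-6 templates differ from the illustrative context features below:

```python
def words_to_tags(words):
    """Convert a segmented sentence to per-character word-position tags
    (B: begin, M: middle, E: end, S: single-character word)."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def tags_to_words(chars, tags):
    """Recover a segmentation from characters plus word-position tags."""
    words, cur = [], ""
    for ch, t in zip(chars, tags):
        cur += ch
        if t in ("E", "S"):
            words.append(cur)
            cur = ""
    if cur:
        words.append(cur)
    return words

def char_features(chars, i):
    """Illustrative context features for character i: neighbouring
    unigrams and bigrams, in the spirit of (but not identical to) TMPT-6."""
    pad = ["<B>"] + list(chars) + ["<E>"]
    j = i + 1
    return {
        "C-1": pad[j - 1], "C0": pad[j], "C1": pad[j + 1],
        "C-1C0": pad[j - 1] + pad[j], "C0C1": pad[j] + pad[j + 1],
    }

tags = words_to_tags(["最大", "熵", "模型"])  # -> B E S B E
```

A maximum entropy (multinomial logistic) classifier trained on such features, one decision per character, turns segmentation into the sequence tagging problem the abstract describes.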
梅灿华; 张玉红; 胡学钢; 李培培
2011-01-01
Traditional machine learning and data mining algorithms mainly assume that the training and test data are in the same feature space and follow the same distribution. However, in real applications data distributions change frequently, so these two hypotheses are difficult to uphold. In such cases most traditional algorithms are no longer applicable, because they usually require re-collecting and re-labeling large amounts of data, which is very expensive and time consuming. As a new learning framework, transfer learning can effectively solve this problem by transferring the knowledge learned from one or more source domains to a target domain. This paper focuses on one of the important branches of this field, namely inductive transfer learning, and proposes a weighted inductive transfer learning algorithm based on the maximum entropy model (WTLME). The algorithm transfers the parameters of the model learned in the source domain to the target domain and adjusts the weights of instances in the target domain to obtain a model with higher accuracy, thereby speeding up the learning process and achieving domain adaptation. Experimental results show the effectiveness of the algorithm.
New multifactor spatial prediction method based on Bayesian maximum entropy
杨勇; 张楚天; 贺立源
2013-01-01
The spatial distributions of soil properties (e.g., organic matter and heavy metal content) are vital to soil quality evaluation and regional environmental assessment. Currently, the spatial distribution of soil properties is usually predicted with classical geostatistics or environmental correlation, two methods that differ in theory. Geostatistics is based on the spatial correlation of sampling points, but it has deficiencies such as the lack of effective utilization of environmental information, the smoothing effect of predicted results, and the difficulty of satisfying the assumed single-point to multipoint Gaussian distribution. Environmental correlation, on the other hand, is based on the relationship between soil and environment, but it ignores the spatial correlation among sampling points. The two methods complement each other, so it is important to study how to integrate them, so that both the spatial correlation among sampling points and the relationship between soil and environmental factors can be used to improve prediction accuracy. We propose a new spatial prediction method based on the theory of Bayesian maximum entropy (BME), one of the best-known modern spatiotemporal geostatistical techniques. The main objective is to incorporate the results of classical geostatistics and a quantitative soil-landscape model in the BME framework. The result of ordinary kriging is taken as the prior probability density function (pdf), the sampling data as hard data, and the results of environmental correlation as soft data. The posterior pdf is calculated from the prior pdf, hard data and soft data. From the posterior pdf, predicted values at non-sampled points can be obtained which not only contain the spatial correlation between sample points but also take into account the relationship between soil properties and the environment. Meanwhile, the soil organic matter contents in
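The fusion step described above can be caricatured in one dimension: a kriging prediction supplies a Gaussian prior at an unsampled point, the environmental-correlation estimate enters as Gaussian soft data, and the posterior pdf is their normalized product. All numbers below are illustrative, not from the paper:

```python
import numpy as np

def bme_posterior(grid, prior_mean, prior_sd, soft_mean, soft_sd):
    """Toy BME-style update at one unsampled point: posterior pdf is
    proportional to (kriging prior) x (soft-data likelihood)."""
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    soft = np.exp(-0.5 * ((grid - soft_mean) / soft_sd) ** 2)
    post = prior * soft
    post /= np.sum(post) * (grid[1] - grid[0])  # normalize to a pdf
    return post

grid = np.linspace(0.0, 10.0, 4001)
post = bme_posterior(grid, prior_mean=4.0, prior_sd=1.0,
                     soft_mean=6.0, soft_sd=1.0)
post_mean = np.sum(grid * post) * (grid[1] - grid[0])
```

With equal prior and soft standard deviations the posterior mean sits midway between the two estimates, and its variance is halved, which is the intuitive payoff of combining the two information sources. Real BME also admits interval-valued or non-Gaussian soft data, which this toy omits.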
Universal entropy relations: entropy formulae and entropy bound
Liu, Hang; Xu, Wei; Zhu, Bin
2016-01-01
We survey the applications of universal entropy relations in black holes with multiple horizons. In sharp distinction to the conventional entropy product, the entropy relations discussed here not only improve our understanding of black hole entropy but also serve as an elegant technical trick for handling various entropy bounds and sums. Despite this primarily technical role, entropy relations have provided considerable insight into several different types of gravity, including massive gravity, Einstein-dilaton gravity and Horava-Lifshitz gravity. We present and discuss the results for each one.
Gadda, Davide; Vannucchi, Letizia; Niccolai, Franco; Neri, Anna T.; Carmignani, Luca; Pacini, Patrizio [Ospedale del Ceppo, U.O. Radiodiagnostica, Pistoia (Italy)
2005-12-01
Maximum intensity projection reconstructions from 2.5 mm unenhanced multidetector computed tomography axial slices were obtained from 49 patients within the first 6 h of anterior-circulation cerebral strokes to identify different patterns of the dense artery sign and their prognostic implications for the location and extent of the infarcted areas. The dense artery sign was found in 67.3% of cases. Increased density of the whole M1 segment of the middle cerebral artery, with extension to M2, was associated with a wider extension of cerebral infarcts in comparison to the M1 segment alone or distal M1 and M2. A dense sylvian branch of the middle cerebral artery was associated with a more restricted extension of the infarct territory. We found that 62.5% of patients without a demonstrable dense artery had a limited peripheral cortical or capsulonuclear lesion. In patients with 7-10 points on the Alberta Stroke Program Early Computed Tomography Score and a dense proximal MCA in the first hours of ictus, the mean decrease in the score between baseline and follow-up was 5.09 ± 1.92 points. In conclusion, maximum intensity projections from thin-slice images can be quickly obtained from standard computed tomography datasets using a multidetector scanner and are useful in identifying and correctly localizing the dense artery sign, with prognostic implications for the extent of cerebral damage. (orig.)
Alvarez R, J.T
1998-10-01
This thesis presents a microscopic model for the non-linear fluctuating hydrodynamics of superfluid helium (⁴He), developed by means of the maximum entropy method (Maxent). Chapter 1 demonstrates the necessity of developing a microscopic model for the fluctuating hydrodynamics of superfluid helium, starting with a brief overview of the theories and experiments developed to explain its behavior. It also presents the Morozov heuristic method for the construction of the non-linear fluctuating hydrodynamics of a simple fluid, a method that will be generalized for the construction of the non-linear fluctuating hydrodynamics of superfluid helium, and gives a brief summary of the content of the thesis. Chapter 2 reproduces the construction of a generalized Fokker-Planck (GFP) equation for a distribution function associated with the coarse-grained variables, a function defined with the aid of a nonequilibrium statistical operator ρ̂_FP that is evaluated as the Wigner function through ρ_CG obtained by Maxent. This GFP equation is then reduced to a non-linear local FP equation by considering a slow Markov process in the coarse-grained variables. In this equation appears a matrix D_mn, defined with a nonequilibrium coarse-grained statistical operator ρ̂_CG, whose matrix elements are used in the construction of the non-linear fluctuating hydrodynamic equations of superfluid helium. In Chapter 3 the Lagrange multipliers are evaluated in order to determine ρ̂_CG by means of the local equilibrium statistical operator ρ̃_l, under the hypothesis that the system presents small fluctuations. The currents associated with the coarse-grained variables are also determined, and the matrix elements D_mn are evaluated with the aid of a quasi-equilibrium statistical operator ρ̂_qe instead
Ingo Klein
2016-07-01
Full Text Available A new kind of entropy will be introduced which generalizes both the differential entropy and the cumulative (residual) entropy. The generalization is twofold. First, we simultaneously define the entropy for cumulative distribution functions (cdfs) and survivor functions (sfs), instead of defining it separately for densities, cdfs, or sfs. Secondly, we consider a general "entropy generating function" φ, the same way Burbea et al. (IEEE Trans. Inf. Theory 1982, 28, 489–495) and Liese et al. (Convex Statistical Distances; Teubner-Verlag, 1987) did in the context of φ-divergences. Combining the ideas of φ-entropy and cumulative entropy leads to the new "cumulative paired φ-entropy" ( C P E φ ). This new entropy has already been discussed in at least four scientific disciplines, be it with certain modifications or simplifications. In fuzzy set theory, for example, cumulative paired φ-entropies were defined for membership functions, whereas in uncertainty and reliability theories some variations of C P E φ were recently considered as measures of information. With a single exception, the discussions in the scientific disciplines appear to be held independently of each other. We consider C P E φ for continuous cdfs and show that C P E φ is a measure of dispersion rather than a measure of information. In the first place, this will be demonstrated by deriving an upper bound which is determined by the standard deviation and by solving the maximum entropy problem under the restriction of a fixed variance. Next, this paper specifically shows that C P E φ satisfies the axioms of a dispersion measure. The corresponding dispersion functional can easily be estimated by an L-estimator, with all its known asymptotic properties. C P E φ is the basis for several related concepts like mutual φ-information, φ-correlation, and φ-regression, which generalize Gini correlation and Gini regression. In addition, linear rank tests for scale that
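The fixed-variance maximum entropy problem mentioned in the abstract has a classical benchmark for the differential entropy: among all densities with a given variance, the Gaussian is the maximizer. A minimal numerical check of this fact, comparing closed-form entropies (illustrative only; the function names and the choice of the uniform density as a competitor are not from the paper):

```python
import math

def gaussian_entropy(sigma):
    # Differential entropy of N(0, sigma^2): 0.5 * ln(2*pi*e*sigma^2)
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def uniform_entropy(sigma):
    # A uniform density on an interval of width w has variance w^2/12
    # and differential entropy ln(w); pick w so the variance is sigma^2.
    w = math.sqrt(12.0) * sigma
    return math.log(w)

sigma = 1.0
print(f"Gaussian: {gaussian_entropy(sigma):.4f} nats")  # ~1.4189
print(f"Uniform:  {uniform_entropy(sigma):.4f} nats")   # ~1.2425
```

For any common variance the Gaussian entropy exceeds the uniform one, consistent with the Gaussian solving the fixed-variance maximum entropy problem.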
Quantum and Ecosystem Entropies
A. D. Kirwan
2008-06-01
Full Text Available Ecosystems and quantum gases share a number of superficial similarities, including enormous numbers of interacting elements and the fundamental role of energy in such interactions. A theory for the synthesis of data and prediction of new phenomena is well established in quantum statistical mechanics. The premise of this paper is that the reason a comparable unifying theory has not emerged in ecology is that a proper role for entropy has yet to be assigned. To this end, a phase space entropy model of ecosystems is developed. Specification of an ecosystem phase space cell size based on microbial mass, length, and time scales gives an ecosystem uncertainty parameter only about three orders of magnitude larger than Planck's constant. Ecosystem equilibrium is specified by conservation of biomass and total metabolic energy, along with the principle of maximum entropy at equilibrium. Both Bose-Einstein and Fermi-Dirac equilibrium conditions arise in ecosystem applications. The paper concludes with a discussion of some broader aspects of an ecosystem phase space.
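The Bose-Einstein and Fermi-Dirac equilibrium conditions invoked above differ only in the sign in the denominator of the mean occupation number. A hedged sketch of the two standard occupation formulas (parameter values and units are illustrative choices, not taken from the paper):

```python
import math

def bose_einstein(eps, mu, kT):
    # Mean occupation of a bosonic mode; requires eps > mu.
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def fermi_dirac(eps, mu, kT):
    # Mean occupation of a fermionic mode; always between 0 and 1.
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

# Occupations fall with energy in both statistics, but bosons pile up
# near the chemical potential while fermion occupancy is capped at 1.
for eps in (0.5, 1.0, 2.0):
    print(eps, bose_einstein(eps, mu=0.0, kT=1.0),
          fermi_dirac(eps, mu=0.0, kT=1.0))
```

At equal energy, chemical potential, and temperature, the Bose-Einstein occupation always exceeds the Fermi-Dirac one, and the fermionic occupation never exceeds unity.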
Local entropy of a nonequilibrium fermion system
Stafford, Charles A.; Shastry, Abhay
2017-03-01
The local entropy of a nonequilibrium system of independent fermions is investigated and analyzed in the context of the laws of thermodynamics. It is shown that the local temperature and chemical potential can only be expressed in terms of derivatives of the local entropy for linear deviations from local equilibrium. The first law of thermodynamics is shown to lead to an inequality, not equality, for the change in the local entropy as the nonequilibrium state of the system is changed. The maximum entropy principle (second law of thermodynamics) is proven: a nonequilibrium distribution has a local entropy less than or equal to a local equilibrium distribution satisfying the same constraints. It is shown that the local entropy of the system tends to zero when the local temperature tends to zero, consistent with the third law of thermodynamics.
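The second-law statement above can be illustrated numerically for independent fermions: the entropy of a set of occupations is s = -Σ_k [f_k ln f_k + (1 - f_k) ln(1 - f_k)], and any deviation from the Fermi-Dirac occupations that preserves the particle number and energy lowers it. A minimal sketch (the three-level spectrum and the perturbation pattern are illustrative choices, not from the paper):

```python
import math

def fermi_dirac(eps, mu, kT):
    # Equilibrium (maximum entropy) occupation of a fermionic level.
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def entropy(occupations):
    # Entropy of independent fermions: -sum [f ln f + (1-f) ln(1-f)]
    return -sum(f * math.log(f) + (1.0 - f) * math.log(1.0 - f)
                for f in occupations)

levels = [0.0, 1.0, 2.0]  # equally spaced single-particle energies
f_eq = [fermi_dirac(e, mu=1.0, kT=1.0) for e in levels]

# The perturbation (d, -2d, d) preserves both the particle number
# (sum of f) and the energy (sum of eps*f) for equally spaced levels.
d = 0.05
f_neq = [f_eq[0] + d, f_eq[1] - 2.0 * d, f_eq[2] + d]

print(entropy(f_eq), entropy(f_neq))  # the equilibrium entropy is larger
```

The nonequilibrium occupations satisfy the same particle-number and energy constraints, yet their entropy is strictly smaller, in line with the maximum entropy principle proven in the paper.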
Paracrystalline property of high-entropy alloys
Shaoqing Wang
2013-10-01
Full Text Available Atomic structure models of six-component high-entropy alloys with a body-centered cubic structure are successfully built, for the first time, according to the principle of maximum entropy. The lattice distortion parameters g of seven typical high-entropy alloys are calculated. From the optimized lattice configurations of high-entropy alloys, we show that these alloys are ideal three-dimensional paracrystals. The formation mechanism, structural features, mechanical properties, and application prospects of high-entropy alloys are discussed in comparison with traditional alloys. The novel properties of body-centered cubic high-entropy alloys are attributed to the failure of the dislocation deformation mechanism and the difficulty of directed particle diffusion.
Generalized Gravitational Entropy from Fermion Fields
Huang, Wung-Hong
2016-01-01
The generalized gravitational entropy recently proposed by Lewkowycz and Maldacena [1] is extended to systems of fermion fields. We first find the regular wave solution of the fermion field, with arbitrary frequency and mode number, on the BTZ spacetime, and then use it to calculate the exact gravitational entropy. The results show that there is a threshold frequency below which the fermion fields do not contribute to the generalized gravitational entropy. Also, the static and zero-mode solutions carry no entropy, in contrast to the scalar-field case. We also find that the entropy of static scalar fields and non-static fermions is an increasing function of the mode number; after reaching its maximum, the entropy becomes a decreasing function and tends to the asymptotic value.
An Entropy Measure of Non-Stationary Processes
Ling Feng Liu
2014-03-01
Full Text Available Shannon’s source entropy formula is not appropriate to measure the uncertainty of non-stationary processes. In this paper, we propose a new entropy measure for non-stationary processes, which is greater than or equal to Shannon’s source entropy. The maximum entropy of the non-stationary process has been considered, and it can be used as a design guideline in cryptography.
Kruglikov, Boris; Rypdal, Martin
2005-01-01
The topological entropy of piecewise affine maps is studied. It is shown that singularities may contribute to the entropy only if there is angular expansion and we bound the entropy via the expansion rates of the map. As a corollary we deduce that non-expanding conformal piecewise affine maps have zero topological entropy. We estimate the entropy of piecewise affine skew-products. Examples of abnormal entropy growth are provided.