WorldWideScience

Sample records for adaptive markov chain

  1. On adaptive Markov chain Monte Carlo algorithms

    Atchadé, Yves F.; Rosenthal, Jeffrey S.

    2005-01-01

    We look at adaptive Markov chain Monte Carlo algorithms that generate stochastic processes based on sequences of transition kernels, where each transition kernel is allowed to depend on the history of the process. We show under certain conditions that the stochastic process generated is ergodic, with appropriate stationary distribution. We use this result to analyse an adaptive version of the random walk Metropolis algorithm where the scale parameter σ is sequentially adapted using a Robbins-...
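
    The abstract outlines the idea only in words; as a rough illustration of Robbins-Monro scale adaptation inside a random walk Metropolis sampler, here is a minimal Python sketch. It is not the authors' exact algorithm: the target density, the 0.44 acceptance target, and the step-size schedule are illustrative assumptions.

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=10000, target_accept=0.44, seed=0):
    """Random walk Metropolis with Robbins-Monro adaptation of log(sigma)."""
    rng = np.random.default_rng(seed)
    x, log_sigma = x0, 0.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + np.exp(log_sigma) * rng.standard_normal()
        accept = np.log(rng.uniform()) < log_target(prop) - log_target(x)
        if accept:
            x = prop
        # Robbins-Monro step: raise sigma after acceptances, lower it after
        # rejections; the diminishing step size (i+1)**-0.6 makes the
        # adaptation fade, the kind of condition ergodicity results for
        # adaptive MCMC rely on.
        log_sigma += (accept - target_accept) / (i + 1) ** 0.6
        samples[i] = x
    return samples, np.exp(log_sigma)

# Example: sample a standard normal target.
samples, sigma = adaptive_rwm(lambda x: -0.5 * x * x, x0=0.0)
print(sigma, samples[5000:].mean(), samples[5000:].std())
```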

  2. An Adaptively Constructed Algebraic Multigrid Preconditioner for Irreducible Markov Chains

    Brannick, James; Kahl, Karsten; Sokolovic, Sonja

    2014-01-01

    The computation of stationary distributions of Markov chains is an important task in the simulation of stochastic models. The linear systems arising in such applications involve non-symmetric M-matrices, making algebraic multigrid methods a natural choice for solving these systems. In this paper we investigate extensions and improvements of the bootstrap algebraic multigrid framework for solving these systems. This is achieved by reworking the bootstrap setup process to use singular vectors i...

  3. Graphs: Associated Markov Chains

    Murthy, Garimella Rama

    2012-01-01

    In this research paper, weighted/unweighted, directed/undirected graphs are associated with interesting Discrete Time Markov Chains (DTMCs) as well as Continuous Time Markov Chains (CTMCs). The equilibrium/transient behaviour of such Markov chains is studied. Also, the entropy dynamics (Shannon entropy) of certain structured Markov chains is investigated. Finally, certain structured graphs and the associated Markov chains are studied.

  4. Adaptive continuous time Markov chain approximation model to general jump-diffusions

    Mario Cerrato; Chia Chun Lo; Konstantinos Skindilias

    2011-01-01

    We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expa...

  5. Discrete Quantum Markov Chains

    Faigle, Ulrich

    2010-01-01

    A framework for finite-dimensional quantum Markov chains on Hilbert spaces is introduced. Quantum Markov chains generalize both classical Markov chains with possibly hidden states and existing models of quantum walks on finite graphs. Quantum Markov chains are based on Markov operations that may be applied to quantum systems and include quantum measurements, for example. It is proved that quantum Markov chains are asymptotically stationary and hence possess ergodic and entropic properties. With a quantum Markov chain one may associate a quantum Markov process, which is a stochastic process in the classical sense. Generalized Markov chains allow a representation with respect to a generalized Markov source model with definite (but possibly hidden) states relative to which observables give rise to classical stochastic processes. It is demonstrated that this model allows for observables to violate Bell's inequality.

  6. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Vrugt, Jasper A [Los Alamos National Laboratory]; Hyman, James M [Los Alamos National Laboratory]; Robinson, Bruce A [Los Alamos National Laboratory]; Higdon, Dave [Los Alamos National Laboratory]; Ter Braak, Cajo J F [NETHERLANDS]; Diks, Cees G H [UNIV OF AMSTERDAM]

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
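
    For a feel of the population move that DREAM builds on, here is a minimal sketch of plain Differential Evolution MCMC (ter Braak's DE-MC jump). It is not the full DREAM algorithm with randomized subspace sampling; the target, chain count, and jump scale 2.38/sqrt(2d) are conventional choices in an assumption-laden toy example.

```python
import numpy as np

def de_mc(log_target, n_chains=10, d=2, n_iter=5000, seed=1):
    """Simplified Differential Evolution MCMC: each chain proposes a jump
    built from the difference of two other randomly chosen chains. This is
    the parallel-chain move DREAM extends; not the full DREAM algorithm."""
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * d)           # standard DE-MC jump scale
    X = rng.standard_normal((n_chains, d))  # initial population
    logp = np.array([log_target(x) for x in X])
    out = []
    for _ in range(n_iter):
        for i in range(n_chains):
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                                size=2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) + 1e-6 * rng.standard_normal(d)
            lp = log_target(prop)
            if np.log(rng.uniform()) < lp - logp[i]:  # Metropolis accept
                X[i], logp[i] = prop, lp
        out.append(X.copy())
    return np.concatenate(out)

# Example: a correlated 2-D Gaussian target.
cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
draws = de_mc(lambda x: -0.5 * x @ cov_inv @ x)
print(draws.mean(axis=0), np.cov(draws.T))
```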

  7. Fields From Markov Chains

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.

  8. Adaptive relaxation for the steady-state analysis of Markov chains

    Horton, Graham

    1994-01-01

    We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
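
    A minimal sketch of the dynamic-visit idea on a small chain given by its transition matrix P: states are re-queued only when one of their inputs has changed. The worklist heuristic below is an illustrative guess at such a strategy, not Horton's exact method.

```python
import numpy as np
from collections import deque

def adaptive_gauss_seidel(P, tol=1e-12, max_updates=100000):
    """Steady-state vector of an irreducible chain by Gauss-Seidel updates
    with a dynamic worklist: only states whose inputs changed are revisited.
    Illustrative sketch of the adaptive idea, not Horton's exact scheme."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    work = deque(range(n))
    queued = [True] * n
    updates = 0
    while work and updates < max_updates:
        j = work.popleft()
        queued[j] = False
        new = pi @ P[:, j]                 # Gauss-Seidel: uses freshest values
        if abs(new - pi[j]) > tol:
            pi[j] = new
            for k in np.nonzero(P[j])[0]:  # successors of j see a changed input
                if not queued[k]:
                    work.append(k)
                    queued[k] = True
        updates += 1
    return pi / pi.sum()                   # renormalize at the end

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
print(adaptive_gauss_seidel(P))            # ~ [0.25, 0.5, 0.25]
```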

  9. Fuzzy Markov chains: uncertain probabilities

    James J. Buckley; Eslami, Esfandiar

    2002-01-01

    We consider finite Markov chains where there are uncertainties in some of the transition probabilities. These uncertainties are modeled by fuzzy numbers. Using a restricted fuzzy matrix multiplication we investigate the properties of regular and absorbing fuzzy Markov chains and show that the basic properties of these classical Markov chains generalize to fuzzy Markov chains.

  10. Markov processes and controlled Markov chains

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  11. Phasic Triplet Markov Chains.

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data. PMID:26353069

  12. Markov chain Monte Carlo test of toric homogeneous Markov chains

    Takemura, Akimichi; Hara, Hisayuki

    2010-01-01

    Markov chain models are used in various fields, such as behavioral sciences or econometrics. Although the goodness of fit of the model is usually assessed by large sample approximation, it is desirable to use conditional tests if the sample size is not large. We study Markov bases for performing conditional tests of the toric homogeneous Markov chain model, which is the envelope exponential family for the usual homogeneous Markov chain model. We give a complete description of a Markov basis for ...

  13. Putting Markov Chains Back into Markov Chain Monte Carlo

    Barker, Richard J.; Schofield, Matthew R.

    2007-01-01

    Markov chain theory plays an important role in statistical inference both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history; however, the use of Markov chain theory in developing algorithms for statistical inference has only recently become popular. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also ...

  14. Variance bounding Markov chains

    Roberts, Gareth O.; Jeffrey S. Rosenthal

    2008-01-01

    We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.

  15. An adaptive Monte-Carlo Markov chain algorithm for inference from mixture signals

    Adaptive Metropolis (AM) is a powerful recent algorithmic tool in numerical Bayesian data analysis. AM builds on a well-known Markov Chain Monte Carlo algorithm but optimizes the rate of convergence to the target distribution by automatically tuning the design parameters of the algorithm on the fly. Label switching is a major problem in inference on mixture models because of the invariance to symmetries. The simplest (non-adaptive) solution is to modify the prior in order to make it select a single permutation of the variables, introducing an identifiability constraint. This solution is known to cause artificial biases by not respecting the topology of the posterior. In this paper we describe an online relabeling procedure which can be incorporated into the AM algorithm. We give elements of convergence of the algorithm and identify the link between its modified target measure and the original posterior distribution of interest. We illustrate the algorithm on a synthetic mixture model inspired by the muonic water Cherenkov signal of the surface detectors in the Pierre Auger Experiment.

  16. On Markov Chains and Filtrations

    Spreij, Peter

    1997-01-01

    In this paper we rederive some well known results for continuous time Markov processes that live on a finite state space. Martingale techniques are used throughout the paper. Special attention is paid to the construction of a continuous time Markov process, when we start from a discrete time Markov chain. The Markov property here holds with respect to filtrations that need not be minimal.

  17. Markov chains theory and applications

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest. The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the

  18. Quadratic Variation by Markov Chains

    Hansen, Peter Reinhard; Horel, Guillaume

    We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...

  19. Bibliometric Application of Markov Chains.

    Pao, Miranda Lee; McCreery, Laurie

    1986-01-01

    A rudimentary description of Markov Chains is presented in order to introduce their use to describe and to predict authors' movements among subareas of the discipline of ethnomusicology. Other possible applications are suggested. (Author)

  20. Hidden hybrid Markov/semi-Markov chains.

    GUÉDON, YANN

    2005-01-01

    Models that combine Markovian states with implicit geometric state occupancy distributions and semi-Markovian states with explicit state occupancy distributions are investigated. This type of model retains the flexibility of hidden semi-Markov chains ...

  1. Compressing redundant information in Markov chains

    Aletti, Giacomo

    2006-01-01

    Given a strongly stationary Markov chain and a finite set of stopping rules, we prove the existence of a polynomial algorithm which projects the Markov chain onto a minimal Markov chain without redundant information. Markov complexity is hence defined and tested on some classical problems.

  2. DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems

    J. A. Vrugt

    2011-04-01

    Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov Chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and the emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a sudoku puzzle and a rainfall-runoff model calibration problem are used to illustrate DREAM(D).

  3. Beyond Markov Chains, Towards Adaptive Memristor Network-based Music Generation

    Gale, Ella; Matthews, Oliver; Costello, Ben de Lacy; Adamatzky, Andrew

    2013-01-01

    We undertook a study of the use of a memristor network for music generation, making use of the memristor's memory to go beyond the Markov hypothesis. Seed transition matrices are created and populated using memristor equations, and are shown to generate musical melodies and change in style over time as a result of feedback into the transition matrix. The spiking properties of simple memristor networks are demonstrated and discussed with reference to applications of music making. The lim...

  4. On a Result for Finite Markov Chains

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…

  5. Intricacies of Dependence between Components of Multivariate Markov Chains: Weak Markov Consistency and Markov Copulae

    Bielecki, Tomasz R.; Jakubowski, Jacek; Niewęgłowski, Mariusz

    2011-01-01

    This article continues our study of Markovian consistency and Markov copulae. In particular, we characterize the weak Markovian consistency for finite Markov chains. We discuss some aspects of dependence between the components of a multivariate Markov chain in the context of weak Markovian consistency and strong Markovian consistency. In this connection, we also introduce and discuss the concept of weak Markov copulae.

  6. Bayesian M-T clustering for reduced parameterisation of Markov chains used for non-linear adaptive elements

    Valečková, Markéta; Kárný, Miroslav; Sutanto, E. L.

    2001-01-01

    Vol. 37, No. 6 (2001), pp. 1071-1078. ISSN 0005-1098. R&D Projects: GA ČR GA102/99/1564. Other grants: IST(XE) 1999/12058. Institutional research plan: AV0Z1075907. Keywords: Markov chain; clustering; Bayesian mixture estimation. Subject RIV: BC - Control Systems Theory. Impact factor: 1.449, year: 2001

  7. Spectral methods for quantum Markov chains

    The aim of this project is to contribute to our understanding of quantum time evolutions, where we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  8. Using Games to Teach Markov Chains

    Johnson, Roger W.

    2003-01-01

    Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…

  9. Transition Probability Estimates for Reversible Markov Chains

    Telcs, Andras

    2000-01-01

    This paper provides transition probability estimates of transient reversible Markov chains. The key condition of the result is the spatial symmetry and polynomial decay of the Green's function of the chain.

  10. Revisiting Causality in Markov Chains

    Shojaee, Abbas

    2016-01-01

    Identifying causal relationships is a key premise of scientific research. The growth of observational data in different disciplines along with the availability of machine learning methods offers the possibility of using an empirical approach to identifying potential causal relationships, to deepen our understandings of causal behavior and to build theories accordingly. Conventional methods of causality inference from observational data require a considerable length of time series data to capture cause-effect relationships. We find that potential causal relationships can be inferred from the composition of one-step transition rates to and from an event. Also known as a Markov chain, one-step transition rates are a commonly available resource in different scientific disciplines. Here we introduce a simple, effective and computationally efficient method that we termed 'Causality Inference using Composition of Transitions' (CICT) to reveal causal structure with high accuracy. We characterize the differences in causes,...

  11. REPRESENTING MARKOV CHAINS WITH TRANSITION DIAGRAMS

    Farida Kachapova

    2013-01-01

    Stochastic processes have many useful applications and are taught in several university programmes. Students often encounter difficulties in learning stochastic processes and Markov chains, in particular. In this article we describe a teaching strategy that uses transition diagrams to represent a Markov chain and to re-define properties of its states in simple terms of directed graphs. This strategy utilises the students’ intuition and makes the learning of complex concepts about Markov chains faster and easier. The method is illustrated by worked examples. The described strategy helps students to master properties of finite Markov chains, so they have a solid basis for the study of infinite Markov chains and other stochastic processes.
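
    As a small companion to the idea, the sketch below builds the transition diagram (an edge i -> j whenever P[i, j] > 0) and reads accessibility off it with a breadth-first search; the example matrix is hypothetical.

```python
import numpy as np
from collections import deque

def reachable(P, i):
    """States accessible from i in the transition diagram, i.e. the directed
    graph with an edge i -> j whenever P[i, j] > 0."""
    seen, frontier = {i}, deque([i])
    while frontier:
        u = frontier.popleft()
        for v in np.nonzero(P[u])[0]:
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing

for i in range(3):
    print(i, "reaches", sorted(reachable(P, i)),
          "(absorbing)" if P[i, i] == 1.0 else "")
```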

  12. Entropy Rate for Hidden Markov Chains with rare transitions

    Peres, Yuval; Quas, Anthony

    2010-01-01

    We consider Hidden Markov Chains obtained by passing a Markov Chain with rare transitions through a noisy memoryless channel. We obtain asymptotic estimates for the entropy of the resulting Hidden Markov Chain as the transition rate is reduced to zero.

  13. Estimating hidden semi-Markov chains from discrete sequences.

    Guédon, Yann

    2003-01-01

    This article addresses the estimation of hidden semi-Markov chains from nonstationary discrete sequences. Hidden semi-Markov chains are particularly useful to model the succession of homogeneous zones or segments along sequences. A discrete hidden semi-Markov chain is composed of a nonobservable state process, which is a semi-Markov chain, and a discrete output process. Hidden semi-Markov chains generalize hidden Markov chains and enable the modeling of various durat...

  14. Markov chains models, algorithms and applications

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  15. Markov chains analytic and Monte Carlo computations

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies.A detailed and rigorous presentation of Markov chains with discrete time and state space.An appendix presenting probabilistic notions that are nec

  16. Generalized crested products of Markov chains

    D'Angeli, Daniele

    2010-01-01

    We define a finite Markov chain, called generalized crested product, which naturally appears as a generalization of the first crested product of Markov chains. A complete spectral analysis is developed and the $k$-step transition probability is given. It is important to remark that this Markov chain describes a more general version of the classical Ehrenfest diffusion model. As a particular case, one gets a generalization of the classical Insect Markov chain defined on the ultrametric space. Finally, an interpretation in terms of representation group theory is given, by showing the correspondence between the spectral decomposition of the generalized crested product and the Gelfand pairs associated with the generalized wreath product of permutation groups.

  17. Interacting Particle Markov Chain Monte Carlo

    Rainforth, Tom; Naesseth, Christian A.; Lindsten, Fredrik; Paige, Brooks; van de Meent, Jan-Willem; Doucet, Arnaud; Wood, Frank

    2016-01-01

    We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method that introduces a coupling between multiple standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent total computational budget. An additional advant...

  18. Quantum Markov Chain Mixing and Dissipative Engineering

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state of the system... (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical... Finally, we consider three independent tasks of dissipative engineering: dissipatively preparing a maximally entangled state of two atoms trapped in an optical cavity, dissipative preparation of graph states, and dissipative quantum computing construction.

  19. Stationary Probability Vectors of Higher-order Markov Chains

    Li, Chi-Kwong; Zhang, Shixiao

    2013-01-01

    We consider the higher-order Markov Chain, and characterize the second order Markov chains admitting every probability distribution vector as a stationary vector. The result is used to construct Markov chains of higher-order with the same property. We also study conditions under which the set of stationary vectors of the Markov chain has a certain affine dimension.

  20. Markov chains and decision processes for engineers and managers

    Sheskin, Theodore J

    2010-01-01

    Markov Chain Structure and Models: Historical Note; States and Transitions; Model of the Weather; Random Walks; Estimating Transition Probabilities; Multiple-Step Transition Probabilities; State Probabilities after Multiple Steps; Classification of States; Markov Chain Structure; Markov Chain Models; Problems; References. Regular Markov Chains: Steady State Probabilities; First Passage to a Target State; Problems; References. Reducible Markov Chains: Canonical Form of the Transition Matrix; Th...

  1. Analysis of a quantum Markov chain

    A quantum chain is analogous to a classical stationary Markov chain except that the probability measure is replaced by a complex amplitude measure and the transition probability matrix is replaced by a transition amplitude matrix. After considering the general situation, we study a particular example of a quantum chain whose transition amplitude matrix has the form of a Dirichlet matrix. Such matrices generate a discrete analog of the usual continuum Feynman amplitude. We then compute the probability distribution for these quantum chains

  2. Conditional Markov Chains Part II: Consistency and Copulae

    Bielecki, Tomasz R.; Jakubowski, Jacek; Niewęgłowski, Mariusz

    2015-01-01

    In this paper we continue the study of conditional Markov chains (CMCs) with finite state spaces, that we initiated in Bielecki, Jakubowski and Niewęgłowski (2015). Here, we turn our attention to the study of Markov consistency and Markov copulae with regard to CMCs, and thus we follow up on the study of Markov consistency and Markov copulae for ordinary Markov chains that we presented in Bielecki, Jakubowski and Niewęgłowski (2013).

  3. Markov chains for testing redundant software

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
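
    The core estimation step, counting observed transitions and normalizing rows, is standard; a minimal sketch follows. The three error-state labels and the run data below are made up for illustration, not taken from the experiment.

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """Maximum-likelihood estimate of a DTMC transition matrix from observed
    state sequences: count transitions, then normalize each row."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions are left as all zeros.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical runs over 3 error states (0 = no error, 1 = one version in
# error, 2 = majority in error); the data here are purely illustrative.
runs = [[0, 0, 1, 0, 0, 2, 0], [0, 1, 1, 0, 0, 0]]
print(estimate_transition_matrix(runs, 3))
```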

  4. Entropy Computation in Partially Observed Markov Chains

    Desbouvries, François

    2006-11-01

    Let $X = \{X_n\}_{n\in\mathbb{N}}$ be a hidden process and $Y = \{Y_n\}_{n\in\mathbb{N}}$ be an observed process. We assume that (X,Y) is a (pairwise) Markov Chain (PMC). PMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient parameter estimation and Bayesian restoration algorithms. In this paper we propose a fast (i.e., O(N)) algorithm for computing the entropy of $\{X_n\}_{n=0}^{N}$ given an observation sequence $\{y_n\}_{n=0}^{N}$.

  5. Markov Chain Approximations to Singular Stable-like Processes

    Xu, Fangjun

    2012-01-01

    We consider the Markov chain approximations for singular stable-like processes. First we obtain properties of some Markov chains. Then we construct the approximating Markov chains and give a necessary condition for weak convergence of these chains to singular stable-like processes.

  6. Differential evolution Markov chain with snooker updater and fewer chains

    Vrugt, Jasper A [Los Alamos National Laboratory; Ter Braak, Cajo J F [NON LANL

    2008-01-01

    Differential Evolution Markov Chain (DE-MC) is an adaptive MCMC algorithm, in which multiple chains are run in parallel. Standard DE-MC requires at least N=2d chains to be run in parallel, where d is the dimensionality of the posterior. This paper extends DE-MC with a snooker updater and shows by simulation and real examples that DE-MC can work for d up to 50--100 with fewer parallel chains (e.g. N=3) by exploiting information from their past by generating jumps from differences of pairs of past states. This approach extends the practical applicability of DE-MC and is shown to be about 5--26 times more efficient than the optimal Normal random walk Metropolis sampler for the 97.5% point of a variable from a 25--50 dimensional Student $t_3$ distribution. In a nonlinear mixed effects model example the approach outperformed a block-updater geared to the specific features of the model.

  7. Performance Modeling of Communication Networks with Markov Chains

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMC) and continuous time Markov chains (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
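
    The central computation the book describes, finding a DTMC steady-state distribution from the balance equations, can be sketched in a few lines: replace one balance equation with the normalization constraint and solve the resulting linear system. The two-state example is illustrative.

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with sum(pi) = 1 by replacing one balance equation
    with the normalization constraint (standard linear-algebra approach)."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])  # n independent equations
    b = np.zeros(n)
    b[-1] = 1.0                                          # normalization
    return np.linalg.solve(A, b)

# Two-state on/off channel model (illustrative numbers).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(steady_state(P))   # [0.8, 0.2]
```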

  8. Denumerable Markov decision chains: sensitive optimality criteria

    A. Hordijk (Arie); R. Dekker (Rommert)

    1991-01-01

    In this paper we investigate denumerable state semi-Markov decision chains with small interest rates. We consider average and Blackwell optimality and allow for multiple closed sets and unbounded immediate rewards. Our analysis uses the existence of a Laurent series expansion for the tot

  9. Local stability in a transient Markov chain

    Adan, Ivo; Foss, Sergey; Shneer, Seva; Weiss, Gideon

    2015-01-01

    We prove two lemmas giving conditions under which a system described by a transient Markov chain will display local stability. Examples of such systems include partly overloaded Jackson networks, partly overloaded polling systems, and overloaded multi-server queues with skill-based service under the first-come first-served policy.

  10. Markov chains with quasitoeplitz transition matrix

    Alexander M. Dukhovny

    1989-01-01

    This paper investigates a class of Markov chains which are frequently encountered in various applications (e.g., queueing systems, dams and inventories with feedback). Generating functions of transient and steady state probabilities are found by solving a special Riemann boundary value problem on the unit circle. A criterion of ergodicity is established.

  11. Markov Chains with Stochastically Stationary Transition Probabilities

    Orey, Steven

    1991-01-01

    Markov chains on a countable state space are studied under the assumption that the transition probabilities $(P_n(x,y))$ constitute a stationary stochastic process. An introductory section exposing some basic results of Nawrotzki and Cogburn is followed by four sections of new results.

  12. Metric on state space of Markov chain

    Rozinas, M. R.

    2010-01-01

    We consider finite irreducible Markov chains. It is shown that the mean hitting time from one state to another satisfies the triangle inequality. Hence, the sum of the mean hitting times between a pair of states in both directions is a metric on the space of states.
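
    A quick numerical illustration of the result: mean hitting times solve a small linear system, and the symmetrized quantity d(i, j) = h(i, j) + h(j, i) can be spot-checked against the triangle inequality. The example chain is an assumption.

```python
import numpy as np

def mean_hitting_times(P):
    """H[i, j] = expected steps to first reach j from i (H[j, j] = 0), from
    the linear system h(i, j) = 1 + sum_k P[i, k] h(k, j) for i != j."""
    n = P.shape[0]
    H = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        Q = P[np.ix_(idx, idx)]            # chain restricted away from j
        H[idx, j] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return H

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
H = mean_hitting_times(P)
D = H + H.T                                # symmetrized hitting-time distance
# Spot-check the triangle inequality d(0,2) <= d(0,1) + d(1,2):
print(D[0, 2], "<=", D[0, 1] + D[1, 2])
```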

  13. Document Ranking Based upon Markov Chains.

    Danilowicz, Czeslaw; Balinski, Jaroslaw

    2001-01-01

    Considers how the order of documents in information retrieval responses is determined and introduces a method that uses a probabilistic model of a document set where documents are regarded as states of a Markov chain and where transition probabilities are directly proportional to similarities between documents. (Author/LRW)

  14. Asymptotic properties of quantum Markov chains

    The asymptotic dynamics of discrete quantum Markov chains generated by the most general physically relevant quantum operations is investigated. It is shown that it is confined to an attractor space in which the resulting quantum Markov chain is diagonalizable. A construction procedure of a basis of this attractor space and its associated dual basis of 1-forms is presented. It is applicable whenever a strictly positive quantum state exists which is contracted or left invariant by the generating quantum operation. Moreover, algebraic relations between the attractor space and Kraus operators involved in the definition of a quantum Markov chain are derived. This construction is not only expected to offer significant computational advantages in cases in which the dimension of the Hilbert space is large and the dimension of the attractor space is small, but it also sheds new light onto the relation between the asymptotic dynamics of discrete quantum Markov chains and fixed points of their generating quantum operations. Finally, we show that without any restriction our construction applies to all initial states whose support belongs to the so-called recurrent subspace. (paper)

  15. Markov Chain Estimation of Avian Seasonal Fecundity

    To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...

  16. A Martingale Decomposition of Discrete Markov Chains

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful for...

  17. One-Dimensional Markov Random Fields, Markov Chains and Topological Markov Fields

    Chandgotia, N; G. Han; Marcus, B; Meyerovitch, T; Pavlov, R

    2014-01-01

    In this paper we show that any one-dimensional stationary, finite-valued Markov Random Field (MRF) is a Markov chain, without any mixing condition or condition on the support. Our proof makes use of two properties of the support $X$ of a finite-valued stationary MRF: 1) $X$ is non-wandering (this is a property of the support of any finite-valued stationary process) and 2) $X$ is a topological Markov field (TMF). The latter is a new property that sits in between the classes of shifts of finite...

  18. Entropy rate of continuous-state hidden Markov chains

    Han, G; Marcus, B

    2010-01-01

    We prove that under mild positivity assumptions, the entropy rate of a continuous-state hidden Markov chain, observed when passing a finite-state Markov chain through a discrete-time continuous-output channel, is analytic as a function of the transition probabilities of the underlying Markov chain. We further prove that the entropy rate of a continuous-state hidden Markov chain, observed when passing a mixing finite-type constrained Markov chain through a discrete-time Gaussian channel, is sm...

  19. Parallel algorithms for simulating continuous time Markov chains

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.

  20. MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL

    Eder Oliveira Abensur

    2014-05-01

    The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and Genetic Algorithm concepts, considering the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model for liquidity based on a Markov chain. The non-linear model was tested using Genetic Algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that this is an innovative development methodology, useful for developing an efficient and realistic financial portfolio, as it considers many attributes such as risk, return and liquidity.

  1. An interlacing theorem for reversible Markov chains

    Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)

  2. Handbook of Markov chain Monte Carlo

    Brooks, Steve

    2011-01-01

    ""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  3. Numerical methods in Markov chain modeling

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems are compared.

  4. Constrained Risk-Sensitive Markov Decision Chains

    Sladký, Karel

    Berlin: Springer, 2009 (Fleischmann, B.; Borgwardt, K.; Klein, R.; Tuma, A.), pp. 363-368. ISBN 978-3-642-00141-3. [Operations Research 2008. Augsburg (DE), 03.09.2008-05.09.2008] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113. Institutional research plan: CEZ:AV0Z10750506. Keywords: Markov decision chains; exponential utility functions; constraints. Subject RIV: BB - Applied Statistics, Operational Research

  5. Bayesian Posterior Distributions Without Markov Chains

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...

  6. Test automation for Markov Chain Usage Models

    Bettinotti, Adriana M.; Garavaglia, Mauricio

    2011-01-01

    Statistical testing with Markov Chain Usage Models is an effective method to be used by programmers and testers during web site development, to guarantee the software reliability. The JUMBL software works on these models; it supports model construction with the TML language and analysis, test generation and execution, and analysis of test results. This paper is targeted at test automation for web site development with JUMBL and JWebUnit.

  7. The Engel algorithm for absorbing Markov chains

    Snell, J. Laurie

    2009-01-01

    In this module, suitable for use in an introductory probability course, we present Engel's chip-moving algorithm for finding the basic descriptive quantities for an absorbing Markov chain, and prove that it works. The tricky part of the proof involves showing that the initial distribution of chips recurs. At the time of writing (circa 1979) no published proof of this was available, though Engel had stated that such a proof had been found by L. Scheller.
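
    Engel's chip-moving algorithm itself is combinatorial; as a reference point, here is the standard matrix route to the same basic descriptive quantities (expected visits, time to absorption, absorption probabilities) via the fundamental matrix N = (I - Q)^{-1}. This is not Engel's algorithm, just the quantities it finds, on a gambler's-ruin example.

```python
import numpy as np

def absorbing_chain_quantities(P, transient):
    """Fundamental matrix N = inv(I - Q) for an absorbing chain:
    N[i, j] = expected visits to transient state j starting from transient
    state i, and N @ R gives the absorption probabilities."""
    absorbing = [i for i in range(P.shape[0]) if i not in transient]
    Q = P[np.ix_(transient, transient)]   # transient-to-transient block
    R = P[np.ix_(transient, absorbing)]   # transient-to-absorbing block
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    return N, N @ R

# Gambler's ruin on {0, 1, 2, 3} with absorbing barriers 0 and 3.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
N, B = absorbing_chain_quantities(P, transient=[1, 2])
print(N.sum(axis=1))  # expected steps to absorption: [2., 2.]
print(B)              # absorption probabilities into states 0 and 3
```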

  8. Monotone measures of ergodicity for Markov chains

    J. Keilson

    1998-01-01

    The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted quantifies the relaxation time for all finite ergodic chains (cf. the discussion of $Q_1(t)$ below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.

  9. Adaptive Partially Hidden Markov Models

    Forchhammer, Søren Otto; Rasmussen, Tage

    1996-01-01

    Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding.

  10. On the embedding problem for discrete-time Markov chains

    Guerry, Marie-Anne

    2013-01-01

    When a discrete-time homogenous Markov chain is observed at time intervals that correspond to its time unit, then the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider a situation when a Markov chain is observed on time intervals with length equal to twice the time unit of the Markov chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices. ...

  11. A Bootstrap Algebraic Multilevel method for Markov Chains

    Bolten, M; Brannick, J; Frommer, A; Kahl, K; Livshits, I

    2010-01-01

    This work concerns the development of an Algebraic Multilevel method for computing stationary vectors of Markov chains. We present an efficient Bootstrap Algebraic Multilevel method for this task. In our proposed approach, we employ a multilevel eigensolver, with interpolation built using ideas based on compatible relaxation, algebraic distances, and least squares fitting of test vectors. Our adaptive variational strategy for computation of the state vector of a given Markov chain is then a combination of this multilevel eigensolver and associated multilevel preconditioned GMRES iterations. We show that the Bootstrap AMG eigensolver by itself can efficiently compute accurate approximations to the state vector. An additional benefit of the Bootstrap approach is that it yields an accurate interpolation operator for many other eigenmodes. This in turn allows for the use of the resulting AMG hierarchy to accelerate the MLE steps using standard multigrid correction steps. The proposed approach is applied to a rang...

  12. Growth and dissolution of macromolecular Markov chains

    Gaspard, Pierre

    2016-01-01

    The kinetics and thermodynamics of free living copolymerization are studied for processes with rates depending on k monomeric units of the macromolecular chain behind the unit that is attached or detached. In this case, the sequence of monomeric units in the growing copolymer is a kth-order Markov chain. In the regime of steady growth, the statistical properties of the sequence are determined analytically in terms of the attachment and detachment rates. In this way, the mean growth velocity as well as the thermodynamic entropy production and the sequence disorder can be calculated systematically. These different properties are also investigated in the regime of depolymerization where the macromolecular chain is dissolved by the surrounding solution. In this regime, the entropy production is shown to satisfy Landauer's principle.

  13. Combinatorial Markov chains on linear extensions

    Ayyer, Arvind; Schilling, Anne

    2012-01-01

    We consider generalizations of Schützenberger's promotion operator on the set L of linear extensions of a finite poset of size n. This gives rise to a strongly connected graph on L. By assigning weights to the edges of the graph in two different ways, we study two Markov chains, both of which are irreducible. The stationary state of one gives rise to the uniform distribution, whereas the weights of the stationary state of the other have a nice product formula. This generalizes results by Hendricks on the Tsetlin library, which corresponds to the case when the poset is the anti-chain and hence L=S_n is the full symmetric group. We also provide explicit eigenvalues of the transition matrix in general when the poset is a rooted forest. This is shown by proving that the associated monoid is R-trivial and then using Steinberg's extension of Brown's theory for Markov chains on left regular bands to R-trivial monoids.

  14. Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms

    Latuszynski, Krzysztof

    2009-07-01

    In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains and a regenerative proof of a CLT version for uniformly ergodic Markov chains with $E_{\pi}f^2 < \infty$. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators for a general state space setting (not necessarily compact) and not necessarily bounded target function f are obtained. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and a law of large numbers for adaptive procedures under a path-stability condition for transition kernels.

  15. Application of Markov Chains to Stock Trends

    Kevin J. Doubleday

    2011-01-01

    Problem statement: Modeling of the Dow Jones Industrial Average is frequently attempted in order to determine trading strategies with maximum payoff. Changes in the DJIA are important since movements may affect both individuals and corporations profoundly. Previous work showed that modeling a market as a random walk was valid and that a market may be viewed as having the Markov property. Approach: The aim of this research was to determine the relationship between a diverse portfolio of stocks and the market as a whole. To that end, the DJIA was analyzed using a discrete time stochastic model, namely a Markov Chain. Two models were highlighted, where the DJIA was considered as being in a state of (1) gain or loss and (2) small, moderate, or large gain or loss. A portfolio of five stocks was then considered, with two models of the portfolio much the same as those for the DJIA. These models were used to obtain transitional probabilities and steady state probabilities. Results: Our results indicated that the portfolio behaved similarly to the entire DJIA, both in the simple model and the partitioned model. Conclusion: When treated as a Markov process, the entire market was useful in gauging how a diverse portfolio of stocks might behave. Future work may include different classifications of states to refine the transition matrices.
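
    A minimal sketch of the paper's simple model (1): classify each day as gain or loss, estimate the 2x2 transition matrix by counting, and read off the steady-state probabilities. The return series below is synthetic, not DJIA data.

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=1000)   # synthetic daily returns
states = (returns > 0).astype(int)              # 0 = loss day, 1 = gain day

# Count-and-normalize MLE of the 2-state transition matrix.
counts = np.zeros((2, 2))
for a, b in zip(states, states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Steady-state probabilities from the leading left eigenvector of P.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print(P)
print("long-run P(loss), P(gain):", pi)
```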

  16. Hitting time and inverse problems for Markov chains

    de la Peña, Victor; Gzyl, Henryk; McDonald, Patrick

    2008-01-01

    Let $W_n$ be a simple Markov chain on the integers. Suppose that $X_n$ is a simple Markov chain on the integers whose transition probabilities coincide with those of $W_n$ off a finite set. We prove that there is an M > 0 such that the Markov chain $W_n$ and the joint distributions of the first hitting time and first hitting place of $X_n$ started at the origin for the sets {-M, M} and {-(M + 1), (M + 1)} algorithmically determine the transition probabilities of $X_n$.

  17. Analyticity of entropy rate of hidden Markov chains

    Han, G; Marcus, B

    2006-01-01

    We prove that under mild positivity assumptions the entropy rate of a hidden Markov chain varies analytically as a function of the underlying Markov chain parameters. A general principle to determine the domain of analyticity is stated. An example is given to estimate the radius of convergence for the entropy rate. We then show that the positivity assumptions can be relaxed, and examples are given for the relaxed conditions. We study a special class of hidden Markov chains in more detail: bin...

  18. Bounds on Lifting Continuous Markov Chains to Speed Up Mixing

    Ramanan, Kavita; Smith, Aaron

    2016-01-01

    It is often possible to speed up the mixing of a Markov chain $\{X_t\}_{t\in\mathbb{N}}$ on a state space $\Omega$ by lifting, that is, running a more efficient Markov chain $\{\hat{X}_t\}_{t\in\mathbb{N}}$ on a larger state space $\hat{\Omega} \supset \Omega$ that projects to $\{X_t\}_{t\in\mathbb{N}}$ in a certain sense. In [CLP99], Chen, Lovász and Pak prove that for Markov chains on finite state spaces, the mixing time of any lift of a Markov chain is at lea...

  19. Approximating Markov Chains: What and why

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics

  20. Asymptotic evolution of quantum Markov chains

    Iterated quantum operations, so-called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium, or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks, or they can describe different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate that there is always a vector subspace (typically low-dimensional) of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points or the asymptotic evolution of random quantum operations.

  1. CLTs and asymptotic variance of time-sampled Markov chains

    Latuszynski, Krzysztof

    2011-01-01

    For a Markov transition kernel $P$ and a probability distribution $ \\mu$ on nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_{\\mu} = \\sum_k \\mu(k)P^k.$ In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare efficiency of Barker's and Metropolis algorithms in terms of asymptotic variance.
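
    The defining formula $P_{\mu} = \sum_k \mu(k)P^k$ is easy to realize numerically for a finite-state kernel; the sketch below, with an arbitrary two-state $P$ and a hypothetical sampling distribution $\mu$, only illustrates the construction, not the note's CLT results.

```python
import numpy as np

def time_sampled_kernel(P, mu):
    """Compute P_mu = sum_k mu[k] * P^k for a finite-state kernel P,
    where mu is a probability vector over k = 0, 1, 2, ..."""
    P_mu = np.zeros_like(P)
    P_k = np.eye(P.shape[0])  # P^0
    for w in mu:
        P_mu += w * P_k
        P_k = P_k @ P
    return P_mu

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mu = np.array([0.0, 0.5, 0.3, 0.2])  # hypothetical sampling distribution
print(time_sampled_kernel(P, mu))
```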

  2. Bayesian analysis of variable-order, reversible Markov chains

    Bacallado, Sergio

    2011-01-01

    We define a conjugate prior for the reversible Markov chain of order $r$. The prior arises from a partially exchangeable reinforced random walk, in the same way that the Beta distribution arises from the exchangeable Pólya urn. An extension to variable-order Markov chains is also derived. We show the utility of this prior in testing the order and estimating the parameters of a reversible Markov model.

  3. Recursive smoothers for hidden discrete-time Markov chains

    Lakhdar Aggoun

    2005-01-01

    Full Text Available We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameter of the model via the expectation maximization (EM) algorithm.

  4. Series Expansions for Finite-State Markov Chains

    Heidergott, Bernd; Hordijk, Arie; van Uitert, Miranda

    2005-01-01

    This paper provides series expansions of the stationary distribution of a finite Markov chain. This leads to an efficient numerical algorithm for computing the stationary distribution of a finite Markov chain. Numerical examples are given to illustrate the performance of the algorithm.

  5. A Markov Chain Model for Contagion

    Angelos Dassios

    2014-11-01

    Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  6. Revisiting Weak Simulation for Substochastic Markov Chains

    Jansen, David N.; Song, Lei; Zhang, Lijun

    2013-01-01

    The spectrum of branching-time relations for probabilistic systems has been investigated thoroughly by Baier, Hermanns, Katoen and Wolf (2003, 2005), including weak simulation for systems involving substochastic distributions. Weak simulation was proven to be sound w.r.t. the liveness fragment of the logic PCTL\x, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness. In this paper, we present a novel definition that is sound for live PCTL\x, and a variant that is both sound and complete. A long version of this article containing full proofs is available from [11].

  7. NONLINEAR EXPECTATIONS AND NONLINEAR MARKOV CHAINS

    PENG SHIGE

    2005-01-01

    This paper deals with nonlinear expectations. The author obtains a nonlinear generalization of the well-known Kolmogorov consistency theorem and then uses it to construct filtration-consistent nonlinear expectations via nonlinear Markov chains. Compared to the author's previous results, i.e., the theory of g-expectations introduced via BSDE on a probability space, the present framework is not based on a given probability measure. Many fully nonlinear and singular situations are covered. The induced topology is a natural generalization of Lp-norms and the L∞-norm in linear situations. The author also obtains the existence and uniqueness result of BSDE under this new framework, develops a nonlinear type of von Neumann-Morgenstern representation theorem for utilities, and presents dynamic risk measures.

  8. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications Their Use in Reliability and DNA Analysis

    Barbu, Vlad

    2008-01-01

    Semi-Markov processes are much more general and better adapted to applications than Markov ones, because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn times in the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes

  9. The Use of Markov Chains in Marketing Forecasting

    Codruţa Dura

    2006-01-01

    The Markov chains model is frequently used to describe consumers’ behavior in relation to their loyalty towards a brand, a manufacturer, a product, or a chain of stores, etc. Most frequently, this model is applied in marketing for dynamic forecasts of the market quota against a background of intense rivalry between brands. In a Markov chain, the result of a trial depends on the result of the trial that directly precedes it. If we associate the conditional probability pjk (which means t...

  10. Modeling Uncertainty of Directed Movement via Markov Chains

    YIN Zhangcai

    2015-10-01

    Full Text Available Probabilistic time geography (PTG) is suggested as an extension of (classical) time geography, in order to represent, by probability, the uncertainty of an agent being located at an accessible position. This may provide a quantitative basis for finding the most likely location of an agent. In recent years, PTG based on the normal distribution or the Brownian bridge has been proposed; its variance, however, is either unrelated to the agent's speed or diverges as the speed increases, so such models struggle to balance application pertinence and stability. In this paper, a new method is proposed to model PTG based on Markov chains. Firstly, a bidirectionally conditioned Markov chain is modeled, whose limit, when the moving speed is large enough, can be regarded as the Brownian bridge and thus has the property of numerical stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, the state space, and the transition matrix of the Markov chain according to the space-time position of the directed movement and the movement speed information, so that the Markov chain is related to the movement speed. Finally, by continuously calculating the probability distribution of the directed movement at any time with the Markov chains, one obtains the probability that an agent is located at an accessible position. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.

  11. Remarks on a monotone Markov chain

    P. Todorovic

    1987-01-01

    Full Text Available In applications, considerations on stochastic models often involve a Markov chain {ζn}0∞ with state space in R+, and a transition probability Q. For each x ∈ R+ the support of Q(x,·) is [0,x]. This implies that ζ0 ≥ ζ1 ≥ …. Under certain regularity assumptions on Q we show that Qn(x,Bu) → 1 as n → ∞ for all u > 0, and that 1 − Qn(x,Bu) ≤ [1 − Q(x,Bu)]n, where Bu = [0,u). Set τ0 = max{k; ζk = ζ0}, τn = max{k; ζk = ζτn−1+1} and write Xn = ζτn−1+1, Tn = τn − τn−1. We investigate some properties of the imbedded Markov chain {Xn}0∞ and of {Tn}0∞. We determine all the marginal distributions of {Tn}0∞ and show that it is asymptotically stationary and that it possesses a monotonicity property. We also prove that under some mild regularity assumptions on β(x) = 1 − Q(x,Bx), ∑1n(Ti − a)/bn →d Z ∼ N(0,1).

  12. Determining a Class of Markov Chains by Hitting Time

    2001-01-01

    1 Introduction In many practical problems we often cannot observe the behavior of all states for a Markov chain (see [3-5]). A natural question is whether, from the observable data of a part of the states, one can still obtain all statistical characteristics of the Markov chain. In this paper we give a positive answer to this question and prove the surprising result that the transition rate matrix of birth-death chains with reflecting barriers and of Markov chains on a star graph can be uniquely determined by the probability density functions (pdfs) of the sojourn times and the hitting times at a single special state. This result also suggests a new special type of statistics for Markov chains.

  13. Logics and Models for Stochastic Analysis Beyond Markov Chains

    Zeng, Kebin

    , because of the generality of ME distributions, we have to leave the world of Markov chains. To support ME distributions with multiple exits, we introduce a multi-exits ME distribution together with a process algebra MEME to express the systems having the semantics as Markov renewal processes with ME...

  14. The Laplace Functional and Moments for Markov Branching Chains in Random Environments

    HU Di-he; ZHANG Shu-lin

    2005-01-01

    The concepts of random Markov matrix, Markov branching chain in random environment (MBCRE) and Laplace functional of Markov branching chain in random environment (LFMBCRE) are introduced. The properties of LFMBCRE and the explicit formulas of moments of MBCRE are given.

  15. On the Markov Chain Monte Carlo (MCMC) method

    Rajeeva L Karandikar

    2006-04-01

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
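
    For concreteness, a minimal random-walk Metropolis sampler in the spirit of such introductions (the article's own examples may differ):

```python
import numpy as np

def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: the generated Markov chain has stationary
    distribution proportional to exp(log_target)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.normal()            # propose a move
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:  # accept with prob min(1, ratio)
            x, lp = y, lp_y
        chain[i] = x
    return chain

# Sample a standard normal specified only up to its normalizing constant
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
```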

  16. Markov Chains as Tools for Jazz Improvisation Analysis

    Franz, David Matthew

    1998-01-01

    This thesis describes an exploratory application of a statistical analysis and modeling technique (Markov chains) for the modeling of jazz improvisation with the intended subobjective of providing increased insight into an improviser's style and creativity through the postulation of quantitative measures of style and creativity based on the constructed Markovian analysis techniques. Using Visual Basic programming language, Markov chains of orders one to three are created using transcriptio...

  17. P Systems Computing the Period of Irreducible Markov Chains

    Cardona Roca, Mónica; Colomer Cugat, M. Angeles; Riscos Núñez, Agustín; Rius Font, Miquel

    2009-01-01

    It is well known that any irreducible and aperiodic Markov chain has exactly one stationary distribution, and for any arbitrary initial distribution, the sequence of distributions at time n converges to the stationary distribution, that is, the Markov chain is approaching equilibrium as n→∞. In this paper, a characterization of the aperiodicity in existential terms of some state is given. At the same time, a P system with external output is associated with any irreducible Ma...

  18. On almost-periodic points of a topological Markov chain

    We prove that a transitive topological Markov chain has almost-periodic points of all D-periods. Moreover, every D-period is realized by continuously many distinct minimal sets. We give a simple constructive proof of the result which asserts that any transitive topological Markov chain has periodic points of almost all periods, and study the structure of the finite set of positive integers that are not periods.

  19. CONVERGENCE OF MARKOV CHAIN APPROXIMATIONS TO STOCHASTIC REACTION DIFFUSION EQUATIONS

    Kouritzin, Michael A.; Hongwei Long

    2001-01-01

    In the context of simulating the transport of a chemical or bacterial contaminant through a moving sheet of water, we extend a well-established method of approximating reaction-diffusion equations with Markov chains by allowing convection, certain Poisson measure driving sources and a larger class of reaction functions. Our alterations also feature dramatically slower Markov chain state change rates often yielding a ten to one-hundred-fold simulation speed increase over the previous version o...

  1. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)

    2008-07-18

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  2. Switching Markov chains for a holistic modeling of SIS unavailability

    This paper proposes a holistic approach to model Safety Instrumented Systems (SIS). The model is based on Switching Markov Chains and integrates several parameters like Common Cause Failure, imperfect proof testing, partial proof testing, etc. The basic concepts of Switching Markov Chains applied to reliability analysis are introduced and a model to compute the unavailability for a case study is presented. The proposed Switching Markov Chain allows us to assess the effect of each parameter on the SIS performance. The proposed method ensures the relevance of the results. - Highlights: • A holistic approach to model the unavailability of safety systems using Switching Markov chains. • The model integrates several parameters like the probability of failure due to the test, the probability of not detecting a failure in a test. • The basic concepts of the Switching Markov Chains are introduced and applied to compute the unavailability for safety systems. • The proposed Switching Markov Chain allows assessing the effect of each parameter on the chemical reactor performance

  3. ON MARKOV CHAINS IN SPACE-TIME RANDOM ENVIRONMENTS

    Hu Dihe; Hu Xiaoyu

    2009-01-01

    In Section 1, the authors establish the models of two kinds of Markov chains in space-time random environments (MCSTRE and MCSTRE(+)) with abstract state space. In Section 2, the authors construct a MCSTRE and a MCSTRE(+) by an initial distribution Φ and a random Markov kernel (RMK) p(γ). In Section 3, the authors establish several equivalence theorems on MCSTRE and MCSTRE(+). Finally, the authors give two very important examples of MCSTRE: the random walk in space-time random environment and the Markov branching chain in space-time random environment.

  4. Comprehensive cosmographic analysis by Markov chain method

    We study the possibility of extracting model-independent information about the dynamics of the Universe by using cosmography. We intend to explore it systematically, to learn about its limitations and its real possibilities. Here we are sticking to the series expansion approach on which cosmography is based. We apply it to different data sets: Supernovae type Ia (SNeIa), Hubble parameter extracted from differential galaxy ages, gamma ray bursts, and the baryon acoustic oscillations data. We go beyond past results in the literature by extending the series expansion up to the fourth order in the scale factor, which implies the analysis of the deceleration q0, the jerk j0, and the snap s0. We use the Markov chain Monte Carlo method (MCMC) to analyze the data statistically. We also try to relate direct results from cosmography to dark energy (DE) dynamical models parametrized by the Chevallier-Polarski-Linder model, extracting clues about the matter content and the dark energy parameters. The main results are: (a) even if relying on an approximate mathematical assumption such as the scale factor series expansion in terms of time, cosmography can be extremely useful in assessing dynamical properties of the Universe; (b) the deceleration parameter clearly confirms the present acceleration phase; (c) the MCMC method can help giving narrower constraints in parameter estimation, in particular for higher order cosmographic parameters (the jerk and the snap), with respect to the literature; and (d) both the estimation of the jerk and the DE parameters reflect the possibility of a deviation from the ΛCDM cosmological model.

  5. Unsupervised Segmentation of Hidden Semi-Markov Non Stationary Chains

    Lapuyade-Lahorgue, Jérôme; Pieczynski, Wojciech

    2006-11-01

    In the classical hidden Markov chain (HMC) model we have a hidden chain X, which is a Markov one, and an observed chain Y. HMC are widely used; however, in some situations they have to be replaced by the more general "hidden semi-Markov chains" (HSMC), which are particular "triplet Markov chains" (TMC) T = (X, U, Y), where the auxiliary chain U models the semi-Markovianity of X. Moreover, non-stationary classical HMC can also be modeled by a stationary triplet Markov chain with, as a consequence, the possibility of parameter estimation. The aim of this paper is to use both properties simultaneously. We consider a non-stationary HSMC and model it as a TMC T = (X, U1, U2, Y), where U1 models the semi-Markovianity and U2 models the non-stationarity. The TMC T being itself stationary, all parameters can be estimated by the general "Iterative Conditional Estimation" (ICE) method, which leads to unsupervised segmentation. We present some experiments showing the interest of the new model and related processing in the image segmentation area.

  6. Automated generation of partial Markov chain from high level descriptions

    We propose an algorithm to generate partial Markov chains from high level implicit descriptions, namely AltaRica models. This algorithm relies on two components. First, a variation on Dijkstra's algorithm to compute shortest paths in a graph. Second, the definition of a notion of distance to select which states must be kept and which can be safely discarded. The proposed method solves two problems at once. First, it avoids a manual construction of Markov chains, which is both tedious and error prone. Second, at the price of acceptable approximations, it makes it possible to push back dramatically the exponential blow-up of the size of the resulting chains. We report experimental results that show the efficiency of the proposed approach. - Highlights: • We generate Markov chains from a higher level safety modeling language (AltaRica). • We use a variation on Dijkstra's algorithm to generate partial Markov chains. • Hence we solve two problems: the first problem is the tedious manual construction of Markov chains. • The second problem is the blow-up of the size of the chains, at the cost of decent approximations. • The experimental results highlight the efficiency of the method

  7. RS-markov Chain Model of Logistics Service Supply Chain based on Exploration Diagram

    Shi Li; Shi Yu-Zhen

    2013-01-01

    In order to achieve the forecast evaluation of the logistics service supply chain, a tool of systems thinking in complex scientific management, the exploration diagram, is used to establish the index system for the forecast evaluation of the logistics service supply chain. According to the significant Markov chain property in the operation of the logistics service supply chain, the predictability of the Markov chain is used to put forward a dynamic evaluation model, example ...

  8. Infinitely dimensional control Markov branching chains in random environments

    HU; Dihe

    2006-01-01

    First of all we introduce the concepts of infinitely dimensional control Markov branching chains in random environments (β-MBCRE) and prove the existence of such chains, then we introduce the concepts of conditional generating functionals and random Markov transition functions of such chains and investigate their branching property. Based on these concepts we calculate the moments of the β-MBCRE and obtain the main results of this paper such as extinction probabilities, polarization and proliferation rate. Finally we discuss the classification of β-MBCRE according to the different standards.

  9. Invariance principle for additive functionals of Markov chains

    Kartashov, Yuri N.; Kulik, Alexey M.

    2007-01-01

    We consider a sequence of additive functionals {\phi_n}, defined on a sequence of Markov chains {X_n} that weakly converges to a Markov process X. We give sufficient conditions for such a sequence to converge in distribution, formulated in terms of the characteristics of the additive functionals, and related to Dynkin's theorem on the convergence of W-functionals. As an application of the main theorem, the general sufficient condition for convergence of additive functionals in terms of transit...

  10. The Dynamics of Repeat Migration: A Markov Chain Analysis

    Zimmermann, Klaus F.; Amelie F. Constant

    2003-01-01

    While the literature has established that there is substantial and highly selective return migration, the growing importance of repeat migration has been largely ignored. Using Markov chain analysis, this paper provides a modeling framework for repeated moves of migrants between the host and home countries. The Markov transition matrix between the states in two consecutive periods is parameterized and estimated using a logit specification and a large panel data set with 14 waves. The analysis for...

  11. Mean variance optimality in Markov decision chains

    Sladký, Karel; Sitař, Milan

    Hradec Králové : Gadeamus, 2005 - (Skalská, H.), s. 350-357 ISBN 978-80-7041-535-1. [Mathematical Methods in Economics 2005 /23./. Hradec Králové (CZ), 14.09.2005-16.09.2005] R&D Projects: GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov reward processes * expectation and variance of cumulative rewards Subject RIV: BB - Applied Statistics, Operational Research

  12. Markov Chains For Testing Redundant Software

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in the sense that it takes more than one failure of the control program to cause the controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  13. On a Markov chain roulette-type game

    A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p01 = ρ, pNj = δNj, pi,i+W = q, pi,i-1 = p = 1 - q, 1 ≤ W < N, 0 ≤ ρ ≤ 1, N - W < j ≤ N and i = 1, 2, ..., N - W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining
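
    One plausible reading of these transition probabilities (treating the states above N − W, where pNj = δNj, as absorbing, and letting state 0 hold with probability 1 − ρ) can be assembled into a transition matrix as follows; the paper's closed-form determinant solution is not reproduced here.

```python
import numpy as np

def roulette_chain(N=20, W=3, q=0.4, rho=0.5):
    """Transition matrix on states 0..N: from 0 move to 1 with prob rho;
    from i = 1..N-W move up W with prob q or down 1 with prob p = 1 - q;
    states above N - W are treated as absorbing."""
    p = 1.0 - q
    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = 1.0 - rho, rho
    for i in range(1, N - W + 1):
        P[i, i + W] = q      # win: move up W
        P[i, i - 1] = p      # lose: move down 1
    for j in range(N - W + 1, N + 1):
        P[j, j] = 1.0        # absorbing
    return P

N, W = 20, 3
P = roulette_chain(N, W)
t = N - W + 1                                # transient states 0..N-W
Q = P[:t, :t]
fundamental = np.linalg.inv(np.eye(t) - Q)   # expected visits before absorption
```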

  14. Target Density Normalization for Markov Chain Monte Carlo Algorithms

    Caldwell, Allen

    2014-01-01

    Techniques for evaluating the normalization integral of the target density for Markov Chain Monte Carlo algorithms are described and tested numerically. It is assumed that the Markov Chain algorithm has converged to the target distribution and produced a set of samples from the density. These are used to evaluate sample mean, harmonic mean and Laplace algorithms for the calculation of the integral of the target density. A clear preference for the sample mean algorithm applied to a reduced support region is found, and guidelines are given for implementation.
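
    A sketch of the sample-mean idea on a reduced support region, under the stated assumption that the samples come from the normalized target: the integral of the unnormalized density f over a region A can be estimated directly, and dividing by the fraction of samples falling in A yields the normalization Z, since ∫_A f = Z·P(A). The Gaussian target below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

f = lambda x: np.exp(-0.5 * x * x)   # unnormalized target; true Z = sqrt(2*pi)

# Pretend these came from a converged MCMC run targeting f/Z
samples = rng.normal(0.0, 1.0, size=200_000)

a, b = -1.0, 1.0                     # reduced support region A = [a, b]
frac_in_A = np.mean((samples >= a) & (samples <= b))  # estimates P(A)

# Direct Monte Carlo estimate of the integral of f over A
u = rng.uniform(a, b, size=200_000)
int_f_over_A = (b - a) * f(u).mean()

Z_hat = int_f_over_A / frac_in_A     # since int_A f = Z * P(A)
print(Z_hat, np.sqrt(2 * np.pi))
```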

  15. Limit Theorems for the Sample Entropy of Hidden Markov Chains

    Han, Guangyue

    2011-01-01

    The Shannon-McMillan-Breiman theorem asserts that the sample entropy of a stationary and ergodic stochastic process converges to the entropy rate of the same process almost surely. In this paper, we focus our attention on the convergence behavior of the sample entropy of a hidden Markov chain. Under a certain positivity assumption, we prove a central limit theorem (CLT) with a Berry-Esseen bound for the sample entropy of a hidden Markov chain, and we use this CLT to establish a law of the iterated logarithm (LIL) for the sample entropy.

  16. Dynamic modeling of presence of occupants using inhomogeneous Markov chains

    Andersen, Philip Hvidthøft Delff; Iversen, Anne; Madsen, Henrik;

    2014-01-01

    The method is based on inhomogeneous Markov chains where the transition probabilities are estimated using generalized linear models with polynomials, B-splines, and a filter of past observations as inputs. The model captures dependence on the time of day, and by use of a filter of the observations it is able to capture per-employee sequence dynamics. Simulations using this method are compared with simulations using homogeneous Markov chains and show far better ability to reproduce key properties of the data. For treating the dispersion of the data series, a hierarchical model structure is used where one model is for low presence...

  17. Harmonic Oscillator Model for Radin's Markov-Chain Experiments

    The conscious observer stands as a central figure in the measurement problem of quantum mechanics. Recent experiments by Radin involving linear Markov chains driven by random number generators illuminate the role and temporal dynamics of observers interacting with quantum mechanically labile systems. In this paper a Lagrangian interpretation of these experiments indicates that the evolution of Markov chain probabilities can be modeled as damped harmonic oscillators. The results are best interpreted in terms of symmetric equicausal determinism rather than strict retrocausation, as posited by Radin. Based on the present analysis, suggestions are made for more advanced experiments

  18. On a Markov chain roulette-type game

    El-Shehawey, M A; El-Shreef, Gh A [Department of Mathematics, Damietta Faculty of Science, PO Box 6, New Damietta (Egypt)

    2009-05-15

    A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p01 = ρ, pNj = δNj, pi,i+W = q, pi,i−1 = p = 1 − q, 1 ≤ W < N, 0 ≤ ρ ≤ 1, N − W < j ≤ N and i = 1, 2, ..., N − W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining.

  19. Influence of credit scoring on the dynamics of Markov chain

    Galina, Timofeeva

    2015-11-01

    Markov processes are widely used to model the dynamics of a credit portfolio and forecast the portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares are described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control affecting the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
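
    A toy version of the share dynamics described here (the quality groups and migration rates are hypothetical, not taken from the article):

```python
import numpy as np

# Hypothetical quality groups: current, 30+ days overdue, 90+ days, default
P = np.array([
    [0.92, 0.06, 0.01, 0.01],
    [0.40, 0.40, 0.15, 0.05],
    [0.10, 0.20, 0.40, 0.30],
    [0.00, 0.00, 0.00, 1.00],  # default treated as absorbing
])

shares = np.array([1.0, 0.0, 0.0, 0.0])  # initial portfolio composition
for month in range(12):
    shares = shares @ P                   # one migration step per period
print(shares)                             # group shares after one year
```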

  20. Students' Progress throughout Examination Process as a Markov Chain

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical Methods in Economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…

  1. Ergodic degrees for continuous-time Markov chains

    MAO; Yonghua

    2004-01-01

    This paper studies the existence of higher-order deviation matrices for continuous-time Markov chains via the moments of the hitting times. An estimate of the polynomial convergence rates for the transition matrix to the stationary measure is obtained. Finally, the explicit formulas for birth-death processes are presented.

  2. On a Markov chain roulette-type game

    El-Shehawey, M. A.; El-Shreef, Gh A.

    2009-05-01

    A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p01 = ρ, pNj = δNj, pi,i+W = q, pi,i−1 = p = 1 − q, 1 ≤ W < N, 0 ≤ ρ ≤ 1, N − W < j ≤ N and i = 1, 2, ..., N − W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining.

  3. Markov chains with quasitoeplitz transition matrix: first zero hitting

    Alexander M. Dukhovny

    1989-01-01

    Full Text Available This paper continues the investigation of Markov Chains with a quasitoeplitz transition matrix. Generating functions of first zero hitting probabilities and mean times are found by the solution of special Riemann boundary value problems on the unit circle. Duality is discussed.

  4. Operations and support cost modeling using Markov chains

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to the national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support phase (OS). Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined or at least strongly influenced by decisions made during the design and development phases of the project. As a result OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov Chain Process. Markov chains are an important method of probabilistic analysis for operations research analysts but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov Chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov Chain process as a design-aid tool.

  5. Converging from Branching to Linear Metrics on Markov Chains

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand; Mardare, Radu Iulian

    We study the strong and stutter trace distances on Markov chains (MCs). Our interest in these metrics is motivated by their relation to the probabilistic LTL-model checking problem: we prove that they correspond to the maximal differences in the probability of satisfying the same LTL and LTL...

  6. Adiabatic condition and the quantum hitting time of Markov chains

    We present an adiabatic quantum algorithm for the abstract problem of searching marked vertices in a graph, or spatial search. Given a random walk (or Markov chain) P on a graph with a set of unknown marked vertices, one can define a related absorbing walk P' where outgoing transitions from marked vertices are replaced by self-loops. We build a Hamiltonian H(s) from the interpolated Markov chain P(s)=(1-s)P+sP' and use it in an adiabatic quantum algorithm to drive an initial superposition over all vertices to a superposition over marked vertices. The adiabatic condition implies that, for any reversible Markov chain and any set of marked vertices, the running time of the adiabatic algorithm is given by the square root of the classical hitting time. This algorithm therefore demonstrates a novel connection between the adiabatic condition and the classical notion of hitting time of a random walk. It also significantly extends the scope of previous quantum algorithms for this problem, which could only obtain a full quadratic speedup for state-transitive reversible Markov chains with a unique marked vertex.

  7. Exploring Mass Perception with Markov Chain Monte Carlo

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  8. Using Markov Chain Analyses in Counselor Education Research

    Duys, David K.; Headrick, Todd C.

    2004-01-01

    This study examined the efficacy of an infrequently used statistical analysis in counselor education research. A Markov chain analysis was used to examine hypothesized differences between students' use of counseling skills in an introductory course. Thirty graduate students participated in the study. Independent raters identified the microskills…

  9. Building Higher-Order Markov Chain Models with EXCEL

    Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.

    2004-01-01

    Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…

  10. Power plant reliability calculation with Markov chain models

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.)
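
    A minimal sketch of such a continuous-time model, with an invented three-state generator matrix rather than the plant data used in the paper: the stationary distribution solves πQ = 0 with Σπᵢ = 1, and availability is the stationary probability of the producing states.

```python
import numpy as np

# Hypothetical states: 0 = full power, 1 = derated, 2 = outage.
# Off-diagonal entries of Q are transition rates; each row sums to zero.
Q = np.array([
    [-0.0012,  0.0010,  0.0002],
    [ 0.0200, -0.0250,  0.0050],
    [ 0.0100,  0.0000, -0.0100],
])

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # probability of being in a producing state
print(pi, availability)
```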

  11. On the Total Variation Distance of Semi-Markov Chains

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand; Mardare, Radu Iulian

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...

  12. A Parallel Solver for Large-Scale Markov Chains

    Benzi, M.; Tůma, Miroslav

    2002-01-01

    Roč. 41, - (2002), s. 135-153. ISSN 0168-9274 R&D Projects: GA AV ČR IAA2030801; GA ČR GA101/00/1035 Keywords : parallel preconditioning * iterative methods * discrete Markov chains * generalized inverses * singular matrices * graph partitioning * AINV * Bi-CGSTAB Subject RIV: BA - General Mathematics Impact factor: 0.504, year: 2002

  13. Bayesian internal dosimetry calculations using Markov Chain Monte Carlo

    A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time. (author)

  14. Algebraic convergence for discrete-time ergodic Markov chains

    MAO; Yonghua(毛永华)

    2003-01-01

    This paper studies the e-ergodicity for discrete-time recurrent Markov chains. It proves that the e-order deviation matrix exists and is finite if and only if the chain is (e + 2)-ergodic, and then the algebraic decay rates of the n-step transition probability to the stationary distribution are obtained. The criteria for e-ergodicity are given in terms of the existence of a solution to an equation. The main results are illustrated by some examples.

  15. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843. ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  16. Markov chain analysis of single spin flip Ising simulations

    The Markov processes defined by random and loop-based schemes for single spin flip attempts in Monte Carlo simulations of the 2D Ising model are investigated, by explicitly constructing their transition matrices. Their analysis reveals that loops over all lattice sites using a Metropolis-type single spin flip probability often do not define ergodic Markov chains, and have distorted dynamical properties even if they are ergodic. The transition matrices also enable a comparison of the dynamics of random versus loop spin selection and Glauber versus Metropolis probabilities

  17. Markov Chain for Reuse Strategies of Product Families

    LUO Jia; JIANG Lan

    2007-01-01

    A methodology is presented to plan reuse strategies of common modules in a product family by using the concepts of function degradation, reliability, function requirement, cost and life time. A Markov chain model is employed to predict function degradation and reliability. A utility model is used to evaluate the preference between used modules and new modules. An example of a cascading-requirement product family illustrates the main ideas of our work. The Markov models are used effectively to predict function degradation and reliability. Utility theory is helpful to evaluate the reuse options of common modules.

  18. Markov chain aggregation for agent-based models

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...
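
    To illustrate the lumping step in miniature (a generic aggregation sketch, not the book's formal construction): given a micro-level transition matrix and a partition of its states, the macro-level matrix collects the transition mass into each destination block. The 4-state example below happens to be strongly lumpable, so the aggregation is exact.

```python
import numpy as np

def aggregate(P, partition):
    """Aggregate a micro-level chain P over a partition of its states.
    If P is strongly lumpable w.r.t. the partition the macro chain is
    exact; otherwise this is an approximation with uniform weighting."""
    k = len(partition)
    P_macro = np.zeros((k, k))
    for a, block_a in enumerate(partition):
        for b, block_b in enumerate(partition):
            # per-source-state mass into block_b, averaged over block_a
            P_macro[a, b] = P[np.ix_(block_a, block_b)].sum(axis=1).mean()
    return P_macro

P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.2, 0.5, 0.1, 0.2],
    [0.3, 0.1, 0.4, 0.2],
    [0.1, 0.3, 0.2, 0.4],
])
print(aggregate(P, [[0, 1], [2, 3]]))   # exact here: the chain is lumpable
```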

  19. Regularity of harmonic functions for some Markov chains with unbounded range

    Xu, Fangjun

    2012-01-01

    We consider a class of continuous time Markov chains on $\Z^d$. These chains are the discrete space analogue of Markov processes with jumps. Under some conditions, we show that harmonic functions associated with these Markov chains are Hölder continuous.

  20. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Mon Weather Rev 133:1155-1174, 2005), Raftery et al. recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.

  1. Rapid mixing and Markov bases

    Windisch, Tobias

    2015-01-01

    The mixing behaviour of Markov chains on lattice points of polytopes using Markov bases is examined. It is shown that, in fixed dimension, these Markov chains do not mix rapidly. As a way out, a method for adapting Markov bases in order to achieve the fastest mixing behaviour is introduced.

  2. Markov Chain Order estimation with Conditional Mutual Information

    Papapetrou, Maria; 10.1016/j.physa.2012.12.017.

    2013-01-01

    We introduce the Conditional Mutual Information (CMI) for the estimation of the Markov chain order. For a Markov chain of $K$ symbols, we define CMI of order $m$, $I_c(m)$, as the mutual information of two variables in the chain being $m$ time steps apart, conditioning on the intermediate variables of the chain. We find approximate analytic significance limits based on the estimation bias of CMI and develop a randomization significance test of $I_c(m)$, where the randomized symbol sequences are formed by random permutation of the components of the original symbol sequence. The significance test is applied for increasing $m$ and the Markov chain order is estimated by the last order for which the null hypothesis is rejected. We present the appropriateness of CMI-testing on Monte Carlo simulations and compare it to the Akaike and Bayesian information criteria, the maximal fluctuation method (Peres-Shields estimator) and a likelihood ratio test for increasing orders using $\\phi$-divergence. The order criterion of...
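
    A compact plug-in version of the CMI estimator and a simplified randomization test (the shuffle here destroys all temporal structure, a cruder null than the component permutation described in the abstract):

```python
import numpy as np
from collections import Counter

def entropy(blocks):
    """Plug-in entropy of a list of tuples."""
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def cmi(seq, m):
    """I(x_t ; x_{t-m} | x_{t-m+1}, ..., x_{t-1}) via
    H(A,C) + H(B,C) - H(C) - H(A,B,C)."""
    full = [tuple(seq[i - m:i + 1]) for i in range(m, len(seq))]
    C  = [w[1:-1] for w in full]   # intermediate variables
    AC = [w[1:]   for w in full]   # current symbol + intermediates
    BC = [w[:-1]  for w in full]   # symbol m steps back + intermediates
    return entropy(AC) + entropy(BC) - entropy(C) - entropy(full)

def order_test(seq, m, n_perm=200, seed=0):
    """Reject 'no order-m dependence' when the observed CMI is large
    compared with shuffled sequences."""
    rng = np.random.default_rng(seed)
    obs = cmi(seq, m)
    null = [cmi(list(rng.permutation(seq)), m) for _ in range(n_perm)]
    return obs, float(np.mean([x >= obs for x in null]))  # p-value estimate

seq = list(np.random.default_rng(1).integers(0, 2, size=5000))  # i.i.d.: order 0
print(order_test(seq, m=1))
```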

  3. Optimization of Markov chains for a SUSY fitter: Fittino

    A Markov chain is a ''random walk'' algorithm which allows an efficient scan of a given profile and the search for the absolute minimum, even when this profile suffers from the presence of many secondary minima. This property makes Markov chains particularly suited to the study of Supersymmetry (SUSY) models, where minima have to be found in an up-to-18-dimensional space for the general MSSM. Hence the SUSY fitter ''Fittino'' uses a Metropolis-Hastings Markov chain in a frequentist interpretation to study the impact of current low-energy measurements, as well as expected measurements from LHC and ILC, on the SUSY parameter space. The expected properties of an optimal Markov chain are the independence of the final results with respect to the starting point and a fast convergence. These two points can be achieved by optimizing the width of the proposal distribution, that is, the ''average step length'' between two links in the chain. We developed an algorithm for the optimization of the proposal width, by iteratively modifying the width so that the rejection rate is around fifty percent. This optimization leads to a starting-point-independent chain as well as faster convergence.
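
    A sketch of this kind of proposal-width adaptation (the multiplicative update rule below is an illustrative choice, not necessarily Fittino's):

```python
import numpy as np

def tune_proposal_width(log_target, x0, target_accept=0.5,
                        batch=500, n_batches=40, seed=0):
    """Iteratively rescale the random-walk proposal width so the
    acceptance rate approaches target_accept (about 50% here)."""
    rng = np.random.default_rng(seed)
    x, lp, width = x0, log_target(x0), 1.0
    for _ in range(n_batches):
        accepted = 0
        for _ in range(batch):
            y = x + width * rng.normal()
            lp_y = log_target(y)
            if np.log(rng.uniform()) < lp_y - lp:
                x, lp = y, lp_y
                accepted += 1
        rate = accepted / batch
        width *= np.exp(rate - target_accept)  # widen if accepting too often
    return width

# For a broad Gaussian profile the tuned width grows accordingly
width = tune_proposal_width(lambda x: -0.5 * (x / 3.0) ** 2, x0=0.0)
```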

  4. Markov chain modelling of pitting corrosion in underground pipelines

    Caleyo, F. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico); Valor, A. [Facultad de Física, Universidad de La Habana, San Lazaro y L, Vedado, 10400 La Habana (Cuba); Hallen, J.M. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)

    2009-09-15

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.

  6. An Approach of Diagnosis Based On The Hidden Markov Chains Model

    Karim Bouamrane

    2008-07-01

    Full Text Available Diagnosis is a key element in industrial system maintenance process performance. A diagnosis tool is proposed allowing maintenance operators to capitalize on the knowledge of their trade and subdivide it for better performance improvement and intervention effectiveness within the maintenance process service. The tool is based on the Markov chain model, and more precisely on Hidden Markov Chains (HMC), which have the advantage of determining system failures while taking into account the causal relations, modeling the stochastic context of their dynamics, and providing relevant diagnosis help through their ability to use uncertain information. Since the FMEA method is well adapted to the artificial intelligence field, the modeling with Markov chains is carried out with its assistance. Recently, a dynamic programming recursive algorithm, called the 'Viterbi algorithm', has been used in the Hidden Markov Chains field. This algorithm takes as input to the HMC a set of observed system effects and generates as output the various causes having caused the loss of one or several system functions.

  7. Algebraic decay in self-similar Markov chains

    A continuous-time Markov chain is used to model motion in the neighborhood of a critical invariant circle for a Hamiltonian map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. For the case of a noble frequency, the chain is self-similar and the nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to the critical noble circle, the decay proceeds as t^{-4.05}

  8. An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes

    Kapland, David

    2008-01-01

    This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…

  9. Comparison and converse comparison theorems for backward stochastic differential equations with Markov chain noise

    Yang, Zhe; Ramarimbahoaka, Dimbinirina; Robert J. Elliott

    2016-01-01

    Comparison and converse comparison theorems are important parts of the research on backward stochastic differential equations. In this paper, we obtain comparison results for one dimensional backward stochastic differential equations with Markov chain noise, adapting previous results under simplified hypotheses. We introduce a type of nonlinear expectation, the $f$-expectation, which is an interpretation of the solution to a BSDE, and use it to establish a converse comparison theorem for the ...

  10. Recursive estimation of high-order Markov chains: Approximation by finite mixtures

    Kárný, Miroslav

    2016-01-01

    Roč. 326, č. 1 (2016), s. 188-201. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Markov chain * Approximate parameter estimation * Bayesian recursive estimation * Adaptive systems * Kullback–Leibler divergence * Forgetting Subject RIV: BC - Control Systems Theory Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2015/AS/karny-0447119.pdf

  11. Dynamics of market indices, Markov chains, and random walking problem

    Krivoruchenko, M I

    2001-01-01

    The dynamics of the major USA market indices DJIA, S&P, Nasdaq, and NYSE are analyzed from the point of view of the random walk problem with two-step correlations of the market moves. The parameters characterizing the stochastic dynamics are determined empirically from the historical quotes for the daily, weekly, and monthly series. The results show the existence of statistically significant correlations between the subsequent market moves. The weekly and monthly parameters are calculated in terms of the daily parameters, assuming that the Markov chains with two-step correlations give a complete description of the market stochastic dynamics. We show that the macro- and micro-parameters obey the renormalization group equation. The comparison of the parameters determined from the renormalization group equation with the historical values shows that the Markov chains approach gives reasonable predictions for the weekly quotes and underestimates the probability for continuation of the down trend in the monthly quotes. The return and ...

  12. Constructing 1/ω^α noise from reversible Markov chains

    Erland, Sveinung; Greenwood, Priscilla E.

    2007-09-01

    This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.

  13. Bayesian Smoothing Algorithms in Partially Observed Markov Chains

    Ait-el-Fquih, Boujemaa; Desbouvries, François

    2006-11-01

    Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process and r = {r_n}_{n∈N} some auxiliary process. We assume that t = {t_n}_{n∈N} with t_n = (x_n, r_n, y_{n-1}) is a (Triplet) Markov Chain (TMC). TMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMC. We first propose twelve algorithms for general TMC. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMC of classical Kalman-like smoothing algorithms (originally designed for HMC) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.

  14. Geometric allocation approaches in Markov chain Monte Carlo

    The Markov chain Monte Carlo method is a versatile tool in statistical physics to evaluate multi-dimensional integrals numerically. For the method to work effectively, we must consider the following key issues: the choice of ensemble, the selection of candidate states, the optimization of the transition kernel, and the algorithm for choosing a configuration according to the transition probabilities. We show that the unconventional approaches based on the geometric allocation of probabilities or weights can improve the dynamics and scaling of the Monte Carlo simulation in several aspects. Particularly, the approach using the irreversible kernel can reduce or sometimes completely eliminate the rejection of trial moves in the Markov chain. We also discuss how the space-time interchange technique together with Walker's method of aliases can reduce the computational time especially for the case where the number of candidates is large, such as models with long-range interactions

  15. Statistical significance test for transition matrices of atmospheric Markov chains

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
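
    The flavor of such a Monte Carlo significance test can be conveyed with a small sketch in which shuffled sequences supply the null distribution of each transition count; the shuffling null model and the data below are our illustrative assumptions, not necessarily the exact test of the paper:

        import numpy as np

        rng = np.random.default_rng(1)

        def transition_counts(seq, k):
            C = np.zeros((k, k), dtype=int)
            for a, b in zip(seq[:-1], seq[1:]):
                C[a, b] += 1
            return C

        k = 3
        seq = rng.integers(0, k, size=500)        # stand-in for a regime sequence
        obs = transition_counts(seq, k)

        # Null distribution: shuffling preserves regime frequencies but
        # destroys temporal order; recompute counts for each shuffle.
        null = np.array([transition_counts(rng.permutation(seq), k)
                         for _ in range(2000)])

        # Two-sided Monte Carlo p-value for each matrix element.
        dev = np.abs(obs - null.mean(0))
        p = (np.sum(np.abs(null - null.mean(0)) >= dev, axis=0) + 1) \
            / (null.shape[0] + 1)
        print(p.round(3))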

  16. On Dirichlet eigenvectors for neutral two-dimensional Markov chains

    Champagnat, Nicolas; Miclo, Laurent

    2012-01-01

    We consider a general class of discrete, two-dimensional Markov chains modeling the dynamics of a population with two types, without mutation or immigration, and neutral in the sense that type has no influence on each individual's birth or death parameters. We prove that all the eigenvectors of the corresponding transition matrix or infinitesimal generator Π can be expressed as the product of "universal" polynomials of two variables, depending on each type's size but not on the specific transitions of the dynamics, and functions depending only on the total population size. These eigenvectors appear to be Dirichlet eigenvectors for Π on the complement of triangular subdomains, and as a consequence the corresponding eigenvalues are ordered in a specific way. As an application, we study the quasistationary behavior of finite, nearly neutral, two-dimensional Markov chains, absorbed in the sense that 0 is an absorbing state for each component of the process.

  17. Parallel Markov Chain Monte Carlo via Spectral Clustering

    Basse, Guillaume W.; Pillai, Natesh S.; Smith, Aaron

    2016-01-01

    As it has become common to use many computer cores in routine applications, finding good ways to parallelize popular algorithms has become increasingly important. In this paper, we present a parallelization scheme for Markov chain Monte Carlo (MCMC) methods based on spectral clustering of the underlying state space, generalizing earlier work on parallelization of MCMC methods by state space partitioning. We show empirically that this approach speeds up MCMC sampling for multimodal distributions...

  18. A control chart using copula-based Markov chain models

    Long, Ting-Hsuan; Emura, Takeshi

    2014-01-01

    Statistical process control is an important and convenient tool to stabilize the quality of manufactured goods and service operations. The traditional Shewhart control chart has been used extensively for process control, which is valid under the independence assumption of consecutive observations. In real world applications, there are many types of dependent observations in which the traditional control chart cannot be used. In this paper, we propose to apply a copula-based Markov chain to pe...

  19. Maximum entropy estimation of transition probabilities of reversible Markov chains

    Erik Van der Straeten

    2009-01-01

    In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  1. Computational Discrete Time Markov Chain with Correlated Transition Probabilities

    Peerayuth Charnsethikul

    2006-01-01

    This study presents a computational procedure for analyzing statistics of steady-state probabilities in a discrete-time Markov chain with correlations among its transition probabilities. The proposed model uses a first-order Taylor series expansion and properties of the statistical expected value to obtain the resulting system of linear matrix equations. Computationally, the bottleneck is O(n^4) but can be improved by distributed and parallel processing. A preliminary computational experien...
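
    For reference, the steady-state computation that underlies such an analysis is, for a fixed (uncorrelated) transition matrix, a single linear solve. A minimal sketch with an illustrative matrix:

        import numpy as np

        def stationary(P):
            # Stationary distribution of an ergodic DTMC: solve pi P = pi with
            # sum(pi) = 1 by appending the normalization constraint.
            n = P.shape[0]
            A = np.vstack([P.T - np.eye(n), np.ones(n)])
            b = np.zeros(n + 1)
            b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi

        P = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.0, 0.3, 0.7]])
        print(stationary(P))    # long-run fraction of time in each state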

  2. Mortgages and Markov Chains: A Simplified Evaluation Model

    Paul Zipkin

    1993-01-01

    This paper has two purposes. The first is purely expository: to introduce stochastic interest-rate models and security-evaluation methods in a simple mathematical setting. Specifically, we assume the uncertainties in the model are represented by a discrete-time, finite-state Markov chain. Second, using this framework, we present a relatively simple model for the evaluation of mortgage-backed securities.

  3. Model of life insurance policies using Markov chains with rewards

    Sitař, Milan

    Bratislava : University of Economics, 2004 - (Lukáčik, M.), s. 179-186 ISBN 80-8078-012-9. [Quantitative Methods in Economics. Multiple Criteria Decision Making /12./. Virt (SK), 02.06.2004-04.06.2004] R&D Projects: GA ČR GA402/02/1015 Institutional research plan: CEZ:AV0Z1075907 Keywords : Markov chains with rewards * life insurance * mathematical reserve Subject RIV: BB - Applied Statistics, Operational Research

  4. Space system operations and support cost analysis using Markov chains

    Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.

    1990-01-01

    This paper evaluates the use of Markov chain processes in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support costs and the expected life of reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.

  5. Risk-Sensitive Average Optimality in Markov Decision Chains

    Sladký, Karel; Montes-de-Oca, R.

    Berlin : Springer, 2008 - (Kalcsics, J.; Nickel, S.), s. 69-74 ISBN 978-3-540-77902-5. [Annual International Conference of the German Operations Research Society (GOR). Saarbruecken (DE), 05.09.2007-07.09.2007] R&D Projects: GA ČR GA402/05/0115; GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov decision chain * risk-sensitive optimality * asymptotical behaviour Subject RIV: AH - Economics

  6. Fastest Mixing Markov Chain on Symmetric K-Partite Network

    Jafarizadeh, Saber

    2010-01-01

    Solving the fastest mixing Markov chain problem (i.e., finding transition probabilities on the edges that minimize the second largest eigenvalue modulus of the transition probability matrix) over networks with different topologies is one of the primary areas of research in the context of computer science, and one of the well-known networks in this context is the K-partite network. Here we present an analytical solution of the fastest mixing Markov chain problem, by means of stratification and semidefinite programming, for four particular types of K-partite networks, namely Symmetric K-PPDR, Semi Symmetric K-PPDR, Cycle K-PPDR and Semi Cycle K-PPDR networks. Our method is based on the convexity of the fastest mixing Markov chain problem and on inductive comparison of the characteristic polynomials initiated by slackness conditions in order to find the optimal transition probabilities. The presented results show that a Symmetric K-PPDR network and its equivalent Semi Symmetric K-PPDR network have the same SL...
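
    The quantity being minimized, the second largest eigenvalue modulus (SLEM), is easy to compute for any given transition matrix; a small sketch with an illustrative lazy chain:

        import numpy as np

        def slem(P):
            # Second largest eigenvalue modulus: the objective of the
            # fastest-mixing Markov chain problem.
            ev = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
            return ev[1]            # ev[0] is the Perron eigenvalue 1

        # Lazy uniform chain on 3 states: eigenvalues 1, 0.25, 0.25.
        P = np.array([[0.50, 0.25, 0.25],
                      [0.25, 0.50, 0.25],
                      [0.25, 0.25, 0.50]])
        print(slem(P))              # 0.25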

  7. ''adding'' algorithm for the Markov chain formalism for radiation transfer

    The Markov chain radiative transfer method of Esposito and House has been shown to be both efficient and accurate for calculation of the diffuse reflection from a homogeneous scattering planetary atmosphere. The use of a new algorithm similar to the ''adding'' formula of Hansen and Travis extends the application of this formalism to an arbitrarily deep atmosphere. The basic idea for this algorithm is to consider a preceding calculation as a single state of a new Markov chain. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that for a doubling calculation for a homogeneous atmosphere, but for a non-homogeneous atmosphere the new method is considerably faster than the standard ''adding'' routine. As with the standard ''adding'' method, the information on the internal radiation field is lost during the calculation. This method retains the advantage of the earlier Markov chain method that the time required is relatively insensitive to the number of illumination angles or observation angles for which the diffuse reflection is calculated. A technical write-up giving fuller details of the algorithm and a sample code are available from the author

  8. Land Use Change Modeling Using Cellular Automata-Markov Chain in the Mamminasata Region

    Vera Damayanti Peruge, Tiur

    2012-01-01

    A study of land use change in the Mamminasata region was carried out using a Cellular Automata-Markov Chain model. The aim of this research is to analyze land use change from the 2004 and 2009 land use maps of the Mamminasata region in order to obtain the 2012 land use, based on a Markov Chain with Markov transition probability analysis. The results of the analysis were validated using the Kappa m...

  9. Large deviations for Markov chains in the positive quadrant

    The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,..., X(y,0)=y, in the positive quadrant. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=P(X(y,1) ∈ A): for some N≥0 the measure P(y,dx) depends only on x2, y2, and x1-y1 in the domain x1>N, y1>N, and only on x1, y1, and x2-y2 in the domain x2>N, y2>N. For such chains the asymptotic behaviour is found for a fixed set B as s→∞, |x|→∞, and n→∞. Some other conditions on the growth of parameters are also considered, for example, |x-y|→∞, |y|→∞. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities P(X(0,n) ∈ x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to the mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as applied issues, for such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects. The present paper is an attempt to find the extent to which an asymptotic analysis is possible for Markov chains of this type in their general form without using any special properties of the specific applications mentioned above. It turns out that such an analysis is quite

  10. Reversible Markov chain estimation using convex-concave programming

    Trendelkamp-Schroer, Benjamin; Noe, Frank

    2016-01-01

    We present a convex-concave reformulation of the reversible Markov chain estimation problem and outline an efficient numerical scheme for the solution of the resulting problem based on a primal-dual interior point method for monotone variational inequalities. Extensions to situations in which information about the stationary vector is available can also be solved via the convex-concave reformulation. The method can be generalized and applied to the discrete transition matrix reweighting analysis method to perform inference from independent chains with specified couplings between the stationary probabilities. The proposed approach offers a significant speed-up compared to a fixed-point iteration for a number of relevant applications.

  11. Recurrence and invariant measure of Markov chains in double-infinite random environments

    XING; Xiusan

    2001-01-01


  12. Inferring animal densities from tracking data using Markov chains.

    Hal Whitehead

    The distributions and relative densities of species are key to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starting points of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population with respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
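
    The core computation is compact: pool transitions between habitat cells across all tracks, estimate the chain, and take its stationary distribution as the relative-density measure. A toy sketch (fabricated tracks, purely illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        k = 5                                      # number of habitat cells
        tracks = [rng.integers(0, k, size=40) for _ in range(10)]  # fake tracks

        C = np.zeros((k, k))
        for t in tracks:                           # pool transition counts
            for a, b in zip(t[:-1], t[1:]):
                C[a, b] += 1
        P = C / C.sum(axis=1, keepdims=True)

        # Stationary distribution = left eigenvector of P for eigenvalue 1.
        w, V = np.linalg.eig(P.T)
        pi = np.real(V[:, np.argmax(np.real(w))])
        pi = pi / pi.sum()
        print(pi)   # relative density per cell, independent of track starts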

  13. Robust Dynamics and Control of a Partially Observed Markov Chain

    In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits; see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models: Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established.

  14. Dynamic temperature selection for parallel-tempering in Markov chain Monte Carlo simulations

    Vousden, Will; Mandel, Ilya

    2015-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multi-modal probability distributions. Most popular methods, such as Markov chain Monte Carlo sampling, perform poorly on strongly multi-modal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve lower autocorrelation time for the sampler (and therefore more efficient sampling). In particular, we present a simple, easily-implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling in order to ...
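
    For orientation, a minimal parallel-tempering loop (with a fixed, hand-picked temperature ladder rather than the paper's adaptive one; the bimodal target and all parameters are illustrative) looks like this:

        import numpy as np

        rng = np.random.default_rng(3)

        def logp(x):                               # two well-separated modes
            return np.logaddexp(-0.5 * (x - 5)**2, -0.5 * (x + 5)**2)

        betas = 1.0 / np.array([1.0, 2.0, 4.0, 8.0])   # inverse temperatures
        x = rng.standard_normal(len(betas))            # one walker per rung

        for step in range(5000):
            # Metropolis update within each tempered chain.
            prop = x + rng.standard_normal(len(x))
            accept = np.log(rng.random(len(x))) < betas * (logp(prop) - logp(x))
            x = np.where(accept, prop, x)
            # Propose a swap between a random adjacent pair of temperatures.
            i = rng.integers(0, len(betas) - 1)
            log_alpha = (betas[i] - betas[i + 1]) * (logp(x[i + 1]) - logp(x[i]))
            if np.log(rng.random()) < log_alpha:
                x[i], x[i + 1] = x[i + 1], x[i]

        print(x[0])    # the beta = 1 chain samples the original target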

  15. A Markov chain model for CANDU feeder pipe degradation

    There is a need for a risk-based approach to manage feeder pipe degradation, to ensure safe operation by minimizing the nuclear safety risk. The current lack of understanding of some fundamental degradation mechanisms results in uncertainty in predicting the rupture frequency. There are still concerns caused by uncertainties in the inspection techniques and engineering evaluations which should be addressed in the current procedures. A probabilistic approach is therefore useful in quantifying the risk, and it also provides a tool for risk-based decision making. This paper discusses the application of a Markov chain model to feeder pipes in order to predict and manage the risks associated with the existing and future aging-related feeder degradation mechanisms. The major challenge in the approach is the lack of service data for characterizing the transition probabilities of the Markov model. The paper also discusses various approaches to estimating plant-specific degradation rates. (author)

  16. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a ''proof of concept'', and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user

  17. DREAM(D): an adaptive Markov Chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, and combinatorial posterior parameter estimation problems

    C. J. F. Ter Braak

    2011-12-01

    Formal and informal Bayesian approaches have found widespread implementation and use in environmental modeling to summarize parameter and predictive uncertainty. Successful implementation of these methods relies heavily on the availability of efficient sampling methods that approximate, as closely and consistently as possible, the (evolving) posterior target distribution. Much of this work has focused on continuous variables that can take on any value within their prior defined ranges. Here, we introduce the theory and concepts of a discrete sampling method that resolves the parameter space at fixed points. This new code, entitled DREAM(D), uses the recently developed DREAM algorithm (Vrugt et al., 2008, 2009a, b) as its main building block but implements two novel proposal distributions to help solve discrete and combinatorial optimization problems. This novel MCMC sampler maintains detailed balance and ergodicity, and is especially designed to resolve the emerging class of optimal experimental design problems. Three different case studies involving a Sudoku puzzle, a soil water retention curve, and a rainfall-runoff model calibration problem are used to benchmark the performance of DREAM(D). The theory and concepts developed herein can be easily integrated into other (adaptive) MCMC algorithms.

  18. Orlicz integrability of additive functionals of Harris ergodic Markov chains

    Adamczak, Radosław

    2012-01-01

    For a Harris ergodic Markov chain $(X_n)_{n\ge 0}$, on a general state space, started from the so-called small measure or from the stationary distribution, we provide optimal estimates for Orlicz norms of sums $\sum_{i=0}^\tau f(X_i)$, where $\tau$ is the first regeneration time of the chain. The estimates are expressed in terms of other Orlicz norms of the function $f$ (with respect to the stationary distribution) and the regeneration time $\tau$ (with respect to the small measure). We provide applications to tail estimates for additive functionals of the chain $(X_n)$ generated by unbounded functions as well as to classical limit theorems (CLT, LIL, Berry-Esseen).

  19. Mixed Vehicle Flow At Signalized Intersection: Markov Chain Analysis

    Gertsbakh Ilya B.

    2015-09-01

    We assume that a Poisson flow of vehicles arrives at an isolated signalized intersection, and each vehicle, independently of others, represents a random number X of passenger car units (PCUs). We analyze numerically the stationary distribution of the queue process {Zn}, where Zn is the number of PCUs in the queue at the beginning of the n-th red phase, n → ∞. We approximate the number Yn of PCUs arriving during one red-green cycle by a two-parameter Negative Binomial Distribution (NBD). It is well known that {Zn} follows an infinite-state Markov chain. We approximate its stationary distribution using a finite-state Markov chain. We show numerically that there is a strong dependence of the mean queue length E[Zn] in equilibrium on the input distribution of Yn and, in particular, on the ”over-dispersion” parameter γ = Var[Yn]/E[Yn]. For Poisson input, γ = 1; γ > 1 indicates the presence of heavy-tailed input. In reality it means that a relatively large ”portion” of PCUs, considerably exceeding the average, may arrive with high probability during one red-green cycle. Empirical formulas are presented for an accurate estimation of the mean queue length as a function of the load and the γ of the input flow. Using the Markov chain technique, we analyze the mean ”virtual” delay time for a car which always arrives at the beginning of the red phase.
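
    The finite-state approximation can be sketched directly from the queue recursion Z_{n+1} = max(Z_n + Y_n - c, 0), where c is the service capacity per cycle. The parameter values below are illustrative, and SciPy is assumed for the NBD probabilities:

        import numpy as np
        from scipy.stats import nbinom

        c, N = 10, 200                    # capacity per cycle, truncation level
        mean_Y, gamma = 8.0, 1.5          # mean arrivals, over-dispersion

        # NBD parameterization: Var = gamma * mean => p = 1/gamma,
        # r = mean * p / (1 - p) in SciPy's (r, p) convention.
        p = 1.0 / gamma
        r = mean_Y * p / (1 - p)
        py = nbinom.pmf(np.arange(0, N + c + 1), r, p)

        # Transition matrix of the truncated queue chain.
        P = np.zeros((N + 1, N + 1))
        for z in range(N + 1):
            for k, pk in enumerate(py):
                P[z, min(max(z + k - c, 0), N)] += pk
        P /= P.sum(axis=1, keepdims=True)  # lump the tiny truncated tail mass

        w, V = np.linalg.eig(P.T)
        pi = np.real(V[:, np.argmax(np.real(w))])
        pi /= pi.sum()
        print("mean queue E[Z] ~", (np.arange(N + 1) * pi).sum())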

  20. SDI and Markov Chains for Regional Drought Characteristics

    Chen-Feng Yeh

    2015-08-01

    In recent years, global climate change has altered precipitation patterns, causing uneven spatial and temporal distribution of precipitation that gradually induces precipitation polarization phenomena. Taiwan is located in the subtropical climate zone, with distinct wet and dry seasons, which makes the polarization phenomenon more obvious; this has also led to a large difference between river flows during the wet and dry seasons, which is significantly influenced by precipitation, resulting in hydrological drought. Therefore, to effectively address the growing issue of water shortages, it is necessary to explore and assess the drought characteristics of river systems. In this study, the drought characteristics of northern Taiwan were studied using the streamflow drought index (SDI) and Markov chains. Analysis results showed that the year 2002 was a turning point for drought severity in both the Lanyang River and Yilan River basins; the severity of rain events in the Lanyang River basin increased after 2002, and the severity of drought events in the Yilan River basin exhibited a gradual upward trend. In the study of drought severity, analysis results from periods of three months (November to January) and six months (November to April) have shown significant drought characteristics. In addition, analysis of drought occurrence probabilities using the method of Markov chains has shown that the occurrence probabilities of drought events are higher in the Lanyang River basin than in the Yilan River basin; particularly for extreme events, the occurrence probability of an extreme drought event is 20.6% during the dry season (November to April) in the Lanyang River basin, and 3.4% in the Yilan River basin. This study shows that for analysis of drought/wet occurrence probabilities, the results obtained for the drought frequency and occurrence probability using short-term data with the method of Markov chains can be used to predict the long-term occurrence

  1. LISA data analysis using Markov chain Monte Carlo methods

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions
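
    A stripped-down illustration of the annealing-then-supercooling idea described above (with a trivial stand-in for the F-statistic likelihood; the schedule and target are our assumptions, not the paper's configuration):

        import numpy as np

        rng = np.random.default_rng(4)

        def loglike(x):                   # stand-in for the real likelihood
            return -0.5 * np.sum((x - 3.0)**2)

        x = rng.standard_normal(4)
        # Anneal from T = 10 down to T = 1, then supercool below T = 1 to
        # sharpen the chain around the maximum-likelihood point.
        schedule = np.concatenate([np.geomspace(10.0, 1.0, 3000),
                                   np.geomspace(1.0, 0.01, 2000)])
        for T in schedule:
            prop = x + 0.5 * rng.standard_normal(x.size)
            if np.log(rng.random()) < (loglike(prop) - loglike(x)) / T:
                x = prop
        print(x)    # near the maximum-likelihood point (3, 3, 3, 3)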

  2. Imputing unknown competitor marketing activity with a Hidden Markov Chain

    Haughton, Dominique; Hua, Guangying; Jin, Danny; Lin, John; Wei, Qizhi; Zhang, Changan

    2014-01-01

    We demonstrate on a case study with two competing products at a bank how one can use a Hidden Markov Chain (HMC) to estimate missing information on a competitor's marketing activity. The idea is that given time series with sales volumes for products A and B and marketing expenditures for product A, as well as suitable predictors of sales for products A and B, we can infer at each point in time whether it is likely or not that marketing activities took place for product B. The method is succes...

  3. Renewal Theory for Markov Chains on the Real Line

    Keener, Robert W.

    1982-01-01

    Standard renewal theory is concerned with expectations related to sums of positive i.i.d. variables, $S_n = \sum^n_{i=1} Z_i$. We generalize this theory to the case where $\{S_i\}$ is a Markov chain on the real line with stationary transition probabilities satisfying a drift condition. The expectations we are concerned with satisfy generalized renewal equations, and in our main theorems, we show that these expectations are the unique solutions of the equations they satisfy.

  4. Second Order Optimality in Transient and Discounted Markov Decision Chains

    Sladký, Karel

    Plzeň: University of West Bohemia, Plzeň, 2015, s. 731-736. ISBN 978-80-261-0539-8. [Mathematical Methods in Economics 2015 /33./. Cheb (CZ), 09.09.2015-11.09.2015] R&D Projects: GA ČR GA13-14445S; GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : dynamic programming * discounted and transient Markov reward chains * reward-variance optimality Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2015/E/sladky-0448938.pdf

  5. Topological charge evolution in the Markov chain of QCD

    The topological charge is studied on lattices of large physical volume and fine lattice spacing. We illustrate how a parity transformation on the SU(3) link-variables of lattice gauge configurations reverses the sign of the topological charge and leaves the action invariant. Random applications of the parity transformation are proposed to traverse from one topological charge sign to the other. The transformation provides an improved unbiased estimator of the ensemble average and is essential in improving the ergodicity of the Markov chain process

  6. Markov chain model for particle migration at the repository scale

    A model for particle migration at multiple scales is developed using the Markov chain probability model. The goal of the model is to enable analyses of radionuclide migration at the repository scale based on the information obtained in smaller-scale detailed analyses by other models. The geologic domain is divided into an array of compartments, and particle migration is simulated by transitions from one compartment to another based on transition probabilities. Nuclide transport in a hypothetical repository with heterogeneous flow due to random connectivity between compartments is demonstrated. In comparison with an analytical continuum model of mass transport, the results from the present model show good agreement. (author)

  7. MARKOV CHAIN MODELING OF PERFORMANCE DEGRADATION OF PHOTOVOLTAIC SYSTEM

    E. Suresh Kumar

    2012-01-01

    Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, in a sequence of chance experiments, all of the past outcomes could influence the predictions for the next experiment. In a Markov chain, the outcome of a given experiment can affect the outcome of the next experiment. The system state changes with time, and the state X and time t are two random variables; each of these variables can be either continuous or discrete. Various degradations of photovoltaic (PV) systems can be viewed as different Markov states, and further degradation can be treated as the outcome of the present state. The PV system is treated as a discrete-state, continuous-time system with four possible states, namely, s1: good condition; s2: system with partial degradation failures but fully operational; s3: system with major faults and partially working, hence partial output power; s4: system completely failed. The calculation of the reliability of the photovoltaic system is complicated since the system has elements or subsystems exhibiting dependent failures and involving repair and standby operations. The Markov model is a technique that has much appeal and works well when failure hazards and repair hazards are constant. The usual reliability analysis techniques include FMEA (failure mode and effect analysis), parts count analysis, RBD (reliability block diagram), FTA (fault tree analysis), etc. These are logical, Boolean, and block diagram approaches and never account for the effect of environmental degradation on the performance of the system. This is particularly relevant in the case of PV systems, which are operated under harsh environmental conditions. This paper is an insight into the degradation of performance of PV systems, presenting a Markov model of the system by means of the different states and the transitions between these states.
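
    The four-state model described above can be written down directly as a continuous-time Markov chain with a generator matrix; the rates below are illustrative placeholders, not values from the paper, and SciPy's matrix exponential is assumed:

        import numpy as np
        from scipy.linalg import expm

        # States: s1 good, s2 partial degradation, s3 major faults, s4 failed.
        lam12, lam23, lam34 = 0.10, 0.05, 0.02   # degradation rates (per year)
        mu21, mu32 = 0.30, 0.10                  # repair rates (per year)

        Q = np.array([
            [-lam12,          lam12,            0.0,   0.0],
            [  mu21, -(mu21 + lam23),          lam23,  0.0],
            [   0.0,           mu32, -(mu32 + lam34), lam34],
            [   0.0,            0.0,            0.0,   0.0],   # s4 absorbing
        ])

        p0 = np.array([1.0, 0.0, 0.0, 0.0])      # start in good condition
        for t in (1, 5, 10, 20):
            print(t, "years:", (p0 @ expm(Q * t)).round(3))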

  8. Probabilistic approach of water residence time and connectivity using Markov chains with application to tidal embayments

    Bacher, C.; Filgueira, R.; Guyondet, T.

    2016-01-01

    Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence times, first passage times, rates of transfer between nodes, and numbers of passages in a node. We propose to adapt an algorithm already published for simple systems to physical systems described by a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the Eastern Coast of Canada that are of interest for shellfish aquaculture. Current velocities were computed using a 2-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing probabilities of transition between elements and assessing the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that the Markov chain approach is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g., the depletion index and carrying capacity assessment. The Markov chain approach has several advantages with respect to the estimation of connectivity between pairs of sites. It makes it possible to estimate transfer rates and times at once in a very quick and efficient way, without the need to perform long-term simulations of particle or tracer concentration.
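
    The residence-time computation has a compact linear-algebra core: with substochastic transition probabilities Q between compartments (row sums below 1 where water can leave the system), the fundamental matrix (I - Q)^(-1) yields expected numbers of visits and hence residence times. A toy sketch with a fabricated 3-compartment system:

        import numpy as np

        dt = 1.0                                  # time step per transition (h)
        Q = np.array([[0.6, 0.3, 0.0],            # row sums < 1: leakage to
                      [0.2, 0.5, 0.2],            # the open boundary
                      [0.0, 0.3, 0.5]])
        N = np.linalg.inv(np.eye(3) - Q)          # expected visits before exit
        residence = N.sum(axis=1) * dt            # mean residence time per cell
        print(residence)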

  9. On Markov Chains Induced by Partitioned Transition Probability Matrices

    Thomas KAIJSER

    2011-01-01

    Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P. Let K denote the set of probability vectors on S. With every partition M of P we can associate a transition probability function PM on K defined in such a way that if p ∈ K and M ∈ M are such that ‖pM‖ > 0, then, with probability ‖pM‖, the vector p is transferred to the vector pM/‖pM‖. Here ‖·‖ denotes the l1-norm. In this paper we investigate the convergence in distribution for Markov chains generated by transition probability functions induced by partitions of transition probability matrices. The main motivation for this investigation is the application of the convergence results obtained to filtering processes of partially observed Markov chains with denumerable state space.

  10. Simulation of daily rainfall through markov chain modeling

    In an agricultural country, the inhabitants of dry-land cultivated areas mainly rely on daily rainfall for watering their fields. A stochastic model based on a first-order Markov chain was developed to simulate daily rainfall data for Multan, D. I. Khan, Nawabshah, Chilas and Barkhan for the period 1981-2010. Transition probability matrices of the first-order Markov chain were utilized to generate daily rainfall occurrence, while the gamma distribution was used to generate daily rainfall amounts. In order to obtain the parametric values for the mentioned cities, the method of moments was used to estimate the shape and scale parameters, which lead to synthetic sequence generation as per the gamma distribution. In this study, unconditional and conditional probabilities of wet and dry days, together with means and standard deviations, are considered the essential parameters for the simulated stochastic generation of daily rainfall. It was found that the computer-generated synthetic rainfall series agreed well with the actual observed rainfall series. (author)
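
    A minimal version of the generator described above, with a two-state occurrence chain and gamma-distributed wet-day amounts (the transition probabilities and gamma parameters below are illustrative, not the fitted values for the five stations):

        import numpy as np

        rng = np.random.default_rng(5)

        p_wd = 0.15                  # P(wet tomorrow | dry today)
        p_ww = 0.55                  # P(wet tomorrow | wet today)
        shape, scale = 0.8, 12.0     # gamma parameters for wet-day amounts (mm)

        wet, series = False, []
        for day in range(365):
            # Occurrence: first-order two-state Markov chain.
            wet = rng.random() < (p_ww if wet else p_wd)
            # Amount: gamma draw on wet days, zero otherwise.
            series.append(rng.gamma(shape, scale) if wet else 0.0)

        series = np.array(series)
        print("wet days:", (series > 0).sum(),
              "annual total (mm):", series.sum().round(1))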

  11. Efficient Parallel Learning of Hidden Markov Chain Models on SMPs

    Li, Lei; Fu, Bin; Faloutsos, Christos

    Quad-core cpus have been a common desktop configuration for today's office. The increasing number of processors on a single chip opens new opportunity for parallel computing. Our goal is to make use of the multi-core as well as multi-processor architectures to speed up large-scale data mining algorithms. In this paper, we present a general parallel learning framework, Cut-And-Stitch, for training hidden Markov chain models. Particularly, we propose two model-specific variants, CAS-LDS for learning linear dynamical systems (LDS) and CAS-HMM for learning hidden Markov models (HMM). Our main contribution is a novel method to handle the data dependencies due to the chain structure of hidden variables, so as to parallelize the EM-based parameter learning algorithm. We implement CAS-LDS and CAS-HMM using OpenMP on two supercomputers and a quad-core commercial desktop. The experimental results show that parallel algorithms using Cut-And-Stitch achieve comparable accuracy and almost linear speedups over the traditional serial version.

  12. A Markov chain model for reliability growth and decay

    Siegrist, K.

    1982-01-01

    A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure' and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.

  14. Radiative transfer calculated from a Markov chain formalism

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.

  15. SATMC: Spectral Energy Distribution Analysis Through Markov Chains

    Johnson, S P; Tang, Y; Scott, K S

    2013-01-01

    We present the general purpose spectral energy distribution (SED) fitting tool SED Analysis Through Markov Chains (SATMC). Utilizing Monte Carlo Markov Chain (MCMC) algorithms, SATMC fits an observed SED to SED templates or models of the user's choice to infer intrinsic parameters, generate confidence levels and produce the posterior parameter distribution. Here we describe the key features of SATMC from the underlying MCMC engine to specific features for handling SED fitting. We detail several test cases of SATMC, comparing results obtained to traditional least-squares methods, which highlight its accuracy, robustness and wide range of possible applications. We also present a sample of submillimetre galaxies that have been fitted using the SED synthesis routine GRASIL as input. In general, these SMGs are shown to occupy a large volume of parameter space, particularly in regards to their star formation rates which range from ~30-3000 M_sun yr^-1 and stellar masses which range from ~10^10-10^12 M_sun. Taking a...

  16. Maximum Likelihood Estimation in Gaussian Chain Graph Models under the Alternative Markov Property

    Drton, Mathias; Eichler, Michael

    2005-01-01

    The AMP Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced LWF Markov property that is coherent with data-generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportiona...

  17. On the relation between recurrence and ergodicity properties in denumerable Markov decision chains

    R. Dekker (Rommert); A. Hordijk (Arie); F.M. Spieksma

    1994-01-01

    This paper studies two properties of the set of Markov chains induced by the deterministic policies in a Markov decision chain. These properties are called μ-uniform geometric ergodicity and μ-uniform geometric recurrence. μ-uniform ergodicity generalises a quasi-compactness condition.

  18. Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains

    Han, G; Marcus, BH

    2008-01-01

    We derive an asymptotic formula for the entropy rate of a hidden Markov chain under certain parameterizations. We also discuss applications of the asymptotic formula to the asymptotic behavior of the entropy rate of hidden Markov chains as outputs of certain channels, such as the binary symmetric channel, the binary erasure channel, and some special Gilbert-Elliott channels.

  19. A Markov Chain Estimator of Multivariate Volatility from High Frequency Data

    Hansen, Peter Reinhard; Horel, Guillaume; Lunde, Asger; Archakov, Ilya

    We introduce a multivariate estimator of financial volatility that is based on the theory of Markov chains. The Markov chain framework takes advantage of the discreteness of high-frequency returns. We study the finite sample properties of the estimator in a simulation study and apply it to...

  20. HYDRA: a Java library for Markov Chain Monte Carlo

    Gregory R. Warnes

    2002-03-01

    Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.

  1. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.

  2. Solution of the Markov chain for the dead time problem

    A method for solving the equation for the Markov chain, describing the effect of a non-extendible dead time on the statistics of time correlated pulses, is discussed. The equation, which was derived in an earlier paper, describes a non-linear process and is not amenable to exact solution. The present method consists of representing the probability generating function as a factorial cumulant expansion and neglecting factorial cumulants beyond the second. This results in a closed set of non-linear equations for the factorial moments. Stationary solutions of these equations, which are of interest for calculating the count rate, are obtained iteratively. The method is applied to the variable dead time counter technique for estimation of system parameters in passive neutron assay of Pu and reactor noise analysis. Comparisons of results by this method with Monte Carlo calculations are presented. (author)

  3. Rate-Distortion via Markov Chain Monte Carlo

    Jalali, Shirin

    2008-01-01

    We propose an approach to lossy source coding, utilizing ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a 'heat bath algorithm': Starting from an initial candidate reconstruction (say the original source sequence), at every iteration, an index i is chosen and the i-th sequence component is replaced by drawing from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and only linearly dependent on a certain context parameter (which grows sub-log...

  4. Analysis of Users Web Browsing Behavior Using Markov chain Model

    Diwakar Shukla

    2011-03-01

    In the present days of growing information technology, many browsers are available for surfing and web mining, and a user has the option to use any one of them at a time to mine out the desired website. Every browser has a pre-defined level of popularity and reputation in the market. This paper considers the setup of only two browsers in a computer system; a user prefers one of them and, if it fails, switches to the other. The behavior of the user is modeled through a Markov chain procedure and transition probabilities are calculated. Quitting browsing is treated as a parameter of variation over the popularity. A graphical study is performed to explain the interrelationship between user behavior parameters and browser market popularity parameters. If a company's browser has the lowest failure rate and the lowest quitting probability, then the company enjoys better popularity and a larger user proportion
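
    A toy version of the described model treats "using browser 1", "using browser 2", and "quit" as the states of a chain; failure of the current browser triggers a switch, and quitting is absorbing (all rates below are illustrative):

        import numpy as np

        f1, f2, q = 0.10, 0.20, 0.05     # failure rates, quitting probability

        P = np.array([
            [(1 - q) * (1 - f1), (1 - q) * f1,       q],   # stay on B1 unless
            [(1 - q) * f2,       (1 - q) * (1 - f2), q],   # it fails or quit
            [0.0,                0.0,                1.0], # quit is absorbing
        ])

        p = np.array([0.5, 0.5, 0.0])    # initial user split
        for n in range(50):
            p = p @ P
        print(p.round(3))   # the lower-failure browser retains more users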

  5. On the multi-level solution algorithm for Markov chains

    Horton, G. [Univ. of Erlangen, Nuernberg (Germany)]

    1996-12-31

    We discuss the recently introduced multi-level algorithm for the steady-state solution of Markov chains. The method is based on the aggregation principle, which is well established in the literature. Recursive application of the aggregation yields a multi-level method which has been shown experimentally to give results significantly faster than the methods currently in use. The algorithm can be reformulated as an algebraic multigrid scheme of Galerkin-full approximation type. The uniqueness of the scheme stems from its solution-dependent prolongation operator which permits significant computational savings in the evaluation of certain terms. This paper describes the modeling of computer systems to derive information on performance, measured typically as job throughput or component utilization, and availability, defined as the proportion of time a system is able to perform a certain function in the presence of component failures and possibly also repairs.

  6. Kinetics and thermodynamics of first-order Markov chain copolymerization

    Gaspard, P.; Andrieux, D.

    2014-07-01

    We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.

  9. Application of the Markov chain approximation to the sunspot observations

    The positions of the 13,588 sunspot groups observed during the cycle of 1950-1960 at the Istanbul University Observatory have been corrected for the effect of differential rotation. The probability of evolution of a sunspot group into another one in the same region has been determined. By using the Markov chain approximation, the types of these groups and their transition probabilities during the following activity cycle, and the concentration of active regions during 1950-1960, have been estimated. The transition probabilities from the observations of the activity cycle 1960-1970 have been compared with the predicted transition probabilities and a good correlation has been noted.

  10. On the Multilevel Solution Algorithm for Markov Chains

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.

  11. Projection methods for the numerical solution of Markov chain models

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.

  12. Dynamics of Markov Chains with Two Components (Dinamika Pada Rantai Markov Dengan Dua Komponen)

    Yakub, Riki

    2010-01-01

    The dynamics of a Markov chain with two components are governed by the eigenvalues of its transition probability matrix and by the given initial state. Based on the value of λ2 obtained, the dynamics of a two-component Markov chain can be grouped into three main cases, namely: a. the dynamics of a two-component Markov chain when the value 0
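
    For a two-state chain the role of λ2 is explicit: with P = [[1-p, p], [q, 1-q]], the eigenvalues are 1 and λ2 = 1 - p - q, and the deviation from equilibrium shrinks by exactly λ2 each step. A small sketch (the values of p and q are arbitrary):

```python
import numpy as np

# Two-state chain: eigenvalues of P are 1 and lambda2 = 1 - p - q.
p, q = 0.3, 0.5
P = np.array([[1 - p, p],
              [q, 1 - q]])
lam2 = 1 - p - q
pi = np.array([q, p]) / (p + q)          # stationary distribution

x = np.array([1.0, 0.0])                 # initial distribution
for n in range(1, 6):
    x = x @ P
    # The deviation from equilibrium is exactly lam2**n * (x0 - pi),
    # so the rescaled deviation printed below stays constant.
    print(n, x, (x - pi) / lam2**n)
```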

  13. Random billiards with wall temperature and associated Markov chains

    By a random billiard we mean a billiard system in which the standard rule of specular reflection is replaced with a Markov transition probability operator P that gives, at each collision of the billiard particle with the boundary of the billiard domain, the probability distribution of the post-collision velocity for a given pre-collision velocity. A random billiard with microstructure, or RBM for short, is a random billiard for which P is derived from a choice of geometric/mechanical structure on the boundary of the billiard domain, as explained in the text. Such systems provide simple and explicit mechanical models of particle–surface interaction that can incorporate thermal effects and permit a detailed study of thermostatic action from the perspective of the standard theory of Markov chains on general state spaces. The main focus of this paper is on the operator P itself and how it relates to the mechanical and geometric features of the microstructure, such as mass ratios, curvatures, and potentials. The main results are as follows: (1) we give a characterization of the stationary probabilities (equilibrium states) of P and show how standard equilibrium distributions studied in classical statistical mechanics such as the Maxwell–Boltzmann distribution and the Knudsen cosine law arise naturally as generalized invariant billiard measures; (2) we obtain some of the more basic functional theoretic properties of P, in particular that P is under very general conditions a self-adjoint operator of norm 1 on a Hilbert space to be defined below, and show in a simple but somewhat typical example that P is a compact (Hilbert–Schmidt) operator. This leads to the issue of relating the spectrum of eigenvalues of P to the geometric/mechanical features of the billiard microstructure; (3) we explore the latter issue, both analytically and numerically in a few representative examples. Additionally, (4) a general algorithm for simulating the Markov chains is given based on

  14. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  15. Accelerating Monte Carlo Markov chains with proxy and error models

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
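
    A minimal sketch of the two-stage idea (in the spirit of delayed-acceptance MCMC; the one-dimensional densities below are toy stand-ins for the exact flow model and for the proxy corrected by the error model):

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_log_post(theta):               # expensive "exact" model (stand-in)
    return -0.5 * theta**2

def proxy_log_post(theta):               # cheap proxy + error model (stand-in)
    return -0.5 * (theta / 1.1)**2

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 1.0)  # symmetric random-walk proposal
    # Stage 1: screen the proposal with the corrected proxy only.
    if rng.random() < min(1.0, np.exp(proxy_log_post(prop) - proxy_log_post(theta))):
        # Stage 2: evaluate the exact model only for promoted proposals;
        # this ratio restores exactness of the target distribution.
        ratio = np.exp((exact_log_post(prop) - exact_log_post(theta))
                       - (proxy_log_post(prop) - proxy_log_post(theta)))
        if rng.random() < min(1.0, ratio):
            theta = prop
    chain.append(theta)
print(np.mean(chain), np.std(chain))     # should match the exact posterior
```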

  16. Some Limit Properties of Random Transition Probability for Second-Order Nonhomogeneous Markov Chains Indexed by a Tree

    Zhiyan Shi; Weiguo Yang

    2009-01-01

    We study some limit properties of the harmonic mean of random transition probability for a second-order nonhomogeneous Markov chain and a nonhomogeneous Markov chain indexed by a tree. As a corollary, we obtain the property of the harmonic mean of random transition probability for a nonhomogeneous Markov chain.

  17. Prediction of Synchrostate Transitions in EEG Signals Using Markov Chain Models

    Jamal, Wasifa; Oprescu, Ioana-Anastasia; Maharatna, Koushik

    2014-01-01

    This paper proposes a stochastic model using the concept of Markov chains for the inter-state transitions of the millisecond-order quasi-stable phase-synchronized patterns, or synchrostates, found in multi-channel electroencephalogram (EEG) signals. First- and second-order transition probability matrices are estimated for Markov chain modelling from 100 trials of 128-channel EEG signals during two different face perception tasks. Prediction accuracies with such finite Markov chain models for synchrostate transition are also compared under a data-partitioning based cross-validation scheme.

  18. Markov chain-based numerical method for degree distributions of growing networks

    In this paper, we establish a relation between growing networks and Markov chains, and propose a computational approach for network degree distributions. Using the Barabasi-Albert model as an example, we first show that the degree evolution of a node in a growing network follows a nonhomogeneous Markov chain. Exploiting the special structure of these Markov chains, we develop an efficient algorithm to compute the degree distribution numerically with a computational complexity of O(t^2), where t is the number of time steps. We use three examples to demonstrate the computation procedure and compare the results with those from existing methods.
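
    A hedged sketch of this kind of recursion for the node born at time s in a Barabasi-Albert model with one edge per new node, where the standard attachment probability is k/(2t); the paper's actual algorithm may differ in its details:

```python
import numpy as np

def degree_distribution(s, T):
    """Distribution of the degree of the node born at time s, after T steps,
    for a BA-type model with m = 1: P(k -> k+1 at step t) = k / (2 t)."""
    q = np.zeros(T - s + 2)        # q[k] = P(degree == k)
    q[1] = 1.0                     # the node enters with degree 1
    for t in range(s, T):          # O(T) states per step -> O(T^2) overall
        p_up = np.minimum(np.arange(len(q)) / (2.0 * t), 1.0)
        move = q * p_up            # probability mass that gains an edge
        q = q * (1.0 - p_up)
        q[1:] += move[:-1]         # degree k-1 -> k
    return q

q = degree_distribution(s=5, T=2000)
k = np.arange(len(q))
print((k * q).sum())               # mean degree; BA theory gives ~ (T/s)**0.5
```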

  19. Large Deviations for Empirical Measures of Not Necessarily Irreducible Countable Markov Chains with Arbitrary Initial Measures

    Yi Wen JIANG; Li Ming WU

    2005-01-01

    All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.

  20. Descriptive and predictive evaluation of high resolution Markov chain precipitation models

    Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten

    2012-01-01

    A first-order Markov model seems to capture most of the properties of precipitation, but inclusion of seasonal and diurnal variation improves the model. Including a second-order Markov chain component does improve the descriptive capabilities of the model, but is very expensive in its parameter use ... and necessary tools when evaluating model fit and performance.

  1. Seriation in paleontological data using markov chain Monte Carlo methods.

    Kai Puolamäki

    2006-02-01

    Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.

  2. Real time Markov chains: Wind states in anemometric data

    Sanchez, P A; Jaramillo, O A

    2015-01-01

    The description of wind phenomena is frequently based on data obtained from anemometers, which usually report the wind speed and direction only in a horizontal plane. Such measurements are commonly used either to develop wind generation farms or to forecast weather conditions in a geographical region. Beyond these standard applications, the information contained in the data may be richer than expected and may lead to a better understanding of the wind dynamics in a geographical area. In this work we propose a statistical analysis based on the wind velocity vectors, which may be grouped into "wind states" associated with binormal distribution functions. We found that the velocity plane defined by the anemometric velocity data may be used as a phase space, where a finite number of states may be found and sorted using standard clustering methods. The main result is a discretization technique useful to model the wind with Markov chains. We applied these ideas to anemometric data for two different sites in M...
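
    A compressed sketch of the pipeline under stated assumptions: synthetic two-mode velocity data stand in for anemometer records, scikit-learn's KMeans plays the role of the clustering step, and consecutive state labels yield the Markov transition matrix:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical hourly wind velocity vectors (vx, vy) with two regimes.
v = rng.normal(size=(5000, 2)) + rng.choice([[3.0, 0.0], [-2.0, 2.0]], size=5000)

# Sort the velocity plane into a finite set of "wind states" ...
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(v)

# ... and estimate the Markov transition matrix between consecutive states.
P = np.zeros((k, k))
for a, b in zip(labels, labels[1:]):
    P[a, b] += 1
rows = P.sum(axis=1, keepdims=True)
P = np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)
print(np.round(P, 3))
```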

  3. Hidden Markov chain modeling for epileptic networks identification.

    Le Cam, Steven; Louis-Dorr, Valérie; Maillard, Louis

    2013-01-01

    The partial epileptic seizures are often considered to be caused by a wrong balance between inhibitory and excitatory interneuron connections within a focal brain area. These abnormal balances are likely to result in loss of functional connectivities between remote brain structures, while functional connectivities within the incriminated zone are enhanced. The identification of the epileptic networks underlying these hypersynchronies is expected to contribute to a better understanding of the brain mechanisms responsible for the development of the seizures. To this end, threshold strategies are commonly applied, based on synchrony measurements computed from recordings of the electrophysiologic brain activity. However, such methods are reported to be prone to errors and false alarms. In this paper, we propose a hidden Markov chain modeling of the synchrony states with the aim of developing reliable machine learning methods for epileptic network inference. The method is applied on a real Stereo-EEG recording, demonstrating consistent results with the clinical evaluations and with the current knowledge on temporal lobe epilepsy. PMID:24110697

  4. ENSO informed Drought Forecasting Using Nonhomogeneous Hidden Markov Chain Model

    Kwon, H.; Yoo, J.; Kim, T.

    2013-12-01

    The study aims at developing a new scheme to investigate the potential use of ENSO (El Niño/Southern Oscillation) for drought forecasting. In this regard, the objective of this study is to extend a previously developed nonhomogeneous hidden Markov chain model (NHMM) to identify climate states associated with drought that can potentially be used to forecast drought conditions using climate information. As a target variable for forecasting, the SPI (standardized precipitation index) is mainly utilized. This study collected monthly precipitation data over 56 stations covering more than 30 years, and K-means cluster analysis using drought properties was applied to partition regions into mutually exclusive clusters. In this study, six main clusters were distinguished through the regionalization procedure. For each cluster, the NHMM was applied to estimate the transition probability of hidden states as well as drought conditions informed by large-scale climate indices (e.g. SOI, Nino1.2, Nino3, Nino3.4, MJO and PDO). The NHMM coupled with large-scale climate information shows promise as a technique for forecasting drought scenarios. A more detailed explanation of large-scale climate patterns associated with the identified hidden states will be provided with anomaly composites of SSTs and SLPs. Acknowledgement: This research was supported by a grant (11CTIPC02) from the Construction Technology Innovation Program (CTIP) funded by the Ministry of Land, Transport and Maritime Affairs of the Korean government.

  5. Threshold partitioning of sparse matrices and applications to Markov chains

    Choi, Hwajeong; Szyld, D.B. [Temple Univ., Philadelphia, PA (United States)]

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) Classical block methods, such as Block Gauss Seidel. (2) Preconditioned GMRES, where a block diagonal preconditioner is used. (3) Iterative aggregation method (also called aggregation/disaggregation) where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and thus adding little computational effort to the overall solution.
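
    A hedged miniature of threshold partitioning (not PABLO itself): treat entries above a threshold as edges, take connected components as blocks, and permute so each block is contiguous on the diagonal:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Small chain whose states 0,2 and 1,3 are strongly coupled pairs.
P = np.array([[0.70, 0.00, 0.30, 0.00],
              [0.01, 0.50, 0.00, 0.49],
              [0.40, 0.00, 0.60, 0.00],
              [0.00, 0.50, 0.01, 0.49]])

# Keep only entries of large magnitude, symmetrized, as graph edges.
mask = csr_matrix(((P > 0.1) | (P.T > 0.1)).astype(int))
n_blocks, labels = connected_components(mask, directed=False)

# Symmetric permutation making each block contiguous (dense diagonal blocks).
perm = np.argsort(labels)
print(labels)
print(P[np.ix_(perm, perm)])
```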

  6. Markov chain Monte Carlo methods: an introductory example

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may be hindered currently by the difficulty to assess the convergence of MCMC output and thus to assure the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
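
    The few lines alluded to look roughly as follows; the Gaussian log-target is an illustrative stand-in for the metrology posterior, and the proposal scale is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    """Unnormalized log posterior; a Gaussian stands in for the real example."""
    return -0.5 * ((x - 1.0) / 0.5)**2

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0.0, 0.8)                  # random-walk proposal
    # Metropolis-Hastings acceptance for a symmetric proposal.
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

burn = 2000                                          # discard warm-up samples
print(np.mean(samples[burn:]), np.std(samples[burn:]))
```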

  7. Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions

    Stepchenko Arthur

    2015-12-01

    Time series of earth observation based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and for managing various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. In this paper, we make a one-step-ahead prediction of 7-daily time series of the NDVI index using Markov chains. The choice of a Markov chain is due to the fact that a Markov chain is a sequence of random variables in which each variable occupies some state, and the chain specifies the probabilities of moving from one state to another.

  8. The evolution of tax evasion in the Czech Republic: a Markov chain analysis

    Hanousek, Jan; Palda, F.

    Bern: Peter Lang, 2007 - (Hayoz, N.; Hug, S.), pp. 327-360. ISBN 978-3-03910-651-6. Institutional research plan: CEZ:MSM0021620846. Keywords: tax evasion * Markov chain analysis * Czech Republic. Subject RIV: AH - Economics

  9. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  10. Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains

    Han, Guangyue

    2008-01-01

    We derive an asymptotic formula for entropy rate of a hidden Markov chain around a "weak Black Hole". We also discuss applications of the asymptotic formula to the asymptotic behaviors of certain channels.

  11. User’s manual for basic version of MCnest Markov chain nest productivity model

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  12. A comparison of strategies for Markov chain Monte Carlo computation in quantitative genetics

    Waagepetersen, Rasmus; Ibanez-Escriche, Noelia; Sorensen, Daniel

    2008-01-01

    In quantitative genetics, Markov chain Monte Carlo (MCMC) methods are indispensable for statistical inference in non-standard models like generalized linear models with genetic random effects or models with genetically structured variance heterogeneity. A particular challenge for MCMC applications...

  13. On finding the fundamental matrix of finite state homogeneous Markov chains in special case

    Gaiduk, A. N.

    2010-01-01

    For a finite-state homogeneous Markov chain with a circulant transition matrix, describing a shift register that clocks 1 or 2 times with probabilities p and q, we have found the fundamental matrix. From the fundamental matrix we derive the hitting time matrix.
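
    The general recipe behind such results, sketched for a small circulant example (this is the standard Kemeny-Snell construction, not necessarily the paper's closed form):

```python
import numpy as np

# Circulant chain on n states: from state i the register shifts to i+1 with
# probability p or to i+2 with probability q = 1 - p (indices mod n).
n, p = 5, 0.7
q = 1.0 - p
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = p
    P[i, (i + 2) % n] = q

pi = np.full(n, 1.0 / n)               # uniform: P is doubly stochastic
W = np.tile(pi, (n, 1))
Z = np.linalg.inv(np.eye(n) - P + W)   # fundamental matrix

# Mean hitting times m_ij = (Z_jj - Z_ij) / pi_j for i != j;
# the diagonal is zero (mean return times are 1 / pi_j = n).
M = (np.diag(Z)[None, :] - Z) / pi[None, :]
print(np.round(M, 3))
```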

  14. Markov chains with transition delta-matrix: ergodicity conditions, invariant probability measures and applications

    Lev Abolnikov

    1991-01-01

    A large class of Markov chains with so-called Δm,n- and Δ′m,n-transition matrices ("delta-matrices"), which frequently occur in applications (queues, inventories, dams), is analyzed.

  15. Maps of sparse Markov chains efficiently reveal community structure in network flows with memory

    Persson, Christian; Edler, Daniel; Rosvall, Martin

    2016-01-01

    To better understand the flows of ideas or information through social and biological systems, researchers develop maps that reveal important patterns in network flows. In practice, network flow models have implied memoryless first-order Markov chains, but recently researchers have introduced higher-order Markov chain models with memory to capture patterns in multi-step pathways. Higher-order models are particularly important for effectively revealing actual, overlapping community structure, but higher-order Markov chain models suffer from the curse of dimensionality: their vast parameter spaces require exponentially increasing data to avoid overfitting and therefore make mapping inefficient already for moderate-sized systems. To overcome this problem, we introduce an efficient cross-validated mapping approach based on network flows modeled by sparse Markov chains. To illustrate our approach, we present a map of citation flows in science with research fields that overlap in multidisciplinary journals. Compared...

  16. Continuous-time block-monotone Markov chains and their block-augmented truncations

    Masuyama, Hiroyuki

    2015-01-01

    This paper considers continuous-time block-monotone Markov chains (BMMCs) and their block-augmented truncations. We first introduce the block-monotonicity and block-wise dominance relation for continuous-time Markov chains and then provide some fundamental results on the two notions. Using these results, we show that the stationary probability vectors obtained by the block-augmented truncation converge to the stationary probability vector of the original BMMC. We also show that the last-colum...

  17. Dynamic temperature selection for parallel-tempering in Markov chain Monte Carlo simulations

    Vousden, Will; Farr, Will M.; Mandel, Ilya

    2015-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multi-modal probability distributions. Most popular methods, such as Markov chain Monte Carlo sampling, perform poorly on strongly multi-modal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versio...

  18. Characterizing the Aperiodicity of Irreducible Markov Chains by Using P Systems

    Cardona, Mónica; Colomer, M. Angels; Pérez Jiménez, Mario de Jesús

    2009-01-01

    It is well known that any irreducible and aperiodic Markov chain has exactly one stationary distribution, and for any arbitrary initial distribution, the sequence of distributions at time n converges to the stationary distribution; that is, the Markov chain approaches equilibrium as n → ∞. In this paper, a characterization of the aperiodicity in existential terms of some state is given. At the same time, a P system with external output is associated with any irreducible ...

  19. THE TRANSITION PROBABILITY MATRIX OF A MARKOV CHAIN MODEL IN AN ATM NETWORK

    YUE Dequan; ZHANG Huachen; TU Fengsheng

    2003-01-01

    In this paper we consider a Markov chain model in an ATM network, which has been studied by Dag and Stavrakakis. On the basis of the iterative formulas obtained by Dag and Stavrakakis, we obtain the explicit analytical expression of the transition probability matrix. It is very simple to calculate the transition probabilities of the Markov chain by these expressions. In addition, we obtain some results about the structure of the transition probability matrix, which are helpful in numerical calculation and theoretical analysis.

  20. Markov chain modeling of evolution of strains in reinforced concrete flexural beams

    Anoop, M. B.; Balaji Rao, K.; Lakshmanan, N.; Raghuprasad, B. K.

    2012-01-01

    From the analysis of experimentally observed variations in surface strains with loading in reinforced concrete beams, it is noted that there is a need to consider the evolution of strains (with loading) as a stochastic process. Use of Markov Chains for modeling stochastic evolution of strains with loading in reinforced concrete flexural beams is studied in this paper. A simple, yet practically useful, bi-level homogeneous Gaussian Markov Chain (BLHGMC) model is proposed for determining the st...

  1. Mixing Times of Markov Chains on Degree Constrained Orientations of Planar Graphs

    Felsner, Stefan; Heldt, Daniel

    2016-01-01

    We study Markov chains for α-orientations of plane graphs; these are orientations where the outdegree of each vertex is prescribed by the value of a given function α. The set of α-orientations of a plane graph has a natural distributive lattice structure. The moves of the up-down Markov chain on this distributive lattice correspond to reversals of directed facial cycles in the α-orientation. We have a positive and several negative results regarding the mixing time...

  2. Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics

    This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and quantization which is applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst case Markov-chain process, upon which a robust filter/predictor design is based

  3. Markov Chain Computation for Homogeneous and Non-homogeneous Data: MARCH 1.1 Users Guide

    Andre Berchtold

    2001-03-01

    MARCH is a free software package for the computation of different types of Markovian models including homogeneous Markov chains, Hidden Markov Models (HMMs) and Double Chain Markov Models (DCMMs). The main characteristic of this software is the implementation of a powerful optimization method for HMMs and DCMMs combining a genetic algorithm with the standard Baum-Welch procedure. MARCH is distributed as a set of Matlab functions running under Matlab 5 or higher on any computing platform. A PC Windows version running independently of Matlab is also available.

  4. A Markov chain Monte Carlo analysis of the CMSSM

    We perform a comprehensive exploration of the Constrained MSSM parameter space employing a Markov chain Monte Carlo technique and a Bayesian analysis. We compute superpartner masses and other collider observables, as well as the cold dark matter abundance, and compare them with experimental data. We include uncertainties arising from theoretical approximations as well as from residual experimental errors of relevant Standard Model parameters. We delineate probability distributions of the CMSSM parameters, the collider and cosmological observables, as well as the dark matter direct detection cross section. The 68% probability intervals include m1/2 above 0.52 TeV, ranges for m0 and for the gluino, right squark and lightest chargino masses, BR(Bs→μ+μ-) between roughly 10^-9 and 10^-8, a SUSY contribution to (g-2)μ of order 1.9 x 10^-10, and 1 x 10^-10 pb < σSIp < 10^-8 pb for direct WIMP detection. We highlight a complementarity between LHC and WIMP dark matter searches in exploring the CMSSM parameter space. We further expose a number of correlations among the observables, in particular between BR(Bs→μ+μ-) and BR(B̄→Xsγ) or σSIp. Once SUSY is discovered, this and other correlations may prove helpful in distinguishing the CMSSM from other supersymmetric models. We investigate the robustness of our results in terms of the assumed ranges of CMSSM parameters and the effect of the (g-2)μ anomaly, which shows some tension with the other observables. We find that the results for m0, and the observables which strongly depend on it, are sensitive to our assumptions, while our conclusions for the other variables are robust

  5. Dynamic temperature selection for parallel tempering in Markov chain Monte Carlo simulations

    Vousden, W. D.; Farr, W. M.; Mandel, I.

    2016-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multimodal probability distributions. Most popular methods, such as MCMC sampling, perform poorly on strongly multimodal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve more efficient sampling, as measured by the autocorrelation time of the sampler. In particular, we present a simple, easily implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling. This algorithm dynamically adjusts the temperature spacing to achieve a uniform rate of exchanges between chains at neighbouring temperatures. We compare the algorithm to conventional geometric temperature configurations on a number of test distributions and on an astrophysical inference problem, reporting efficiency gains by a factor of 1.2-2.5 over a well-chosen geometric temperature configuration and by a factor of 1.5-5 over a poorly chosen configuration. On all of these problems, a sampler using the dynamical adaptations to achieve uniform acceptance ratios between neighbouring chains outperforms one that does not.
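
    A condensed sketch of this style of adaptation: stochastic-approximation updates of the log temperature spacings nudge neighbouring swap rates toward each other, with a gain decaying as 1/t. The bimodal toy target, gain, and ladder are illustrative, and this simplification omits details of the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Toy bimodal target with well-separated modes."""
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

n_chains, n_steps, kappa = 5, 20000, 1.0
betas = 1.0 / np.logspace(0, 2, n_chains)   # initial geometric ladder, T in [1, 100]
x = np.zeros(n_chains)

for t in range(1, n_steps + 1):
    # Metropolis update within each tempered chain.
    for i in range(n_chains):
        prop = x[i] + rng.normal(0.0, 1.5)
        if np.log(rng.random()) < betas[i] * (log_target(prop) - log_target(x[i])):
            x[i] = prop
    # Swap attempts between neighbouring temperatures.
    acc = np.zeros(n_chains - 1)
    for i in range(n_chains - 1):
        dlog = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < dlog:
            x[i], x[i + 1] = x[i + 1], x[i]
            acc[i] = 1.0
    # Adapt log spacings so neighbouring swap rates equalize (coldest T fixed;
    # the topmost spacing is left untouched in this simplified version).
    S = np.log(np.diff(1.0 / betas))
    S[:-1] += (kappa / t) * (acc[:-1] - acc[1:])
    betas = 1.0 / np.concatenate(([1.0], 1.0 + np.cumsum(np.exp(S))))

print(np.round(1.0 / betas, 2))   # adapted temperature ladder
```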

  6. 3D+t brain MRI segmentation using robust 4D Hidden Markov Chain.

    Lavigne, François; Collet, Christophe; Armspach, Jean-Paul

    2014-01-01

    In recent years many automatic methods have been developed to help physicians diagnose brain disorders, but the problem remains complex. In this paper we propose a method to segment brain structures on two 3D multi-modal MR images taken at different times (longitudinal acquisition). A bias field correction is performed with an adaptation of the Hidden Markov Chain (HMC) allowing us to take into account the temporal correlation in addition to spatial neighbourhood information. To improve the robustness of the segmentation of the principal brain structures and to detect Multiple Sclerosis Lesions as outliers the Trimmed Likelihood Estimator (TLE) is used during the process. The method is validated on 3D+t brain MR images. PMID:25571045

  7. Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

    Yonghui Dai

    2014-01-01

    The stock index reflects the fluctuation of the stock market. For a long time, there has been a lot of research on stock index forecasting. However, traditional methods are limited in achieving an ideal precision in the dynamic market due to the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network and a Markov chain, as well as its modeling and computing technology. This method includes initial forecasting by the improved BP neural network, division of the Markov state region, computation of the state transition probability matrix, and prediction adjustment. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market.
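
    A hedged sketch of the Markov-chain adjustment stage only (synthetic data replace the index, a noisy copy of the truth stands in for the BP network's initial forecast, and the tercile state division and first-order correction are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
actual = np.cumsum(rng.normal(0.1, 1.0, 300)) + 100.0   # synthetic index
forecast = actual + rng.normal(0.0, 2.0, 300)           # stand-in for BP output

# Divide the relative forecast errors into Markov states (here: terciles).
err = (actual - forecast) / actual
edges = np.quantile(err, [1 / 3, 2 / 3])
states = np.digitize(err, edges)                        # labels 0, 1, 2

# State transition probability matrix from consecutive error states.
k = 3
T = np.zeros((k, k))
for a, b in zip(states, states[1:]):
    T[a, b] += 1
rows = T.sum(axis=1, keepdims=True)
T = np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

# Adjustment: shift the next forecast by the expected relative error,
# using the mean error of each state as its representative value.
centers = np.array([err[states == s].mean() for s in range(k)])
adjusted = forecast[-1] * (1.0 + T[states[-1]] @ centers)
print(adjusted)
```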

  8. Markov chain order estimation with parametric significance tests of conditional mutual information

    Papapetrou, Maria

    2015-01-01

    Besides the different approaches suggested in the literature, accurate estimation of the order of a Markov chain from a given symbol sequence is an open issue, especially when the order is moderately large. Here, parametric significance tests of the conditional mutual information (CMI) of order m, I_c(m), are conducted on a symbol sequence for increasing orders m in order to estimate the true order L of the underlying Markov chain. The CMI of order m is the mutual information of two variables in the Markov chain that are m time steps apart, conditioned on the intermediate variables of the chain. The null distribution of CMI is approximated with a normal and a gamma distribution, deriving analytic expressions of their parameters, and with a gamma distribution deriving its parameters from the mean and variance of the normal distribution. The accuracy of order estimation is assessed with the three parametric tests, and the parametric tests are compared to the randomization significance test and other known ...
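
    A plug-in frequency estimate of the CMI of order m is the starting point for such tests (the parametric null distributions derived in the paper are not reproduced here):

```python
import numpy as np
from collections import Counter

def cmi(seq, m):
    """Plug-in estimate of I(x_t ; x_{t-m} | x_{t-m+1}, ..., x_{t-1})."""
    n = len(seq) - m
    full = Counter(tuple(seq[i:i + m + 1]) for i in range(n))       # (a, w, b)
    left = Counter(tuple(seq[i:i + m]) for i in range(n))           # (a, w)
    right = Counter(tuple(seq[i + 1:i + m + 1]) for i in range(n))  # (w, b)
    mid = Counter(tuple(seq[i + 1:i + m]) for i in range(n))        # w
    I = 0.0
    for word, c in full.items():
        p = c / n
        I += p * np.log(p * (mid[word[1:-1]] / n)
                        / ((left[word[:-1]] / n) * (right[word[1:]] / n)))
    return I

# First-order binary chain: I_c(1) should be clearly positive, I_c(2) near zero.
rng = np.random.default_rng(3)
P = {0: [0.8, 0.2], 1: [0.3, 0.7]}
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(2, p=P[seq[-1]]))
print(cmi(seq, 1), cmi(seq, 2))
```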

  9. Enhancement of Markov chain model by integrating exponential smoothing: A case study on Muslims marriage and divorce

    Jamaluddin, Fadhilah; Rahim, Rahela Abdul

    2015-12-01

    The Markov chain has been used since 1913 for studying the flow of data over consecutive years and for forecasting. The important feature of a Markov chain is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov chain by introducing an Exponential Smoothing technique for developing the appropriate TPM.
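
    One plausible reading of the integration, sketched under assumptions (the paper's exact scheme is not reproduced): exponentially smooth yearly transition-count matrices so that recent years dominate, then row-normalize to get the TPM. All counts below are made up:

```python
import numpy as np

def smoothed_tpm(yearly_counts, alpha=0.4):
    """Blend yearly transition-count matrices with exponential smoothing,
    C_t = alpha * counts_t + (1 - alpha) * C_{t-1}, then row-normalize."""
    C = np.zeros_like(yearly_counts[0], dtype=float)
    for counts in yearly_counts:
        C = alpha * counts + (1.0 - alpha) * C
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

# Hypothetical 2-state (e.g., married/divorced) transition counts per year.
years = [np.array([[50, 5], [2, 20]]),
         np.array([[48, 7], [3, 19]]),
         np.array([[45, 9], [4, 18]])]
print(smoothed_tpm(years))
```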

  10. MARKOV CHAIN-BASED ANALYSIS OF THE DEGREE DISTRIBUTION FOR A GROWING NETWORK

    Hou Zhenting; Tong Jinying; Shi Dinghua

    2011-01-01

    In this article, we focus on discussing the degree distribution of the DMS model from the perspective of probability. On the basis of the concept and technique of first-passage probability in Markov chain theory, we provide a rigorous proof of the existence of the steady-state degree distribution, mathematically re-deriving the exact formula of the distribution. The approach based on Markov chain theory is universal and performs well in a large class of growing networks.