WorldWideScience

Sample records for adaptive Markov chain

  1. On adaptive Markov chain Monte Carlo algorithms

    OpenAIRE

    Atchadé, Yves F.; Rosenthal, Jeffrey S.

    2005-01-01

    We look at adaptive Markov chain Monte Carlo algorithms that generate stochastic processes based on sequences of transition kernels, where each transition kernel is allowed to depend on the history of the process. We show under certain conditions that the stochastic process generated is ergodic, with appropriate stationary distribution. We use this result to analyse an adaptive version of the random walk Metropolis algorithm where the scale parameter σ is sequentially adapted using a Robbins-...
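    The scale adaptation described in this abstract can be illustrated in a few lines. The sketch below tunes log σ with a diminishing (Robbins-Monro) gain toward a fixed acceptance rate; the 0.44 target, the n^(-0.6) gain schedule, and the standard normal example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_steps, target_accept=0.44, seed=0):
    """Random walk Metropolis whose proposal scale is adapted on the fly
    with a Robbins-Monro recursion on log(sigma) (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = x0
    log_sigma = 0.0                      # start from sigma = 1
    samples = np.empty(n_steps)
    for n in range(n_steps):
        y = x + np.exp(log_sigma) * rng.standard_normal()
        accept_prob = min(1.0, np.exp(log_target(y) - log_target(x)))
        if rng.random() < accept_prob:
            x = y
        # Diminishing gain: the adaptation vanishes as n grows, one of the
        # standard conditions for preserving ergodicity.
        log_sigma += (accept_prob - target_accept) / (n + 1) ** 0.6
        samples[n] = x
    return samples, np.exp(log_sigma)

# Sample a standard normal target.
samples, sigma = adaptive_rwm(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
```

    Because the gain decays, the adaptation becomes negligible over time, which is the intuition behind the ergodicity conditions the paper establishes.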

  2. An Adaptively Constructed Algebraic Multigrid Preconditioner for Irreducible Markov Chains

    OpenAIRE

    Brannick, James; Kahl, Karsten; Sokolovic, Sonja

    2014-01-01

    The computation of stationary distributions of Markov chains is an important task in the simulation of stochastic models. The linear systems arising in such applications involve non-symmetric M-matrices, making algebraic multigrid methods a natural choice for solving these systems. In this paper we investigate extensions and improvements of the bootstrap algebraic multigrid framework for solving these systems. This is achieved by reworking the bootstrap setup process to use singular vectors i...

  3. Graphs: Associated Markov Chains

    OpenAIRE

    Murthy, Garimella Rama

    2012-01-01

    In this research paper, weighted / unweighted, directed / undirected graphs are associated with interesting Discrete Time Markov Chains (DTMCs) as well as Continuous Time Markov Chains (CTMCs). The equilibrium / transient behaviour of such Markov chains is studied. Also entropy dynamics (Shannon entropy) of certain structured Markov chains is investigated. Finally certain structured graphs and the associated Markov chains are studied.

  4. Adaptive continuous time Markov chain approximation model to general jump-diffusions

    OpenAIRE

    Mario Cerrato; Chia Chun Lo; Konstantinos Skindilias

    2011-01-01

    We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expa...

  5. Discrete Quantum Markov Chains

    CERN Document Server

    Faigle, Ulrich

    2010-01-01

    A framework for finite-dimensional quantum Markov chains on Hilbert spaces is introduced. Quantum Markov chains generalize both classical Markov chains with possibly hidden states and existing models of quantum walks on finite graphs. Quantum Markov chains are based on Markov operations that may be applied to quantum systems and include quantum measurements, for example. It is proved that quantum Markov chains are asymptotically stationary and hence possess ergodic and entropic properties. With a quantum Markov chain one may associate a quantum Markov process, which is a stochastic process in the classical sense. Generalized Markov chains allow a representation with respect to a generalized Markov source model with definite (but possibly hidden) states relative to which observables give rise to classical stochastic processes. It is demonstrated that this model allows for observables to violate Bell's inequality.

  6. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Hyman, James M [Los Alamos National Laboratory]; Robinson, Bruce A [Los Alamos National Laboratory]; Higdon, Dave [Los Alamos National Laboratory]; Ter Braak, Cajo J F [NETHERLANDS]; Diks, Cees G H [UNIV OF AMSTERDAM]

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
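    The proposal mechanism at the core of DREAM can be sketched as a plain differential-evolution Metropolis sweep, in which each chain jumps along the difference of two other randomly chosen chains. This is a minimal illustration of the idea, not the full DREAM algorithm (no randomized subspaces, no crossover adaptation); the jump scale 2.38/√(2d) and the small noise term are standard DE-MC choices.

```python
import numpy as np

def demc_step(chains, log_target, rng):
    """One sweep of Differential Evolution Markov Chain: each chain proposes
    x' = x + gamma * (x_r1 - x_r2) + eps using two other randomly chosen
    chains, then accepts or rejects with the Metropolis rule."""
    n, d = chains.shape
    gamma = 2.38 / np.sqrt(2 * d)        # standard DE-MC jump scale
    new = chains.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        r1, r2 = rng.choice(others, size=2, replace=False)
        prop = new[i] + gamma * (new[r1] - new[r2]) \
               + 1e-4 * rng.standard_normal(d)
        if np.log(rng.random()) < log_target(prop) - log_target(new[i]):
            new[i] = prop
    return new

# One update of 8 chains on a 2-D standard normal target.
rng = np.random.default_rng(0)
chains = demc_step(rng.standard_normal((8, 2)),
                   lambda x: -0.5 * float(x @ x), rng)
```

    Because the jump direction comes from the population itself, the proposal automatically matches the scale and orientation of the posterior as the chains spread through it.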

  7. Fields From Markov Chains

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.

  8. Adaptive relaxation for the steady-state analysis of Markov chains

    Science.gov (United States)

    Horton, Graham

    1994-01-01

    We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
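    For reference, the non-adaptive baseline that this paper varies can be sketched as ordinary Gauss-Seidel sweeps on the stationarity equations π = πP, updating each state in a fixed order with the latest values of the others; the example chain below is hypothetical, not from the paper.

```python
import numpy as np

def gauss_seidel_stationary(P, n_sweeps=200):
    """Stationary distribution of a finite Markov chain by Gauss-Seidel:
    solve pi = pi P one state at a time, i.e.
    pi_i (1 - P_ii) = sum_{j != i} pi_j P_ji,
    always using the most recently updated values (non-adaptive baseline)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(n_sweeps):
        for i in range(n):
            pi[i] = sum(pi[j] * P[j, i] for j in range(n) if j != i) / (1.0 - P[i, i])
        pi /= pi.sum()                   # renormalize each sweep
    return pi

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
pi = gauss_seidel_stationary(P)          # approximately [0.6, 0.3, 0.1]
```

    The adaptive variant in the paper replaces the fixed visiting order with a dynamically maintained set of states, concentrating work where the residual is largest.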

  9. Fuzzy Markov chains: uncertain probabilities

    OpenAIRE

    James J. Buckley; Eslami, Esfandiar

    2002-01-01

    We consider finite Markov chains where there are uncertainties in some of the transition probabilities. These uncertainties are modeled by fuzzy numbers. Using a restricted fuzzy matrix multiplication we investigate the properties of regular and absorbing fuzzy Markov chains and show that the basic properties of these classical Markov chains generalize to fuzzy Markov chains.

  10. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  11. Phasic Triplet Markov Chains.

    Science.gov (United States)

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data. PMID:26353069

  12. Markov chain Monte Carlo test of toric homogeneous Markov chains

    OpenAIRE

    Takemura, Akimichi; Hara, Hisayuki

    2010-01-01

    Markov chain models are used in various fields, such as behavioral sciences or econometrics. Although the goodness of fit of the model is usually assessed by large-sample approximation, it is desirable to use conditional tests if the sample size is not large. We study Markov bases for performing conditional tests of the toric homogeneous Markov chain model, which is the envelope exponential family for the usual homogeneous Markov chain model. We give a complete description of a Markov basis for ...

  13. Putting Markov Chains Back into Markov Chain Monte Carlo

    OpenAIRE

    Barker, Richard J.; Schofield, Matthew R.

    2007-01-01

    Markov chain theory plays an important role in statistical inference, both in the formulation of models for data and in the construction of efficient algorithms for inference. The use of Markov chains in modeling data has a long history; however, the use of Markov chain theory in developing algorithms for statistical inference has only become popular recently. Using mark-recapture models as an illustration, we show how Markov chains can be used for developing demographic models and also ...

  14. Variance bounding Markov chains

    OpenAIRE

    Roberts, Gareth O.; Jeffrey S. Rosenthal

    2008-01-01

    We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.

  15. An adaptive Monte-Carlo Markov chain algorithm for inference from mixture signals

    International Nuclear Information System (INIS)

    Adaptive Metropolis (AM) is a powerful recent algorithmic tool in numerical Bayesian data analysis. AM builds on a well-known Markov Chain Monte Carlo algorithm but optimizes the rate of convergence to the target distribution by automatically tuning the design parameters of the algorithm on the fly. Label switching is a major problem in inference on mixture models because of the invariance to symmetries. The simplest (non-adaptive) solution is to modify the prior in order to make it select a single permutation of the variables, introducing an identifiability constraint. This solution is known to cause artificial biases by not respecting the topology of the posterior. In this paper we describe an online relabeling procedure which can be incorporated into the AM algorithm. We give elements of convergence of the algorithm and identify the link between its modified target measure and the original posterior distribution of interest. We illustrate the algorithm on a synthetic mixture model inspired by the muonic water Cherenkov signal of the surface detectors in the Pierre Auger Experiment.

  16. On Markov Chains and Filtrations

    OpenAIRE

    Spreij, Peter

    1997-01-01

    In this paper we rederive some well known results for continuous time Markov processes that live on a finite state space. Martingale techniques are used throughout the paper. Special attention is paid to the construction of a continuous time Markov process, when we start from a discrete time Markov chain. The Markov property here holds with respect to filtrations that need not be minimal.

  17. Markov chains theory and applications

    CERN Document Server

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest. The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the

  18. Quadratic Variation by Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume

    We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...

  19. Bibliometric Application of Markov Chains.

    Science.gov (United States)

    Pao, Miranda Lee; McCreery, Laurie

    1986-01-01

    A rudimentary description of Markov Chains is presented in order to introduce its use to describe and to predict authors' movements among subareas of the discipline of ethnomusicology. Other possible applications are suggested. (Author)

  20. Hidden hybrid Markov/semi-Markov chains.

    OpenAIRE

    GUÉDON, YANN

    2005-01-01

    Models that combine Markovian states with implicit geometric state occupancy distributions and semi-Markovian states with explicit state occupancy distributions are investigated. This type of model retains the flexibility of hidden semi-Markov chains ...

  1. Compressing redundant information in Markov chains

    OpenAIRE

    Aletti, Giacomo

    2006-01-01

    Given a strongly stationary Markov chain and a finite set of stopping rules, we prove the existence of a polynomial algorithm which projects the Markov chain onto a minimal Markov chain without redundant information. Markov complexity is hence defined and tested on some classical problems.

  2. DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems

    Directory of Open Access Journals (Sweden)

    J. A. Vrugt

    2011-04-01

    Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and the emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a sudoku puzzle and a rainfall-runoff model calibration problem are used to illustrate DREAM(D).

  3. Beyond Markov Chains, Towards Adaptive Memristor Network-based Music Generation

    OpenAIRE

    Gale, Ella; Matthews, Oliver; Costello, Ben de Lacy; Adamatzky, Andrew

    2013-01-01

    We undertook a study of the use of a memristor network for music generation, making use of the memristor's memory to go beyond the Markov hypothesis. Seed transition matrices are created and populated using memristor equations, and are shown to generate musical melodies and to change in style over time as a result of feedback into the transition matrix. The spiking properties of simple memristor networks are demonstrated and discussed with reference to applications of music making. The lim...

  4. On a Result for Finite Markov Chains

    Science.gov (United States)

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…

  5. Intricacies of Dependence between Components of Multivariate Markov Chains: Weak Markov Consistency and Markov Copulae

    OpenAIRE

    Bielecki, Tomasz R.; Jakubowski, Jacek; Niewęgłowski, Mariusz

    2011-01-01

    This article continues our study of Markovian consistency and Markov copulae. In particular, we characterize the weak Markovian consistency for finite Markov chains. We discuss some aspects of dependence between the components of a multivariate Markov chain in the context of weak Markovian consistency and strong Markovian consistency. In this connection, we also introduce and discuss the concept of weak Markov copulae.

  6. Bayesian M-T clustering for reduced parameterisation of Markov chains used for non-linear adaptive elements

    Czech Academy of Sciences Publication Activity Database

    Valečková, Markéta; Kárný, Miroslav; Sutanto, E. L.

    2001-01-01

    Vol. 37, No. 6 (2001), pp. 1071-1078. ISSN 0005-1098. R&D Projects: GA ČR GA102/99/1564. Other grants: IST(XE) 1999/12058. Institutional research plan: AV0Z1075907. Keywords: Markov chain * clustering * Bayesian mixture estimation. Subject RIV: BC - Control Systems Theory. Impact factor: 1.449, year: 2001

  7. Spectral methods for quantum Markov chains

    International Nuclear Information System (INIS)

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  8. Using Games to Teach Markov Chains

    Science.gov (United States)

    Johnson, Roger W.

    2003-01-01

    Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
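    Expected game length for such absorbing-chain games comes from the fundamental matrix N = (I - Q)^{-1}: the vector of expected steps to absorption is t = N·1. A toy example (not one of the games analyzed in the article):

```python
import numpy as np

def expected_game_length(Q):
    """Expected number of steps to absorption from each transient state,
    t = N 1 with fundamental matrix N = (I - Q)^{-1}; Q is the
    transient-to-transient block of the transition matrix."""
    n = Q.shape[0]
    return np.linalg.inv(np.eye(n) - Q) @ np.ones(n)

# Toy game: from each of 3 squares, advance with prob. 1/2, stay otherwise;
# leaving the last square ends the game (absorbing finish state).
Q = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5]])
t = expected_game_length(Q)              # -> [6., 4., 2.]
```

    Each square takes a geometric number of turns (mean 2) to leave, so the expected game length from the start is 3 × 2 = 6, matching t[0].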

  9. Transition Probability Estimates for Reversible Markov Chains

    OpenAIRE

    Telcs, Andras

    2000-01-01

    This paper provides transition probability estimates of transient reversible Markov chains. The key condition of the result is the spatial symmetry and polynomial decay of the Green's function of the chain.

  10. Revisiting Causality in Markov Chains

    CERN Document Server

    Shojaee, Abbas

    2016-01-01

    Identifying causal relationships is a key premise of scientific research. The growth of observational data in different disciplines along with the availability of machine learning methods offers the possibility of using an empirical approach to identifying potential causal relationships, to deepen our understandings of causal behavior and to build theories accordingly. Conventional methods of causality inference from observational data require a considerable length of time series data to capture cause-effect relationships. We find that potential causal relationships can be inferred from the composition of one step transition rates to and from an event. Also known as a Markov chain, one step transition rates are a commonly available resource in different scientific disciplines. Here we introduce a simple, effective and computationally efficient method that we termed 'Causality Inference using Composition of Transitions' (CICT) to reveal causal structure with high accuracy. We characterize the differences in causes,...

  11. REPRESENTING MARKOV CHAINS WITH TRANSITION DIAGRAMS

    Directory of Open Access Journals (Sweden)

    Farida Kachapova

    2013-01-01

    Stochastic processes have many useful applications and are taught in several university programmes. Students often encounter difficulties in learning stochastic processes and Markov chains, in particular. In this article we describe a teaching strategy that uses transition diagrams to represent a Markov chain and to re-define properties of its states in simple terms of directed graphs. This strategy utilises the students’ intuition and makes the learning of complex concepts about Markov chains faster and easier. The method is illustrated by worked examples. The described strategy helps students to master properties of finite Markov chains, so they have a solid basis for the study of infinite Markov chains and other stochastic processes.
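    The graph view described here is easy to mechanize: treat the transition diagram as a directed graph, compute reachability, and group mutually reachable states into communicating classes. A small sketch (the example chain is hypothetical, not from the article):

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a finite chain into communicating classes by
    computing reachability on its transition diagram (Warshall's algorithm):
    i and j communicate iff each is reachable from the other."""
    n = P.shape[0]
    R = (P > 0) | np.eye(n, dtype=bool)  # R[i, j]: j reachable from i
    for k in range(n):
        for i in range(n):
            if R[i, k]:
                R[i] |= R[k]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = [j for j in range(n) if R[i, j] and R[j, i]]
            classes.append(cls)
            seen.update(cls)
    return classes

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.25, 0.25],
              [0.0, 0.0, 1.0]])
print(communicating_classes(P))          # [[0, 1], [2]]
```

    Here states 0 and 1 communicate, while state 2 is absorbing and forms its own (closed) class, which is exactly what the transition diagram shows at a glance.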

  12. Entropy Rate for Hidden Markov Chains with rare transitions

    OpenAIRE

    Peres, Yuval; Quas, Anthony

    2010-01-01

    We consider Hidden Markov Chains obtained by passing a Markov Chain with rare transitions through a noisy memoryless channel. We obtain asymptotic estimates for the entropy of the resulting Hidden Markov Chain as the transition rate is reduced to zero.

  13. Estimating hidden semi-Markov chains from discrete sequences.

    OpenAIRE

    Guédon, Yann

    2003-01-01

    This article addresses the estimation of hidden semi-Markov chains from nonstationary discrete sequences. Hidden semi-Markov chains are particularly useful to model the succession of homogeneous zones or segments along sequences. A discrete hidden semi-Markov chain is composed of a nonobservable state process, which is a semi-Markov chain, and a discrete output process. Hidden semi-Markov chains generalize hidden Markov chains and enable the modeling of various durat...

  14. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  15. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec...

  16. Generalized crested products of Markov chains

    CERN Document Server

    D'Angeli, Daniele

    2010-01-01

    We define a finite Markov chain, called generalized crested product, which naturally appears as a generalization of the first crested product of Markov chains. A complete spectral analysis is developed and the k-step transition probability is given. It is important to remark that this Markov chain describes a more general version of the classical Ehrenfest diffusion model. As a particular case, one gets a generalization of the classical Insect Markov chain defined on the ultrametric space. Finally, an interpretation in terms of representation group theory is given, by showing the correspondence between the spectral decomposition of the generalized crested product and the Gelfand pairs associated with the generalized wreath product of permutation groups.

  17. Interacting Particle Markov Chain Monte Carlo

    OpenAIRE

    Rainforth, Tom; Naesseth, Christian A.; Lindsten, Fredrik; Paige, Brooks; van de Meent, Jan-Willem; Doucet, Arnaud; Wood, Frank

    2016-01-01

    We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method that introduces a coupling between multiple standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent total computational budget. An additional advant...

  18. Quantum Markov Chain Mixing and Dissipative Engineering

    DEFF Research Database (Denmark)

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state of the system ... (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical ... Finally, we consider three independent tasks of dissipative engineering: dissipatively preparing a maximally entangled state of two atoms trapped in an optical cavity, dissipative preparation of graph states, and dissipative quantum computing construction.

  19. Stationary Probability Vectors of Higher-order Markov Chains

    OpenAIRE

    Li, Chi-Kwong; Zhang, Shixiao

    2013-01-01

    We consider higher-order Markov chains, and characterize the second-order Markov chains admitting every probability distribution vector as a stationary vector. The result is used to construct Markov chains of higher order with the same property. We also study conditions under which the set of stationary vectors of the Markov chain has a certain affine dimension.

  20. Markov chains and decision processes for engineers and managers

    CERN Document Server

    Sheskin, Theodore J

    2010-01-01

    Markov Chain Structure and Models: Historical Note; States and Transitions; Model of the Weather; Random Walks; Estimating Transition Probabilities; Multiple-Step Transition Probabilities; State Probabilities after Multiple Steps; Classification of States; Markov Chain Structure; Markov Chain Models; Problems; References. Regular Markov Chains: Steady State Probabilities; First Passage to a Target State; Problems; References. Reducible Markov Chains: Canonical Form of the Transition Matrix; Th...

  1. Analysis of a quantum Markov chain

    International Nuclear Information System (INIS)

    A quantum chain is analogous to a classical stationary Markov chain except that the probability measure is replaced by a complex amplitude measure and the transition probability matrix is replaced by a transition amplitude matrix. After considering the general situation, we study a particular example of a quantum chain whose transition amplitude matrix has the form of a Dirichlet matrix. Such matrices generate a discrete analog of the usual continuum Feynman amplitude. We then compute the probability distribution for these quantum chains

  2. Conditional Markov Chains Part II: Consistency and Copulae

    OpenAIRE

    Bielecki, Tomasz R.; Jakubowski, Jacek; Niewęgłowski, Mariusz

    2015-01-01

    In this paper we continue the study of conditional Markov chains (CMCs) with finite state spaces that we initiated in Bielecki, Jakubowski and Niewęgłowski (2015). Here, we turn our attention to the study of Markov consistency and Markov copulae with regard to CMCs, and thus we follow up on the study of Markov consistency and Markov copulae for ordinary Markov chains that we presented in Bielecki, Jakubowski and Niewęgłowski (2013).

  3. Markov chains for testing redundant software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
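    The estimation step described here (observing the programs under test and estimating the transition probabilities between error states) reduces to counting observed transitions and normalizing each row. A sketch; the sequence below is an illustrative stand-in for real test observations:

```python
import numpy as np

def estimate_transition_matrix(states, n_states):
    """Maximum-likelihood transition probabilities from one observed state
    sequence: count each i -> j transition, then normalize every row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Rows never visited get all-zero probabilities instead of NaN.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

seq = [0, 0, 1, 0, 1, 1, 0, 0, 1]        # illustrative observed state sequence
P_hat = estimate_transition_matrix(seq, 2)
# P_hat[0] == [0.4, 0.6]; P_hat[1] == [2/3, 1/3]
```

    Confidence intervals for each estimated entry (as used in the experiment) then follow from the per-row transition counts.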

  4. Entropy Computation in Partially Observed Markov Chains

    Science.gov (United States)

    Desbouvries, François

    2006-11-01

    Let X = {X_n} be a hidden process and Y = {Y_n} an observed process. We assume that (X, Y) is a (pairwise) Markov chain (PMC). PMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient parameter estimation and Bayesian restoration algorithms. In this paper we propose a fast (i.e., O(N)) algorithm for computing the entropy of X_0, ..., X_N given an observation sequence y_0, ..., y_N.

  5. Markov Chain Approximations to Singular Stable-like Processes

    OpenAIRE

    Xu, Fangjun

    2012-01-01

    We consider the Markov chain approximations for singular stable-like processes. First we obtain properties of some Markov chains. Then we construct the approximating Markov chains and give a necessary condition for weak convergence of these chains to singular stable-like processes.

  6. Differential evolution Markov chain with snooker updater and fewer chains

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Ter Braak, Cajo J F [NON LANL

    2008-01-01

    Differential Evolution Markov Chain (DE-MC) is an adaptive MCMC algorithm in which multiple chains are run in parallel. Standard DE-MC requires at least N=2d chains to be run in parallel, where d is the dimensionality of the posterior. This paper extends DE-MC with a snooker updater and shows, by simulation and real examples, that DE-MC can work for d up to 50-100 with fewer parallel chains (e.g. N=3) by exploiting information from the past of the chains: jumps are generated from differences of pairs of past states. This approach extends the practical applicability of DE-MC and is shown to be about 5-26 times more efficient than the optimal Normal random walk Metropolis sampler for the 97.5% point of a variable from a 25-50 dimensional Student t_3 distribution. In a nonlinear mixed effects model example the approach outperformed a block updater geared to the specific features of the model.
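
    A minimal sketch of the core move, jumps generated from differences of pairs of past states, with N=3 parallel chains. The standard-normal target, dimension and tuning constants are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * np.sum(x**2)   # assumed target: standard normal

d, n_chains, n_iters = 2, 3, 2000
gamma = 2.38 / np.sqrt(2 * d)                # usual DE-MC jump scale
chains = rng.normal(size=(n_chains, d))      # initial states
archive = [chains.copy()]                    # past states of all chains
samples = []

for _ in range(n_iters):
    past = np.concatenate(archive)
    for i in range(n_chains):
        # Jump from the difference of two distinct past states, plus small noise
        z1, z2 = past[rng.choice(len(past), size=2, replace=False)]
        prop = chains[i] + gamma * (z1 - z2) + 1e-4 * rng.normal(size=d)
        if np.log(rng.random()) < log_target(prop) - log_target(chains[i]):
            chains[i] = prop
    archive.append(chains.copy())
    samples.append(chains.copy())

draws = np.concatenate(samples[n_iters // 2:])   # keep the second half
```

    Sampling the difference pair from the full past, rather than only from the current states of the other chains, is what allows N to stay as small as 3.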

  7. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, the discrete time Markov chain (DTMC) and the continuous time Markov chain (CTMC). We also discuss how to find the steady state distributions of these Markov chains and how they can be used to compute system performance metrics. The solution methodologies include a balance equation technique, limiting probabilities...
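
    The balance-equation technique mentioned above amounts to solving pi P = pi together with sum(pi) = 1; a minimal sketch for a hypothetical two-state DTMC:

```python
import numpy as np

def stationary(P):
    """Solve the balance equations pi P = pi with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) pi = 0, 1^T pi = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)     # least squares on the stacked system
    return pi

# Hypothetical two-state chain
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary(P)   # -> approximately [5/6, 1/6]
```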

  8. Markov chains with quasitoeplitz transition matrix

    Directory of Open Access Journals (Sweden)

    Alexander M. Dukhovny

    1989-01-01

    This paper investigates a class of Markov chains which are frequently encountered in various applications (e.g. queueing systems, dams and inventories with feedback). Generating functions of transient and steady state probabilities are found by solving a special Riemann boundary value problem on the unit circle. A criterion of ergodicity is established.

  9. Markov Chains with Stochastically Stationary Transition Probabilities

    OpenAIRE

    Orey, Steven

    1991-01-01

    Markov chains on a countable state space are studied under the assumption that the transition probabilities $(P_n(x,y))$ constitute a stationary stochastic process. An introductory section exposing some basic results of Nawrotzki and Cogburn is followed by four sections of new results.

  10. Metric on state space of Markov chain

    OpenAIRE

    Rozinas, M. R.

    2010-01-01

    We consider finite irreducible Markov chains. It was shown that the mean hitting time from one state to another satisfies the triangle inequality. Hence, the sum of the mean hitting times between a pair of states, taken in both directions, is a metric on the space of states.
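
    Both claims are easy to check numerically: mean hitting times solve the linear system h = 1 + Q h on the non-target states, and the two-way sums form a metric. The three-state chain below is a made-up example:

```python
import numpy as np

def mean_hitting_times(P, target):
    """E[steps to first hit `target`] from every state: h = 1 + Q h off-target."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    Q = P[np.ix_(idx, idx)]
    h = np.zeros(n)
    h[idx] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return h

# Hypothetical irreducible three-state chain
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
H = np.array([mean_hitting_times(P, t) for t in range(3)]).T  # H[x, t] = E_x[T_t]
D = H + H.T   # the metric: both directions summed
triangle_ok = all(D[x, z] <= D[x, y] + D[y, z] + 1e-9
                  for x in range(3) for y in range(3) for z in range(3))
```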

  11. Document Ranking Based upon Markov Chains.

    Science.gov (United States)

    Danilowicz, Czeslaw; Balinski, Jaroslaw

    2001-01-01

    Considers how the order of documents in information retrieval responses is determined and introduces a method that uses a probabilistic model of a document set, where documents are regarded as states of a Markov chain and transition probabilities are directly proportional to similarities between documents. (Author/LRW)
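
    A sketch of this ranking scheme; the similarity matrix is a hypothetical example, and the stationary distribution is found by plain power iteration:

```python
import numpy as np

def rank_documents(S, n_iter=500):
    """Rank documents by the stationary distribution of the similarity chain."""
    P = S / S.sum(axis=1, keepdims=True)   # transitions proportional to similarity
    pi = np.full(len(S), 1.0 / len(S))
    for _ in range(n_iter):                # power iteration toward pi P = pi
        pi = pi @ P
    return np.argsort(-pi), pi

# Hypothetical pairwise similarities for three documents
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
order, pi = rank_documents(S)
```

    One design note: when S is symmetric the chain is reversible and the stationary weights are exactly proportional to the row sums of S, so the ranking favors documents most similar to the rest of the set.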

  12. Asymptotic properties of quantum Markov chains

    International Nuclear Information System (INIS)

    The asymptotic dynamics of discrete quantum Markov chains generated by the most general physically relevant quantum operations is investigated. It is shown that it is confined to an attractor space in which the resulting quantum Markov chain is diagonalizable. A construction procedure of a basis of this attractor space and its associated dual basis of 1-forms is presented. It is applicable whenever a strictly positive quantum state exists which is contracted or left invariant by the generating quantum operation. Moreover, algebraic relations between the attractor space and Kraus operators involved in the definition of a quantum Markov chain are derived. This construction is not only expected to offer significant computational advantages in cases in which the dimension of the Hilbert space is large and the dimension of the attractor space is small, but it also sheds new light onto the relation between the asymptotic dynamics of discrete quantum Markov chains and fixed points of their generating quantum operations. Finally, we show that without any restriction our construction applies to all initial states whose support belongs to the so-called recurrent subspace. (paper)

  13. Markov Chain Estimation of Avian Seasonal Fecundity

    Science.gov (United States)

    To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...

  14. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given by a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed form. This representation is useful for...

  15. Denumerable Markov decision chains: sensitive optimality criteria

    NARCIS (Netherlands)

    A. Hordijk (Arie); R. Dekker (Rommert)

    1991-01-01

    textabstractIn this paper we investigate denumerable state semi-Markov decision chains with small interest rates. We consider average and Blackwell optimality and allow for multiple closed sets and unbounded immediate rewards. Our analysis uses the existence of a Laurent series expansion for the tot

  16. Local stability in a transient Markov chain

    OpenAIRE

    Adan, Ivo; Foss, Sergey; Shneer, Seva; Weiss, Gideon

    2015-01-01

    We prove two lemmas with conditions that a system, which is described by a transient Markov chain, will display local stability. Examples of such systems include partly overloaded Jackson networks, partly overloaded polling systems, and overloaded multi-server queues with skill based service, under first come first served policy.

  17. One-Dimensional Markov Random Fields, Markov Chains and Topological Markov Fields

    OpenAIRE

    Chandgotia, N; G. Han; Marcus, B; Meyerovitch, T; Pavlov, R

    2014-01-01

    In this paper we show that any one-dimensional stationary, finite-valued Markov Random Field (MRF) is a Markov chain, without any mixing condition or condition on the support. Our proof makes use of two properties of the support $X$ of a finite-valued stationary MRF: 1) $X$ is non-wandering (this is a property of the support of any finite-valued stationary process) and 2) $X$ is a topological Markov field (TMF). The latter is a new property that sits in between the classes of shifts of finite...

  18. Entropy rate of continuous-state hidden Markov chains

    OpenAIRE

    Han, G; Marcus, B

    2010-01-01

    We prove that under mild positivity assumptions, the entropy rate of a continuous-state hidden Markov chain, observed when passing a finite-state Markov chain through a discrete-time continuous-output channel, is analytic as a function of the transition probabilities of the underlying Markov chain. We further prove that the entropy rate of a continuous-state hidden Markov chain, observed when passing a mixing finite-type constrained Markov chain through a discrete-time Gaussian channel, is sm...

  19. Parallel algorithms for simulating continuous time Markov chains

    Science.gov (United States)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
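
    Uniformization itself replaces a CTMC with generator Q by the DTMC U = I + Q/Lambda driven by a rate-Lambda Poisson clock. A serial (non-parallel) sketch of the transient-distribution computation, with a hypothetical two-state generator:

```python
import numpy as np
from math import exp, factorial

def transient_dist(Q, p0, t, K=60):
    """p(t) = sum_k Pois(k; Lambda*t) * p0 U^k, with U = I + Q/Lambda."""
    Lam = max(-np.diag(Q)) * 1.001          # uniformization rate >= all exit rates
    U = np.eye(Q.shape[0]) + Q / Lam        # uniformized DTMC
    p = np.zeros_like(p0, dtype=float)
    term = p0.astype(float)
    for k in range(K):                      # truncated Poisson mixture over U^k
        p += exp(-Lam * t) * (Lam * t) ** k / factorial(k) * term
        term = term @ U
    return p

# Hypothetical two-state generator: rate 1 for 0 -> 1, rate 2 for 1 -> 0
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
p = transient_dist(Q, np.array([1.0, 0.0]), t=5.0)   # close to stationary [2/3, 1/3]
```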

  20. MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2014-05-01

    The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and Genetic Algorithm concepts, as it considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model for liquidity based on a Markov chain. The non-linear model was tested using Genetic Algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that this is an innovative methodology, useful for developing an efficient and realistic financial portfolio, as it considers many attributes such as risk, return and liquidity.

  1. An interlacing theorem for reversible Markov chains

    International Nuclear Information System (INIS)

    Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)

  2. Handbook of Markov chain Monte Carlo

    CERN Document Server

    Brooks, Steve

    2011-01-01

    "Handbook of Markov Chain Monte Carlo" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  3. Numerical methods in Markov chain modeling

    Science.gov (United States)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.

  4. Constrained Risk-Sensitive Markov Decision Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Berlin : Springer, 2009 - (Fleischmann, B.; Borgwardt, K.; Klein, R.; Tuma, A.), s. 363-368 ISBN 978-3-642-00141-3. [Operations Research 2008. Augsburg (DE), 03.09.2008-05.09.2008] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov decision chains * exponential utility functions * constraints Subject RIV: BB - Applied Statistics, Operational Research

  5. The Engel algorithm for absorbing Markov chains

    OpenAIRE

    Snell, J. Laurie

    2009-01-01

    In this module, suitable for use in an introductory probability course, we present Engel's chip-moving algorithm for finding the basic descriptive quantities for an absorbing Markov chain, and prove that it works. The tricky part of the proof involves showing that the initial distribution of chips recurs. At the time of writing (circa 1979) no published proof of this was available, though Engel had stated that such a proof had been found by L. Scheller.
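
    For readers who want the quantities Engel's chip-moving algorithm produces without the chips, the standard fundamental-matrix route gives the same numbers. The 4-state chain below (states 2 and 3 absorbing) is a made-up example:

```python
import numpy as np

# P in canonical block form: transient states {0, 1}, absorbing states {2, 3}
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
Q, R = P[:2, :2], P[:2, 2:]
N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
steps = N @ np.ones(2)             # expected steps to absorption -> [2, 2]
B = N @ R                          # absorption probabilities -> [[2/3, 1/3], [1/3, 2/3]]
```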

  6. Test automation for Markov Chain Usage Models

    OpenAIRE

    Bettinotti, Adriana M.; Garavaglia, Mauricio

    2011-01-01

    Statistical testing with Markov Chain Usage Models is an effective method for programmers and testers to use during web site development to guarantee software reliability. The JUMBL software works on these models; it supports model construction and analysis with the TML language, test generation and execution, and analysis of test results. This paper is targeted at test automation for web site development with JUMBL and JWebUnit.

  7. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...

  8. Monotone measures of ergodicity for Markov chains

    Directory of Open Access Journals (Sweden)

    J. Keilson

    1998-01-01

    The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, which is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity, by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q_1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.

  9. Adaptive Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1996-01-01

    Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding.

  10. On the embedding problem for discrete-time Markov chains

    OpenAIRE

    Guerry, Marie-Anne

    2013-01-01

    When a discrete-time homogenous Markov chain is observed at time intervals that correspond to its time unit, then the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider a situation when a Markov chain is observed on time intervals with length equal to twice the time unit of the Markov chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices. ...

  11. A Bootstrap Algebraic Multilevel method for Markov Chains

    CERN Document Server

    Bolten, M; Brannick, J; Frommer, A; Kahl, K; Livshits, I

    2010-01-01

    This work concerns the development of an Algebraic Multilevel method for computing stationary vectors of Markov chains. We present an efficient Bootstrap Algebraic Multilevel method for this task. In our proposed approach, we employ a multilevel eigensolver, with interpolation built using ideas based on compatible relaxation, algebraic distances, and least squares fitting of test vectors. Our adaptive variational strategy for computation of the state vector of a given Markov chain is then a combination of this multilevel eigensolver and associated multilevel preconditioned GMRES iterations. We show that the Bootstrap AMG eigensolver by itself can efficiently compute accurate approximations to the state vector. An additional benefit of the Bootstrap approach is that it yields an accurate interpolation operator for many other eigenmodes. This in turn allows for the use of the resulting AMG hierarchy to accelerate the MLE steps using standard multigrid correction steps. The proposed approach is applied to a rang...

  12. Growth and dissolution of macromolecular Markov chains

    CERN Document Server

    Gaspard, Pierre

    2016-01-01

    The kinetics and thermodynamics of free living copolymerization are studied for processes with rates depending on k monomeric units of the macromolecular chain behind the unit that is attached or detached. In this case, the sequence of monomeric units in the growing copolymer is a kth-order Markov chain. In the regime of steady growth, the statistical properties of the sequence are determined analytically in terms of the attachment and detachment rates. In this way, the mean growth velocity as well as the thermodynamic entropy production and the sequence disorder can be calculated systematically. These different properties are also investigated in the regime of depolymerization where the macromolecular chain is dissolved by the surrounding solution. In this regime, the entropy production is shown to satisfy Landauer's principle.

  13. Combinatorial Markov chains on linear extensions

    CERN Document Server

    Ayyer, Arvind; Schilling, Anne

    2012-01-01

    We consider generalizations of Schützenberger's promotion operator on the set L of linear extensions of a finite poset of size n. This gives rise to a strongly connected graph on L. By assigning weights to the edges of the graph in two different ways, we study two Markov chains, both of which are irreducible. The stationary state of one gives rise to the uniform distribution, whereas the stationary weights of the other have a nice product formula. This generalizes results by Hendricks on the Tsetlin library, which corresponds to the case when the poset is the antichain and hence L=S_n is the full symmetric group. We also provide explicit eigenvalues of the transition matrix in general when the poset is a rooted forest. This is shown by proving that the associated monoid is R-trivial and then using Steinberg's extension of Brown's theory for Markov chains on left regular bands to R-trivial monoids.

  14. Application of Markov Chains to Stock Trends

    Directory of Open Access Journals (Sweden)

    Kevin J. Doubleday

    2011-01-01

    Problem statement: Modeling of the Dow Jones Industrial Average is frequently attempted in order to determine trading strategies with maximum payoff. Changes in the DJIA are important since movements may affect both individuals and corporations profoundly. Previous work showed that modeling a market as a random walk was valid and that a market may be viewed as having the Markov property. Approach: The aim of this research was to determine the relationship between a diverse portfolio of stocks and the market as a whole. To that end, the DJIA was analyzed using a discrete time stochastic model, namely a Markov chain. Two models were highlighted, where the DJIA was considered as being in a state of (1) gain or loss and (2) small, moderate, or large gain or loss. A portfolio of five stocks was then considered, with two models of the portfolio much the same as those for the DJIA. These models were used to obtain transition probabilities and steady state probabilities. Results: Our results indicated that the portfolio behaved similarly to the entire DJIA, both in the simple model and the partitioned model. Conclusion: When treated as a Markov process, the entire market was useful in gauging how a diverse portfolio of stocks might behave. Future work may include different classifications of states to refine the transition matrices.

  15. Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms

    Science.gov (United States)

    Latuszynski, Krzysztof

    2009-07-01

    In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains and a regenerative proof of a CLT version for uniformly ergodic Markov chains with $E_{\pi} f^2 < \infty$. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators for a general state space setting (not necessarily compact) and not necessarily bounded target function f are obtained. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and law of large numbers for adaptive procedures under path-stability condition for transition kernels.

  16. Hitting time and inverse problems for Markov chains

    OpenAIRE

    de la Peña, Victor; Gzyl, Henryk; McDonald, Patrick

    2008-01-01

    Let W_n be a simple Markov chain on the integers. Suppose that X_n is a simple Markov chain on the integers whose transition probabilities coincide with those of W_n off a finite set. We prove that there is an M > 0 such that the Markov chain W_n and the joint distributions of the first hitting time and first hitting place of X_n started at the origin for the sets {-M, M} and {-(M + 1), (M + 1)} algorithmically determine the transition probabilities of X_n.

  17. Analyticity of entropy rate of hidden Markov chains

    OpenAIRE

    Han, G; Marcus, B

    2006-01-01

    We prove that under mild positivity assumptions the entropy rate of a hidden Markov chain varies analytically as a function of the underlying Markov chain parameters. A general principle to determine the domain of analyticity is stated. An example is given to estimate the radius of convergence for the entropy rate. We then show that the positivity assumptions can be relaxed, and examples are given for the relaxed conditions. We study a special class of hidden Markov chains in more detail: bin...

  18. Bounds on Lifting Continuous Markov Chains to Speed Up Mixing

    OpenAIRE

    Ramanan, Kavita; Smith, Aaron

    2016-01-01

    It is often possible to speed up the mixing of a Markov chain $\{X_t\}_{t \in \mathbb{N}}$ on a state space $\Omega$ by lifting, that is, running a more efficient Markov chain $\{\hat{X}_t\}_{t \in \mathbb{N}}$ on a larger state space $\hat{\Omega} \supset \Omega$ that projects to $\{X_t\}_{t \in \mathbb{N}}$ in a certain sense. In [CLP99], Chen, Lovász and Pak prove that for Markov chains on finite state spaces, the mixing time of any lift of a Markov chain is at lea...

  19. Approximating Markov Chains: What and why

    International Nuclear Information System (INIS)

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics

  20. Asymptotic evolution of quantum Markov chains

    International Nuclear Information System (INIS)

    The iterated quantum operations, so called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium, or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks, and they can describe different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate there is always a vector subspace (typically low-dimensional) of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points or asymptotic evolution of random quantum operations.

  1. CLTs and asymptotic variance of time-sampled Markov chains

    CERN Document Server

    Latuszynski, Krzysztof

    2011-01-01

    For a Markov transition kernel $P$ and a probability distribution $\mu$ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_{\mu} = \sum_k \mu(k) P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare the efficiency of Barker's and Metropolis algorithms in terms of asymptotic variance.
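
    The kernel construction itself is direct to implement; the two-state chain and the finitely supported mu below are illustrative:

```python
import numpy as np

def time_sampled_kernel(P, mu):
    """P_mu = sum_k mu[k] * P^k for a finitely supported mu on {0, 1, 2, ...}."""
    Pmu = np.zeros_like(P)
    Pk = np.eye(P.shape[0])   # P^0
    for w in mu:
        Pmu += w * Pk
        Pk = Pk @ P
    return Pmu

# Hypothetical two-state chain and mu supported on {0, 1, 2}
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
mu = np.array([0.5, 0.3, 0.2])
Pmu = time_sampled_kernel(P, mu)
```

    P_mu is again a stochastic matrix and shares the stationary distribution of P, which is what makes time-sampling a fair transformation when comparing samplers.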

  2. Bayesian analysis of variable-order, reversible Markov chains

    OpenAIRE

    Bacallado, Sergio

    2011-01-01

    We define a conjugate prior for the reversible Markov chain of order $r$. The prior arises from a partially exchangeable reinforced random walk, in the same way that the Beta distribution arises from the exchangeable Pólya urn. An extension to variable-order Markov chains is also derived. We show the utility of this prior in testing the order and estimating the parameters of a reversible Markov model.

  3. Recursive smoothers for hidden discrete-time Markov chains

    Directory of Open Access Journals (Sweden)

    Lakhdar Aggoun

    2005-01-01

    We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameter of the model via the expectation maximization (EM) algorithm.

  4. Series Expansions for Finite-State Markov Chains

    OpenAIRE

    Heidergott, Bernd; Hordijk, Arie; van Uitert, Miranda

    2005-01-01

    This paper provides series expansions of the stationary distribution of a finite Markov chain. This leads to an efficient numerical algorithm for computing the stationary distribution of a finite Markov chain. Numerical examples are given to illustrate the performance of the algorithm.

  5. NONLINEAR EXPECTATIONS AND NONLINEAR MARKOV CHAINS

    Institute of Scientific and Technical Information of China (English)

    PENG SHIGE

    2005-01-01

    This paper deals with nonlinear expectations. The author obtains a nonlinear generalization of the well-known Kolmogorov consistency theorem and then uses it to construct filtration-consistent nonlinear expectations via nonlinear Markov chains. Compared to the author's previous results, i.e., the theory of g-expectations introduced via BSDE on a probability space, the present framework is not based on a given probability measure. Many fully nonlinear and singular situations are covered. The induced topology is a natural generalization of Lp-norms and the L∞-norm in linear situations. The author also obtains the existence and uniqueness result of BSDE under this new framework and develops a nonlinear type of von Neumann-Morgenstern representation theorem for utilities and presents dynamic risk measures.

  6. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  7. Revisiting Weak Simulation for Substochastic Markov Chains

    DEFF Research Database (Denmark)

    Jansen, David N.; Song, Lei; Zhang, Lijun

    2013-01-01

The spectrum of branching-time relations for probabilistic systems has been investigated thoroughly by Baier, Hermanns, Katoen and Wolf (2003, 2005), including weak simulation for systems involving substochastic distributions. Weak simulation was proven to be sound w.r.t. the liveness fragment of the logic PCTL\x, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness. In this paper, we present a novel definition that is sound for live PCTL\x, and a variant that is both sound and complete. A long version of this article containing full proofs is available from [11].

  8. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications: Their Use in Reliability and DNA Analysis

    CERN Document Server

    Barbu, Vlad

    2008-01-01

Semi-Markov processes are much more general and better adapted to applications than Markov processes, because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn times of the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes.

  9. The Use of Markov Chains in Marketing Forecasting

    OpenAIRE

    Codruţa Dura

    2006-01-01

The Markov chain model is frequently used to describe consumers' behavior in relation to their loyalty towards a brand, a manufacturer, a product, or a chain of stores, etc. Most frequently, this model is applied in marketing for dynamic forecasts of market share against a background of intense rivalry between brands. In a Markov chain, the result of a trial depends on the result of the trial that directly precedes it. If we associate the conditional probability pjk (which means t...

  10. Modeling Uncertainty of Directed Movement via Markov Chains

    Directory of Open Access Journals (Sweden)

    YIN Zhangcai

    2015-10-01

Full Text Available Probabilistic time geography (PTG) is suggested as an extension of (classical) time geography, in order to express, as a probability, the uncertainty about an agent being located at an accessible position. This may provide a quantitative basis for finding the most likely location of an agent. In recent years, PTG based on the normal distribution or the Brownian bridge has been proposed; its variance, however, is either unrelated to the agent's speed or diverges as the speed increases, so these models struggle to combine applicability and stability. In this paper, a new method is proposed to model PTG based on Markov chains. First, a bidirectionally conditioned Markov chain is modeled, whose limit, when the moving speed is large enough, can be regarded as the Brownian bridge, and which therefore has the desired stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, the state space and the transition matrix of the Markov chain from the space-time position of the directed movement and the movement-speed information, so that the Markov chain is related to the movement speed. Finally, by computing the probability distribution of the directed movement at any time from the Markov chains, the probability of the agent being located at an accessible position is obtained. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.
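The core construction, mapping a speed-limited directed movement to a Markov chain and propagating its distribution over time, can be sketched as follows; the grid size, step-length probabilities and horizon below are invented for illustration and are not the paper's calibration:

```python
import numpy as np

# An agent moves right along cells 0..N with a maximum speed of 2 cells per
# step. The transition matrix is built from assumed step-length probabilities;
# propagating the initial distribution gives P(agent at cell) at any time.
N = 20
step_probs = {0: 0.2, 1: 0.5, 2: 0.3}   # P(advance by k cells), assumed

P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for k, p in step_probs.items():
        P[i, min(i + k, N)] += p        # movement stops at the target cell N

dist = np.zeros(N + 1)
dist[0] = 1.0                           # agent starts at cell 0
for _ in range(10):                     # distribution after 10 time steps
    dist = dist @ P
```

Because the step distribution is tied to the speed limit, the spread of `dist` stays bounded as the horizon grows, which is the stability property the abstract emphasises.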

  11. Remarks on a monotone Markov chain

    Directory of Open Access Journals (Sweden)

    P. Todorovic

    1987-01-01

Full Text Available In applications, considerations on stochastic models often involve a Markov chain {ζn}0∞ with state space in R+, and a transition probability Q. For each x ∈ R+ the support of Q(x,·) is [0,x]. This implies that ζ0 ≥ ζ1 ≥ …. Under certain regularity assumptions on Q we show that Q^n(x,B_u) → 1 as n → ∞ for all u > 0, and that 1 − Q^n(x,B_u) ≤ [1 − Q(x,B_u)]^n, where B_u = [0,u). Set τ0 = max{k; ζk = ζ0}, τn = max{k; ζk = ζ_{τ_{n−1}+1}} and write X_n = ζ_{τ_{n−1}+1}, T_n = τ_n − τ_{n−1}. We investigate some properties of the imbedded Markov chain {X_n}0∞ and of {T_n}0∞. We determine all the marginal distributions of {T_n}0∞ and show that it is asymptotically stationary and that it possesses a monotonicity property. We also prove that, under some mild regularity assumptions on β(x) = 1 − Q(x,B_x), ∑_1^n (T_i − a)/b_n →_d Z ∼ N(0,1).

  12. Determining a Class of Markov Chains by Hitting Time

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

1 Introduction In many practical problems we often cannot observe the behavior of all states of a Markov chain (see [3-5]). A natural question is whether, from the observable data of a part of the states, one can still obtain all statistical characteristics of the Markov chain. In this paper we give a positive answer to this question and prove the surprising result that the transition rate matrix of birth-death chains with reflecting barriers, and of Markov chains on a star graph, can be uniquely determined by the probability density functions (pdfs) of the sojourn times and the hitting times at a single special state. This result also suggests a new special type of statistics for Markov chains.

  13. Logics and Models for Stochastic Analysis Beyond Markov Chains

    DEFF Research Database (Denmark)

    Zeng, Kebin

    , because of the generality of ME distributions, we have to leave the world of Markov chains. To support ME distributions with multiple exits, we introduce a multi-exits ME distribution together with a process algebra MEME to express the systems having the semantics as Markov renewal processes with ME...

  14. The Laplace Functional and Moments for Markov Branching Chains in Random Environments

    Institute of Scientific and Technical Information of China (English)

    HU Di-he; ZHANG Shu-lin

    2005-01-01

    The concepts of random Markov matrix, Markov branching chain in random environment (MBCRE) and Laplace functional of Markov branching chain in random environment (LFMBCRE) are introduced. The properties of LFMBCRE and the explicit formulas of moments of MBCRE are given.

  15. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    Rajeeva L Karandikar

    2006-04-01

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
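The article's own examples are not reproduced here; a generic random-walk Metropolis sketch shows the mechanism it introduces, with the target density (an unnormalised standard normal) and the step size being illustrative choices:

```python
import math
import random

def target(x):
    # Unnormalised N(0, 1) density: MCMC only needs ratios, so the
    # normalising constant can be dropped.
    return math.exp(-0.5 * x * x)

random.seed(0)
x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + random.uniform(-1.0, 1.0)        # symmetric proposal
    if random.random() < target(proposal) / target(x):
        x = proposal                                # accept the move
    samples.append(x)                               # else keep current state

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's empirical mean and variance approach 0 and 1, the moments of the target, even though the density was specified only up to a constant.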

  16. P Systems Computing the Period of Irreducible Markov Chains

    OpenAIRE

    Cardona Roca, Mónica; Colomer Cugat, M. Angeles; Riscos Núñez, Agustín; Rius Font, Miquel

    2009-01-01

    It is well known that any irreducible and aperiodic Markov chain has exactly one stationary distribution, and for any arbitrary initial distribution, the sequence of distributions at time n converges to the stationary distribution, that is, the Markov chain is approaching equilibrium as n→∞. In this paper, a characterization of the aperiodicity in existential terms of some state is given. At the same time, a P system with external output is associated with any irreducible Ma...
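Independently of the P-systems construction of the paper, the period of an irreducible chain can be computed directly from its transition graph: it is the gcd, over all edges u→v, of depth[u] + 1 − depth[v] for BFS depths from any fixed state. A small sketch (adjacency lists are assumed inputs):

```python
from collections import deque
from math import gcd

def period(adj):
    # BFS from state 0; each non-tree edge u->v closes a walk whose length
    # difference depth[u] + 1 - depth[v] constrains the period.
    depth = {0: 0}
    queue, g = deque([0]), 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
            else:
                g = gcd(g, abs(depth[u] + 1 - depth[v]))
    return g

cycle3 = {0: [1], 1: [2], 2: [0]}        # deterministic 3-cycle: period 3
aperiodic = {0: [1], 1: [2], 2: [0, 1]}  # the extra edge 2->1 forces period 1
```

This assumes the graph is strongly connected (i.e. the chain is irreducible), which matches the setting of the abstract.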

  17. On almost-periodic points of a topological Markov chain

    International Nuclear Information System (INIS)

We prove that a transitive topological Markov chain has almost-periodic points of all D-periods. Moreover, every D-period is realized by a continuum of distinct minimal sets. We give a simple constructive proof of the result which asserts that any transitive topological Markov chain has periodic points of almost all periods, and study the structure of the finite set of positive integers that are not periods.

  18. CONVERGENCE OF MARKOV CHAIN APPROXIMATIONS TO STOCHASTIC REACTION DIFFUSION EQUATIONS

    OpenAIRE

    Kouritzin, Michael A.; Hongwei Long

    2001-01-01

    In the context of simulating the transport of a chemical or bacterial contaminant through a moving sheet of water, we extend a well-established method of approximating reaction-diffusion equations with Markov chains by allowing convection, certain Poisson measure driving sources and a larger class of reaction functions. Our alterations also feature dramatically slower Markov chain state change rates often yielding a ten to one-hundred-fold simulation speed increase over the previous version o...

  19. Markov Chains as Tools for Jazz Improvisation Analysis

    OpenAIRE

    Franz, David Matthew

    1998-01-01

    This thesis describes an exploratory application of a statistical analysis and modeling technique (Markov chains) for the modeling of jazz improvisation with the intended subobjective of providing increased insight into an improviser's style and creativity through the postulation of quantitative measures of style and creativity based on the constructed Markovian analysis techniques. Using Visual Basic programming language, Markov chains of orders one to three are created using transcriptio...

  20. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    International Nuclear Information System (INIS)

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  1. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    Energy Technology Data Exchange (ETDEWEB)

    Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)

    2008-07-18

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  2. Switching Markov chains for a holistic modeling of SIS unavailability

    International Nuclear Information System (INIS)

    This paper proposes a holistic approach to model the Safety Instrumented Systems (SIS). The model is based on Switching Markov Chain and integrates several parameters like Common Cause Failure, Imperfect Proof testing, partial proof testing, etc. The basic concepts of Switching Markov Chain applied to reliability analysis are introduced and a model to compute the unavailability for a case study is presented. The proposed Switching Markov Chain allows us to assess the effect of each parameter on the SIS performance. The proposed method ensures the relevance of the results. - Highlights: • A holistic approach to model the unavailability safety systems using Switching Markov chains. • The model integrates several parameters like probability of failure due to the test, the probability of not detecting a failure in a test. • The basic concepts of the Switching Markov Chains are introduced and applied to compute the unavailability for safety systems. • The proposed Switching Markov Chain allows assessing the effect of each parameter on the chemical reactor performance

  3. ON MARKOV CHAINS IN SPACE-TIME RANDOM ENVIRONMENTS

    Institute of Scientific and Technical Information of China (English)

    Hu Dihe; Hu Xiaoyu

    2009-01-01

In Section 1, the authors establish the models of two kinds of Markov chains in space-time random environments (MCSTRE and MCSTRE(+)) with abstract state space. In Section 2, the authors construct a MCSTRE and a MCSTRE(+) from an initial distribution Φ and a random Markov kernel (RMK) p(γ). In Section 3, the authors establish several equivalence theorems on MCSTRE and MCSTRE(+). Finally, the authors give two very important examples of MCSTRE: the random walk in a space-time random environment and the Markov branching chain in a space-time random environment.

  4. Comprehensive cosmographic analysis by Markov chain method

    International Nuclear Information System (INIS)

    We study the possibility of extracting model independent information about the dynamics of the Universe by using cosmography. We intend to explore it systematically, to learn about its limitations and its real possibilities. Here we are sticking to the series expansion approach on which cosmography is based. We apply it to different data sets: Supernovae type Ia (SNeIa), Hubble parameter extracted from differential galaxy ages, gamma ray bursts, and the baryon acoustic oscillations data. We go beyond past results in the literature extending the series expansion up to the fourth order in the scale factor, which implies the analysis of the deceleration q0, the jerk j0, and the snap s0. We use the Markov chain Monte Carlo method (MCMC) to analyze the data statistically. We also try to relate direct results from cosmography to dark energy (DE) dynamical models parametrized by the Chevallier-Polarski-Linder model, extracting clues about the matter content and the dark energy parameters. The main results are: (a) even if relying on a mathematical approximate assumption such as the scale factor series expansion in terms of time, cosmography can be extremely useful in assessing dynamical properties of the Universe; (b) the deceleration parameter clearly confirms the present acceleration phase; (c) the MCMC method can help giving narrower constraints in parameter estimation, in particular for higher order cosmographic parameters (the jerk and the snap), with respect to the literature; and (d) both the estimation of the jerk and the DE parameters reflect the possibility of a deviation from the ΛCDM cosmological model.

  5. Unsupervised Segmentation of Hidden Semi-Markov Non Stationary Chains

    Science.gov (United States)

    Lapuyade-Lahorgue, Jérôme; Pieczynski, Wojciech

    2006-11-01

In the classical hidden Markov chain (HMC) model we have a hidden chain X, which is Markovian, and an observed chain Y. HMC are widely used; however, in some situations they have to be replaced by the more general "hidden semi-Markov chains" (HSMC), which are particular "triplet Markov chains" (TMC) T = (X, U, Y), where the auxiliary chain U models the semi-Markovianity of X. On the other hand, non-stationary classical HMC can also be modeled by a stationary triplet Markov chain, with, as a consequence, the possibility of parameter estimation. The aim of this paper is to use both properties simultaneously. We consider a non-stationary HSMC and model it as a TMC T = (X, U1, U2, Y), where U1 models the semi-Markovianity and U2 models the non-stationarity. The TMC T being itself stationary, all parameters can be estimated by the general "Iterative Conditional Estimation" (ICE) method, which leads to unsupervised segmentation. We present some experiments showing the interest of the new model and related processing in the image segmentation area.

  6. Automated generation of partial Markov chain from high level descriptions

    International Nuclear Information System (INIS)

We propose an algorithm to generate partial Markov chains from high level implicit descriptions, namely AltaRica models. This algorithm relies on two components. First, a variation on Dijkstra's algorithm to compute shortest paths in a graph. Second, the definition of a notion of distance to select which states must be kept and which can be safely discarded. The proposed method solves two problems at once. First, it avoids a manual construction of Markov chains, which is both tedious and error prone. Second, at the price of acceptable approximations, it makes it possible to push back dramatically the exponential blow-up of the size of the resulting chains. We report experimental results that show the efficiency of the proposed approach. - Highlights: • We generate Markov chains from a higher level safety modeling language (AltaRica). • We use a variation on Dijkstra's algorithm to generate partial Markov chains. • Hence we solve two problems: the first problem is the tedious manual construction of Markov chains. • The second problem is the blow-up of the size of the chains, at the cost of decent approximations. • The experimental results highlight the efficiency of the method

  7. RS-markov Chain Model of Logistics Service Supply Chain based on Exploration Diagram

    OpenAIRE

    Shi Li; Shi Yu-Zhen

    2013-01-01

In order to achieve forecast evaluation of the logistics service supply chain, the exploration diagram, a system-thinking tool from complex scientific management, is used to establish the index system for the forecast evaluation of the logistics service supply chain. And, according to the significant Markov property in the operation of the logistics service supply chain, the predictability of the Markov chain is used to put forward a dynamic evaluation model, example ...

  8. Infinitely dimensional control Markov branching chains in random environments

    Institute of Scientific and Technical Information of China (English)

    HU; Dihe

    2006-01-01

First of all we introduce the concepts of infinitely dimensional control Markov branching chains in random environments (β-MBCRE) and prove the existence of such chains, then we introduce the concepts of conditional generating functionals and random Markov transition functions of such chains and investigate their branching property. Based on these concepts we calculate the moments of the β-MBCRE and obtain the main results of this paper, such as extinction probabilities, polarization and proliferation rate. Finally we discuss the classification of β-MBCRE according to different standards.

  9. Invariance principle for additive functionals of Markov chains

    OpenAIRE

    Kartashov, Yuri N.; Kulik, Alexey M.

    2007-01-01

We consider a sequence of additive functionals {\phi_n}, set on a sequence of Markov chains {X_n} that weakly converges to a Markov process X. We give a sufficient condition for such a sequence to converge in distribution, formulated in terms of the characteristics of the additive functionals and related to Dynkin's theorem on the convergence of W-functionals. As an application of the main theorem, the general sufficient condition for convergence of additive functionals in terms of transit...

  10. The Dynamics of Repeat Migration: A Markov Chain Analysis

    OpenAIRE

    Zimmermann, Klaus F.; Amelie F. Constant

    2003-01-01

    While the literature has established that there is substantial and highly selective return migration, the growing importance of repeat migration has been largely ignored. Using Markov chain analysis, this paper provides a modeling framework for repeated moves of migrants between the host and home countries. The Markov transition matrix between the states in two consecutive periods is parameterized and estimated using a logit specification and a large panel data with 14 waves. The analysis for...

  11. Mean variance optimality in Markov decision chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel; Sitař, Milan

    Hradec Králové : Gadeamus, 2005 - (Skalská, H.), s. 350-357 ISBN 978-80-7041-535-1. [Mathematical Methods in Economics 2005 /23./. Hradec Králové (CZ), 14.09.2005-16.09.2005] R&D Projects: GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov reward processes * expectation and variance of cumulative rewards Subject RIV: BB - Applied Statistics, Operational Research

  12. Markov Chains For Testing Redundant Software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  13. On a Markov chain roulette-type game

    International Nuclear Information System (INIS)

A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p_{01} = ρ, p_{Nj} = δ_{Nj}, p_{i,i+W} = q, p_{i,i−1} = p = 1 − q, with 1 ≤ W < N, 0 ≤ ρ ≤ 1, N − W < j ≤ N and i = 1, 2, ..., N − W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining
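The closed-form determinant solution of the paper is not reproduced here, but the chain itself is easy to set up numerically. The sketch below uses invented parameter values, and treats every state above N − W as absorbing, which is an interpretive assumption; it then computes expected absorption times by solving the standard linear system for the transient block:

```python
import numpy as np

# States 0..N: from 0 the chain enters 1 w.p. rho (else stays), from
# transient i it moves up W cells w.p. q or down 1 cell w.p. p = 1 - q.
N, W, q, rho = 10, 3, 0.45, 0.8
p = 1 - q

P = np.zeros((N + 1, N + 1))
P[0, 1], P[0, 0] = rho, 1 - rho
for i in range(1, N - W + 1):
    P[i, i + W] += q
    P[i, i - 1] += p
for j in range(N - W + 1, N + 1):
    P[j, j] = 1.0                       # assumed absorbing ("win") states

# Expected steps to absorption from each transient state: solve
# (I - T) t = 1, where T is the transient-to-transient block.
transient = list(range(0, N - W + 1))
T = P[np.ix_(transient, transient)]
t = np.linalg.solve(np.eye(len(transient)) - T, np.ones(len(transient)))
```

State 0 waits a geometric time before even entering the game, so its expected absorption time exceeds that of states near the absorbing boundary.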

  14. Target Density Normalization for Markov Chain Monte Carlo Algorithms

    CERN Document Server

    Caldwell, Allen

    2014-01-01

    Techniques for evaluating the normalization integral of the target density for Markov Chain Monte Carlo algorithms are described and tested numerically. It is assumed that the Markov Chain algorithm has converged to the target distribution and produced a set of samples from the density. These are used to evaluate sample mean, harmonic mean and Laplace algorithms for the calculation of the integral of the target density. A clear preference for the sample mean algorithm applied to a reduced support region is found, and guidelines are given for implementation.
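The reduced-support idea can be illustrated in one dimension. For samples x_i drawn from the normalised target f/Z, the identity E[1_A(x)/f(x)] = |A|/Z lets Z be estimated from a region A where f is well sampled. The target, region and sample mechanism below are assumptions for the demonstration (exact Gaussian draws stand in for MCMC output):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(-0.5 * x * x)   # unnormalised N(0,1); true Z = sqrt(2*pi)

x = rng.normal(size=200_000)         # samples from the normalised target f/Z
in_A = np.abs(x) <= 1.0              # reduced support region A = [-1, 1]
vol_A = 2.0                          # |A|
Z_hat = vol_A / np.mean(in_A / f(x)) # |A| / E[1_A(x) / f(x)] estimates Z
```

Restricting to A keeps 1/f bounded, avoiding the infinite-variance failure mode of the naive harmonic-mean estimator over the whole support, which is the preference the abstract reports.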

  15. Limit Theorems for the Sample Entropy of Hidden Markov Chains

    CERN Document Server

    Han, Guangyue

    2011-01-01

The Shannon-McMillan-Breiman theorem asserts that the sample entropy of a stationary and ergodic stochastic process converges to the entropy rate of the same process almost surely. In this paper, we focus our attention on the convergence behavior of the sample entropy of a hidden Markov chain. Under a certain positivity assumption, we prove a central limit theorem (CLT) with a Berry-Esseen bound for the sample entropy of a hidden Markov chain, and we use this CLT to establish a law of the iterated logarithm (LIL) for the sample entropy.

  16. Dynamic modeling of presence of occupants using inhomogeneous Markov chains

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff; Iversen, Anne; Madsen, Henrik;

    2014-01-01

time of day, and by use of a filter of the observations it is able to capture per-employee sequence dynamics. Simulations using this method are compared with simulations using homogeneous Markov chains and show a far better ability to reproduce key properties of the data. The method is based on...... inhomogeneous Markov chains where the transition probabilities are estimated using generalized linear models with polynomials, B-splines, and a filter of past observations as inputs. For treating the dispersion of the data series, a hierarchical model structure is used where one model is for low presence...

  17. Harmonic Oscillator Model for Radin's Markov-Chain Experiments

    International Nuclear Information System (INIS)

    The conscious observer stands as a central figure in the measurement problem of quantum mechanics. Recent experiments by Radin involving linear Markov chains driven by random number generators illuminate the role and temporal dynamics of observers interacting with quantum mechanically labile systems. In this paper a Lagrangian interpretation of these experiments indicates that the evolution of Markov chain probabilities can be modeled as damped harmonic oscillators. The results are best interpreted in terms of symmetric equicausal determinism rather than strict retrocausation, as posited by Radin. Based on the present analysis, suggestions are made for more advanced experiments

  18. On a Markov chain roulette-type game

    Energy Technology Data Exchange (ETDEWEB)

    El-Shehawey, M A; El-Shreef, Gh A [Department of Mathematics, Damietta Faculty of Science, PO Box 6, New Damietta (Egypt)

    2009-05-15

A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p_{01} = ρ, p_{Nj} = δ_{Nj}, p_{i,i+W} = q, p_{i,i−1} = p = 1 − q, with 1 ≤ W < N, 0 ≤ ρ ≤ 1, N − W < j ≤ N and i = 1, 2, ..., N − W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining.

  19. Influence of credit scoring on the dynamics of Markov chain

    Science.gov (United States)

    Galina, Timofeeva

    2015-11-01

Markov processes are widely used to model the dynamics of a credit portfolio and to forecast portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares be described by a multistage controlled system. The article outlines a mathematical formalization of the controls, which reflect the actions of the bank's management aimed at improving loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control acting on the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
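The uncontrolled part of such a model is just the evolution of portfolio shares under a quality-group transition matrix, x_{t+1} = x_t P. The groups and numbers below are invented for illustration, not the article's calibration:

```python
import numpy as np

# Assumed quality groups and monthly migration matrix; "default" absorbing.
groups = ["current", "30+ days", "90+ days", "default"]
P = np.array([[0.92, 0.06, 0.01, 0.01],
              [0.40, 0.40, 0.15, 0.05],
              [0.10, 0.20, 0.40, 0.30],
              [0.00, 0.00, 0.00, 1.00]])

x = np.array([1.0, 0.0, 0.0, 0.0])   # portfolio starts fully current
for _ in range(12):                  # shares twelve periods ahead
    x = x @ P
default_share = x[3]
```

A scoring control in the article's sense would act on the first row, changing which applications enter the "current" group and hence the migration probabilities out of it.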

  20. Ergodic degrees for continuous-time Markov chains

    Institute of Scientific and Technical Information of China (English)

    MAO; Yonghua

    2004-01-01

This paper studies the existence of higher-order deviation matrices for continuous-time Markov chains via moments of the hitting times. An estimate of the polynomial convergence rate of the transition matrix to the stationary measure is obtained. Finally, explicit formulas for birth-death processes are presented.

  1. On a Markov chain roulette-type game

    Science.gov (United States)

    El-Shehawey, M. A.; El-Shreef, Gh A.

    2009-05-01

    A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p01 = ρ, pNj = δNj, pi,i+W = q, pi,i-1 = p = 1 - q, 1 game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining.

  2. Markov chains with quasitoeplitz transition matrix: first zero hitting

    Directory of Open Access Journals (Sweden)

    Alexander M. Dukhovny

    1989-01-01

    Full Text Available This paper continues the investigation of Markov Chains with a quasitoeplitz transition matrix. Generating functions of first zero hitting probabilities and mean times are found by the solution of special Riemann boundary value problems on the unit circle. Duality is discussed.

  3. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

Systems for future missions will be selected with life cycle cost (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support phase (OS). Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early, in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
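The basic mechanics of such a model, attaching a cost to each operations state and accumulating expected cost through the transition matrix, can be sketched as follows; the states, probabilities and cost figures are invented for illustration and are not the HSTV model:

```python
import numpy as np

# Assumed per-cycle states of a reusable vehicle and the probabilities of
# moving between them after each flight cycle.
states = ["turnaround", "minor repair", "major repair"]
P = np.array([[0.80, 0.15, 0.05],
              [0.70, 0.25, 0.05],
              [0.50, 0.20, 0.30]])
cost = np.array([1.0, 4.0, 20.0])    # assumed cost units per cycle per state

x = np.array([1.0, 0.0, 0.0])        # vehicle starts in routine turnaround
expected_cost = 0.0
for _ in range(100):                 # accumulate over 100 flight cycles
    expected_cost += x @ cost        # expected cost of this cycle
    x = x @ P                        # state distribution for the next cycle
```

Associating costs with state probabilities in this way is what lets the model reflect actual rather than ideal processes, as the abstract argues.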

  4. Converging from Branching to Linear Metrics on Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand; Mardare, Radu Iulian

We study the strong and stutter trace distances on Markov chains (MCs). Our interest in these metrics is motivated by their relation to the probabilistic LTL model-checking problem: we prove that they correspond to the maximal differences in the probability of satisfying the same LTL and LTL

  5. Adiabatic condition and the quantum hitting time of Markov chains

    International Nuclear Information System (INIS)

    We present an adiabatic quantum algorithm for the abstract problem of searching marked vertices in a graph, or spatial search. Given a random walk (or Markov chain) P on a graph with a set of unknown marked vertices, one can define a related absorbing walk P' where outgoing transitions from marked vertices are replaced by self-loops. We build a Hamiltonian H(s) from the interpolated Markov chain P(s)=(1-s)P+sP' and use it in an adiabatic quantum algorithm to drive an initial superposition over all vertices to a superposition over marked vertices. The adiabatic condition implies that, for any reversible Markov chain and any set of marked vertices, the running time of the adiabatic algorithm is given by the square root of the classical hitting time. This algorithm therefore demonstrates a novel connection between the adiabatic condition and the classical notion of hitting time of a random walk. It also significantly extends the scope of previous quantum algorithms for this problem, which could only obtain a full quadratic speedup for state-transitive reversible Markov chains with a unique marked vertex.

  6. Exploring Mass Perception with Markov Chain Monte Carlo

    Science.gov (United States)

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  7. Using Markov Chain Analyses in Counselor Education Research

    Science.gov (United States)

    Duys, David K.; Headrick, Todd C.

    2004-01-01

    This study examined the efficacy of an infrequently used statistical analysis in counselor education research. A Markov chain analysis was used to examine hypothesized differences between students' use of counseling skills in an introductory course. Thirty graduate students participated in the study. Independent raters identified the microskills…

  8. Building Higher-Order Markov Chain Models with EXCEL

    Science.gov (United States)

    Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.

    2004-01-01

    Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
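
    As a rough illustration of higher-order modelling, a second-order chain can be fitted by plain frequency counting. Note that the model in the note itself parametrizes the chain as a weighted combination of lag transition matrices solved as an optimization problem; the counting below is only the simplest variant, on a made-up sequence.

```python
from collections import Counter, defaultdict

# Fit a second-order Markov chain to a categorical sequence by counting
# transitions (x_{t-2}, x_{t-1}) -> x_t; the sequence is synthetic.
seq = list("ABABBAABABBBABAB")
counts = defaultdict(Counter)
for a, b, c in zip(seq, seq[1:], seq[2:]):
    counts[(a, b)][c] += 1

# Normalize each context's counts into conditional probabilities.
probs = {ctx: {sym: k / sum(ctr.values()) for sym, k in ctr.items()}
         for ctx, ctr in counts.items()}
print(probs[("A", "B")])
```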

  9. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.)
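
    A two-state sketch of such a continuous-time model, with invented failure and repair rates rather than fitted plant data: the stationary distribution of the generator gives the long-run availability.

```python
import numpy as np

lam, mu = 0.01, 0.5                 # failure/repair rates per hour (invented)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])        # CTMC generator: state 0 = up, 1 = down

# Solve pi Q = 0 subject to sum(pi) = 1 via least squares on the stacked system.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0]                # long-run fraction of time operating
print(availability)
```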

  10. On the Total Variation Distance of Semi-Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand; Mardare, Radu Iulian

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...

  11. A Parallel Solver for Large-Scale Markov Chains

    Czech Academy of Sciences Publication Activity Database

    Benzi, M.; Tůma, Miroslav

    2002-01-01

    Roč. 41, - (2002), s. 135-153. ISSN 0168-9274 R&D Projects: GA AV ČR IAA2030801; GA ČR GA101/00/1035 Keywords : parallel preconditioning * iterative methods * discrete Markov chains * generalized inverses * singular matrices * graph partitioning * AINV * Bi-CGSTAB Subject RIV: BA - General Mathematics Impact factor: 0.504, year: 2002

  12. Students' Progress throughout Examination Process as a Markov Chain

    Science.gov (United States)

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical Methods in Economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…

  13. Bayesian internal dosimetry calculations using Markov Chain Monte Carlo

    International Nuclear Information System (INIS)

    A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time. (author)
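
    A stripped-down version of the approach, with a single unknown intake amount, an assumed exponential retention function, and invented bioassay numbers (the method described above additionally integrates over intake times and biokinetic types):

```python
import math, random

random.seed(1)
# Toy bioassay model: measured activity = intake * retention(t) + noise.
times = [1.0, 5.0, 10.0]
retention = lambda t: math.exp(-0.1 * t)       # assumed biokinetic model
data = [9.2, 6.3, 3.5]                          # invented measurements
sigma = 0.5                                     # measurement noise s.d.

def log_post(intake):                           # flat prior on intake > 0
    if intake <= 0:
        return -math.inf
    return -sum((d - intake * retention(t)) ** 2
                for d, t in zip(data, times)) / (2 * sigma ** 2)

x, samples = 10.0, []
for _ in range(20000):                          # Metropolis random walk
    prop = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)
posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(posterior_mean)
```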

  14. Algebraic convergence for discrete-time ergodic Markov chains

    Institute of Scientific and Technical Information of China (English)

    MAO; Yonghua(毛永华)

    2003-01-01

    This paper studies the ℓ-ergodicity for discrete-time recurrent Markov chains. It proves that the ℓ-order deviation matrix exists and is finite if and only if the chain is (ℓ + 2)-ergodic, and then the algebraic decay rates of the n-step transition probability to the stationary distribution are obtained. The criteria for ℓ-ergodicity are given in terms of the existence of a solution to an equation. The main results are illustrated by some examples.

  15. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843. ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  16. Markov chain analysis of single spin flip Ising simulations

    International Nuclear Information System (INIS)

    The Markov processes defined by random and loop-based schemes for single spin flip attempts in Monte Carlo simulations of the 2D Ising model are investigated, by explicitly constructing their transition matrices. Their analysis reveals that loops over all lattice sites using a Metropolis-type single spin flip probability often do not define ergodic Markov chains, and have distorted dynamical properties even if they are ergodic. The transition matrices also enable a comparison of the dynamics of random versus loop spin selection and Glauber versus Metropolis probabilities

  17. Markov Chain for Reuse Strategies of Product Families

    Institute of Scientific and Technical Information of China (English)

    LUO Jia; JIANG Lan

    2007-01-01

    A methodology is presented to plan reuse strategies of common modules in a product family by using the concepts of function degradation, reliability, function requirement, cost and life time. A Markov chain model is employed to predict function degradation and reliability. A utility model is used to evaluate the preference between used modules and new modules. An example of a cascading-requirement product family illustrates the main ideas of our work. The Markov models are used effectively to predict function degradation and reliability. Utility theory is helpful to evaluate the reuse options of common modules.

  18. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...

  19. Regularity of harmonic functions for some Markov chains with unbounded range

    OpenAIRE

    Xu, Fangjun

    2012-01-01

    We consider a class of continuous time Markov chains on $\mathbb{Z}^d$. These chains are the discrete space analogue of Markov processes with jumps. Under some conditions, we show that harmonic functions associated with these Markov chains are Hölder continuous.

  20. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
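
    The EM baseline that the comparison starts from can be sketched for a two-member ensemble with synthetic forecasts; the weight and shared-variance updates below follow the standard Gaussian-mixture EM pattern and are our illustration, not the paper's hydrological setup.

```python
import math, random

random.seed(2)
# Synthetic verification data and two forecast models of unequal skill.
truth = [random.gauss(0, 1) for _ in range(500)]
f1 = [t + random.gauss(0.0, 0.4) for t in truth]   # model 1 (better)
f2 = [t + random.gauss(0.0, 1.2) for t in truth]   # model 2 (worse)

w, var = [0.5, 0.5], 1.0
for _ in range(100):                                # EM iterations
    # E-step: responsibility of model 1 for each observation
    z = []
    for y, a, b in zip(truth, f1, f2):
        p1 = w[0] * math.exp(-(y - a) ** 2 / (2 * var))
        p2 = w[1] * math.exp(-(y - b) ** 2 / (2 * var))
        z.append(p1 / (p1 + p2))
    # M-step: update BMA weights and the shared predictive variance
    w = [sum(z) / len(z), 1 - sum(z) / len(z)]
    var = sum(zi * (y - a) ** 2 + (1 - zi) * (y - b) ** 2
              for zi, y, a, b in zip(z, truth, f1, f2)) / len(truth)
print(w)                                            # weight should favor model 1
```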

  1. Rapid mixing and Markov bases

    OpenAIRE

    Windisch, Tobias

    2015-01-01

    The mixing behaviour of Markov chains on lattice points of polytopes using Markov bases is examined. It is shown that, in fixed dimension, these Markov chains do not mix rapidly. As a way out, a method of how to adapt Markov bases in order to achieve the fastest mixing behaviour is introduced.

  2. Optimization of Markov chains for a SUSY fitter: Fittino

    International Nuclear Information System (INIS)

    A Markov chain is a ''random walk'' algorithm which allows an efficient scan of a given profile and the search for the absolute minimum, even when this profile suffers from the presence of many secondary minima. This property makes Markov chains particularly suited to the study of Supersymmetry (SUSY) models, where minima have to be found in an up-to-18-dimensional space for the general MSSM. Hence the SUSY fitter ''Fittino'' uses a Metropolis-Hastings Markov chain in a frequentist interpretation to study the impact of current low-energy measurements, as well as expected measurements from LHC and ILC, on the SUSY parameter space. The expected properties of an optimal Markov chain should be the independence of final results with respect to the starting point and a fast convergence. These two goals can be achieved by optimizing the width of the proposal distribution, that is, the ''average step length'' between two links in the chain. We developed an algorithm for the optimization of the proposal width, by iteratively modifying the width so that the rejection rate stays around fifty percent. This optimization leads to a starting-point-independent chain as well as faster convergence.
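
    The width-adaptation idea can be sketched as follows: run the chain in blocks and rescale the proposal width according to the deviation of the observed acceptance rate from one half. The toy standard-normal target and the particular update rule are our illustration, not Fittino's implementation.

```python
import math, random

random.seed(0)
log_target = lambda x: -0.5 * x * x  # toy 1-D standard-normal target

sigma, x = 10.0, 0.0                 # deliberately bad starting width
for block in range(50):
    accepted = 0
    for _ in range(200):             # Metropolis updates with current width
        prop = x + random.gauss(0.0, sigma)
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x, accepted = prop, accepted + 1
    rate = accepted / 200
    sigma *= math.exp(rate - 0.5)    # grow if accepting too often, else shrink
print(sigma)                         # settles near the unit scale of the target
```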

  3. Markov Chain Order estimation with Conditional Mutual Information

    CERN Document Server

    Papapetrou, Maria; 10.1016/j.physa.2012.12.017.

    2013-01-01

    We introduce the Conditional Mutual Information (CMI) for the estimation of the Markov chain order. For a Markov chain of $K$ symbols, we define CMI of order $m$, $I_c(m)$, as the mutual information of two variables in the chain being $m$ time steps apart, conditioning on the intermediate variables of the chain. We find approximate analytic significance limits based on the estimation bias of CMI and develop a randomization significance test of $I_c(m)$, where the randomized symbol sequences are formed by random permutation of the components of the original symbol sequence. The significance test is applied for increasing $m$ and the Markov chain order is estimated by the last order for which the null hypothesis is rejected. We demonstrate the appropriateness of CMI testing on Monte Carlo simulations and compare it to the Akaike and Bayesian information criteria, the maximal fluctuation method (Peres-Shields estimator), and a likelihood ratio test for increasing orders using $\phi$-divergence. The order criterion of...
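
    A plug-in estimate of $I_c(m)$ can be sketched for a two-symbol chain; the paper's bias correction and significance test are omitted, and the chain below is a synthetic first-order example, for which $I_c(1)$ is clearly positive while $I_c(2)$ is near zero.

```python
import math, random
from collections import Counter

random.seed(3)
# Binary first-order chain with P(repeat) = 0.8; synthetic data for the sketch.
seq = ["A"]
for _ in range(20000):
    seq.append(seq[-1] if random.random() < 0.8 else ("B" if seq[-1] == "A" else "A"))

def cmi(seq, m):
    """Plug-in I_c(m) = I(x_t ; x_{t+m} | x_{t+1}, ..., x_{t+m-1}) in nats."""
    joint = Counter(tuple(seq[i:i + m + 1]) for i in range(len(seq) - m))
    n = sum(joint.values())
    left, right, mid = Counter(), Counter(), Counter()
    for k, c in joint.items():
        left[k[:-1]] += c            # marginal of (x_t, middle block)
        right[k[1:]] += c            # marginal of (middle block, x_{t+m})
        mid[k[1:-1]] += c            # marginal of the middle block alone
    return sum(c / n * math.log(c * mid[k[1:-1]] / (left[k[:-1]] * right[k[1:]]))
               for k, c in joint.items())

c1, c2 = cmi(seq, 1), cmi(seq, 2)
print(c1, c2)                        # I_c(1) clearly positive, I_c(2) near zero
```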

  4. Markov chain modelling of pitting corrosion in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)

    2009-09-15

    A continuous-time, non-homogenous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.

  5. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    A continuous-time, non-homogenous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.

  6. An Approach of Diagnosis Based On The Hidden Markov Chains Model

    Directory of Open Access Journals (Sweden)

    Karim Bouamrane

    2008-07-01

    Full Text Available Diagnosis is a key element in industrial system maintenance process performance. A diagnosis tool is proposed that allows maintenance operators to capitalize on the knowledge of their trade and subdivide it for better performance improvement and intervention effectiveness within the maintenance process service. The tool is based on the Markov chain model and more precisely on Hidden Markov Chains (HMC), which have the advantage of determining system failures, taking into account the causal relations, modeling the stochastic context of their dynamics and providing relevant diagnosis help through their ability to use dubious information. Since the FMEA method is well adapted to this artificial intelligence field, the modeling with Markov chains is carried out with its assistance. Recently, a dynamic programming recursive algorithm, called the 'Viterbi algorithm', has been used in the Hidden Markov Chains field. This algorithm takes as input a set of observed system effects and generates as output the various causes having caused the loss of one or several system functions.
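
    The Viterbi recursion mentioned above is easy to state concretely. The two-cause, two-effect model below is invented for illustration and is not the tool's actual FMEA-derived model.

```python
# Minimal Viterbi decoder: most likely hidden cause sequence given observed
# effects.  States, observations and probabilities are hypothetical.
states = ["wear", "electrical"]
start = {"wear": 0.6, "electrical": 0.4}
trans = {"wear": {"wear": 0.7, "electrical": 0.3},
         "electrical": {"wear": 0.4, "electrical": 0.6}}
emit = {"wear": {"vibration": 0.8, "overheat": 0.2},
        "electrical": {"vibration": 0.3, "overheat": 0.7}}

def viterbi(obs):
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step
            p, prev = max((V[-2][q] * trans[q][s] * emit[s][o], q) for q in states)
            V[-1][s] = p
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

print(viterbi(["vibration", "overheat", "overheat"]))
```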

  7. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    A continuous-time Markov chain is used to model motion in the neighborhood of a critical invariant circle for a Hamiltonian map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. For the case of a noble frequency, the chain is self-similar and the nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to the critical noble circle, the decay proceeds as t^{-4.05}

  8. An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes

    Science.gov (United States)

    Kaplan, David

    2008-01-01

    This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…

  9. Comparison and converse comparison theorems for backward stochastic differential equations with Markov chain noise

    OpenAIRE

    Yang, Zhe; Ramarimbahoaka, Dimbinirina; Robert J. Elliott

    2016-01-01

    Comparison and converse comparison theorems are important parts of the research on backward stochastic differential equations. In this paper, we obtain comparison results for one dimensional backward stochastic differential equations with Markov chain noise, adapting previous results under simplified hypotheses. We introduce a type of nonlinear expectation, the $f$-expectation, which is an interpretation of the solution to a BSDE, and use it to establish a converse comparison theorem for the ...

  10. Recursive estimation of high-order Markov chains: Approximation by finite mixtures

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2016-01-01

    Roč. 326, č. 1 (2016), s. 188-201. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Markov chain * Approximate parameter estimation * Bayesian recursive estimation * Adaptive systems * Kullback–Leibler divergence * Forgetting Subject RIV: BC - Control Systems Theory Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2015/AS/karny-0447119.pdf

  11. Constructing 1/ω^α noise from reversible Markov chains

    Science.gov (United States)

    Erland, Sveinung; Greenwood, Priscilla E.

    2007-09-01

    This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.

  12. Bayesian Smoothing Algorithms in Partially Observed Markov Chains

    Science.gov (United States)

    Ait-el-Fquih, Boujemaa; Desbouvries, François

    2006-11-01

    Let x = {x_n}_{n∈ℕ} be a hidden process, y = {y_n}_{n∈ℕ} an observed process and r = {r_n}_{n∈ℕ} some auxiliary process. We assume that t = {t_n}_{n∈ℕ} with t_n = (x_n, r_n, y_{n-1}) is a (Triplet) Markov Chain (TMC). TMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMCs. We first propose twelve algorithms for general TMCs. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMCs of classical Kalman-like smoothing algorithms (originally designed for HMCs) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.

  13. Geometric allocation approaches in Markov chain Monte Carlo

    International Nuclear Information System (INIS)

    The Markov chain Monte Carlo method is a versatile tool in statistical physics to evaluate multi-dimensional integrals numerically. For the method to work effectively, we must consider the following key issues: the choice of ensemble, the selection of candidate states, the optimization of the transition kernel, and the algorithm for choosing a configuration according to the transition probabilities. We show that the unconventional approaches based on the geometric allocation of probabilities or weights can improve the dynamics and scaling of the Monte Carlo simulation in several aspects. Particularly, the approach using the irreversible kernel can reduce or sometimes completely eliminate the rejection of trial moves in the Markov chain. We also discuss how the space-time interchange technique together with Walker's method of aliases can reduce the computational time, especially for the case where the number of candidates is large, such as models with long-range interactions
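
    Walker's method of aliases, mentioned above as the device for O(1) selection among many candidate states, can be sketched as follows (the textbook construction, not the authors' code):

```python
import random

# Walker alias method: O(n) table construction, O(1) sampling per draw.
def build_alias(weights):
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]  # scaled so the average is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                 # overfull slot l tops up underfull slot s
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias):
    i = random.randrange(len(prob))  # pick a slot uniformly
    return i if random.random() < prob[i] else alias[i]

random.seed(0)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100000):
    counts[sample(prob, alias)] += 1
print([c / 100000 for c in counts])  # close to [0.1, 0.2, 0.3, 0.4]
```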

  14. Statistical significance test for transition matrices of atmospheric Markov chains

    Science.gov (United States)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
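
    The Monte Carlo significance idea can be sketched as a permutation test on a single matrix entry: shuffling the regime sequence destroys temporal order while preserving regime frequencies. The regime labels below are synthetic, not atmospheric data.

```python
import random

random.seed(7)
seq = list("AAABBBAABBBAAABBAAABBB")   # synthetic daily flow-regime labels

def count(seq, a, b):                  # number of a -> b transitions
    return sum(1 for x, y in zip(seq, seq[1:]) if (x, y) == (a, b))

observed = count(seq, "A", "A")
null = []
for _ in range(5000):                  # null: same labels in random order
    shuffled = seq[:]
    random.shuffle(shuffled)
    null.append(count(shuffled, "A", "A"))
p_value = sum(1 for c in null if c >= observed) / len(null)
print(observed, p_value)               # small p => A -> A persistence is real
```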

  15. Dynamics of market indices, Markov chains, and random walking problem

    CERN Document Server

    Krivoruchenko, M I

    2001-01-01

    Dynamics of the major USA market indices DJIA, S&P, Nasdaq, and NYSE is analyzed from the point of view of the random walking problem with two-step correlations of the market moves. The parameters characterizing the stochastic dynamics are determined empirically from the historical quotes for the daily, weekly, and monthly series. The results show existence of statistically significant correlations between the subsequent market moves. The weekly and monthly parameters are calculated in terms of the daily parameters, assuming that the Markov chains with two-step correlations give a complete description of the market stochastic dynamics. We show that the macro- and micro-parameters obey the renorm group equation. The comparison of the parameters determined from the renorm group equation with the historical values shows that the Markov chains approach gives reasonable predictions for the weekly quotes and underestimates the probability for continuation of the down trend in the monthly quotes. The return and ...

  16. On Dirichlet eigenvectors for neutral two-dimensional Markov chains

    CERN Document Server

    Champagnat, Nicolas; Miclo, Laurent

    2012-01-01

    We consider a general class of discrete, two-dimensional Markov chains modeling the dynamics of a population with two types, without mutation or immigration, and neutral in the sense that type has no influence on each individual's birth or death parameters. We prove that all the eigenvectors of the corresponding transition matrix or infinitesimal generator Π can be expressed as the product of "universal" polynomials of two variables, depending on each type's size but not on the specific transitions of the dynamics, and functions depending only on the total population size. These eigenvectors appear to be Dirichlet eigenvectors for Π on the complement of triangular subdomains, and as a consequence the corresponding eigenvalues are ordered in a specific way. As an application, we study the quasistationary behavior of finite, nearly neutral, two-dimensional Markov chains, absorbed in the sense that 0 is an absorbing state for each component of the process.

  17. Maximum entropy estimation of transition probabilities of reversible Markov chains

    OpenAIRE

    Erik Van der Straeten

    2009-01-01

    In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  18. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  19. Computational Discrete Time Markov Chain with Correlated Transition Probabilities

    OpenAIRE

    Peerayuth Charnsethikul

    2006-01-01

    This study presents a computational procedure for analyzing statistics of steady state probabilities in a discrete time Markov chain with correlations among their transition probabilities. The proposed model simply uses the first order Taylor's series expansion and statistical expected value properties to obtain the resulting linear matrix equations system. Computationally, the bottleneck is O(n^4) but can be improved by distributed and parallel processing. A preliminary computational experien...

  20. Mortgages and Markov Chains: A Simplified Evaluation Model

    OpenAIRE

    Paul Zipkin

    1993-01-01

    This paper has two purposes. The first is purely expository: to introduce stochastic interest-rate models and security-evaluation methods in a simple mathematical setting. Specifically, we assume the uncertainties in the model are represented by a discrete-time, finite-state Markov chain. Second, using this framework, we present a relatively simple model for the evaluation of mortgage-backed securities.
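
    In the simplest case, the evaluation style described above reduces to backward recursion over the rate-state chain. The two-state rates and cash flows below are invented, and prepayment behavior, the heart of mortgage modeling, is ignored here.

```python
import numpy as np

# Two interest-rate states (0 = low, 1 = high); all numbers illustrative.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # rate-state transition matrix
r = np.array([0.02, 0.06])          # one-period short rate in each state
T, coupon = 10, 5.0                 # periods to maturity, cash flow per period

V = np.zeros(2)                     # value at maturity
for _ in range(T):                  # discounted expected-value recursion
    V = (coupon + P @ V) / (1 + r)
print(V)                            # value today in each rate state
```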

  1. Model of life insurance policies using Markov chains with rewards

    Czech Academy of Sciences Publication Activity Database

    Sitař, Milan

    Bratislava : University of Economics, 2004 - (Lukáčik, M.), s. 179-186 ISBN 80-8078-012-9. [Quantitative Methods in Economics. Multiple Criteria Decision Making /12./. Virt (SK), 02.06.2004-04.06.2004] R&D Projects: GA ČR GA402/02/1015 Institutional research plan: CEZ:AV0Z1075907 Keywords : Markov chains with rewards * life insurance * mathematical reserve Subject RIV: BB - Applied Statistics, Operational Research

  2. Space system operations and support cost analysis using Markov chains

    Science.gov (United States)

    Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.

    1990-01-01

    This paper evaluates the use of Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.

  3. Risk-Sensitive Average Optimality in Markov Decision Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel; Montes-de-Oca, R.

    Berlin : Springer, 2008 - (Kalcsics, J.; Nickel, S.), s. 69-74 ISBN 978-3-540-77902-5. [Annual International Conference of the German Operations Research Society (GOR). Saarbruecken (DE), 05.09.2007-07.09.2007] R&D Projects: GA ČR GA402/05/0115; GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov decision chain * risk-sensitive optimality * asymptotical behaviour Subject RIV: AH - Economics

  4. A control chart using copula-based Markov chain models

    OpenAIRE

    Long, Ting-Hsuan; Emura, Takeshi

    2014-01-01

    Statistical process control is an important and convenient tool to stabilize the quality of manufactured goods and service operations. The traditional Shewhart control chart has been used extensively for process control, which is valid under the independence assumption of consecutive observations. In real world applications, there are many types of dependent observations in which the traditional control chart cannot be used. In this paper, we propose to apply a copula-based Markov chain to pe...

  5. Parallel Markov Chain Monte Carlo via Spectral Clustering

    OpenAIRE

    Basse, Guillaume W.; Pillai, Natesh S.; Smith, Aaron

    2016-01-01

    As it has become common to use many computer cores in routine applications, finding good ways to parallelize popular algorithms has become increasingly important. In this paper, we present a parallelization scheme for Markov chain Monte Carlo (MCMC) methods based on spectral clustering of the underlying state space, generalizing earlier work on parallelization of MCMC methods by state space partitioning. We show empirically that this approach speeds up MCMC sampling for multimodal distributio...

  6. ''adding'' algorithm for the Markov chain formalism for radiation transfer

    International Nuclear Information System (INIS)

    The Markov chain radiative transfer method of Esposito and House has been shown to be both efficient and accurate for calculation of the diffuse reflection from a homogeneous scattering planetary atmosphere. The use of a new algorithm similar to the ''adding'' formula of Hansen and Travis extends the application of this formalism to an arbitrarily deep atmosphere. The basic idea for this algorithm is to consider a preceding calculation as a single state of a new Markov chain. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that for a doubling calculation for a homogeneous atmosphere, but for a non-homogeneous atmosphere the new method is considerably faster than the standard ''adding'' routine. As with the standard ''adding'' method, the information on the internal radiation field is lost during the calculation. This method retains the advantage of the earlier Markov chain method that the time required is relatively insensitive to the number of illumination angles or observation angles for which the diffuse reflection is calculated. A technical write-up giving fuller details of the algorithm and a sample code are available from the author

  7. Fastest Mixing Markov Chain on Symmetric K-Partite Network

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Solving the fastest mixing Markov chain problem (i.e. finding transition probabilities on the edges that minimize the second largest eigenvalue modulus of the transition probability matrix) over networks with different topologies is one of the primary areas of research in computer science, and the K-partite network is one of the well-known networks in this context. In this work we present an analytical solution to the fastest mixing Markov chain problem, by means of stratification and semidefinite programming, for four particular types of K-partite networks, namely Symmetric K-PPDR, Semi Symmetric K-PPDR, Cycle K-PPDR and Semi Cycle K-PPDR networks. Our method is based on the convexity of the fastest mixing Markov chain problem and on inductive comparison of the characteristic polynomials initiated by slackness conditions in order to find the optimal transition probabilities. The presented results show that a Symmetric K-PPDR network and its equivalent Semi Symmetric K-PPDR network have the same SL...

  8. Land Use Change Modeling Using Cellular Automata-Markov Chain in the Mamminasata Region

    OpenAIRE

    Vera Damayanti Peruge, Tiur

    2012-01-01

    A study of land use change in the Mamminasata region was carried out using a Cellular Automata-Markov Chain model. The aim of this study was to analyze land use change from land use maps of the Mamminasata region for 2004 and 2009 in order to project 2012 land use with a Markov Chain, using Markov transition probability analysis. The results obtained were validated with the Kappa validation m...

  9. Reversible Markov chain estimation using convex-concave programming

    CERN Document Server

    Trendelkamp-Schroer, Benjamin; Noe, Frank

    2016-01-01

    We present a convex-concave reformulation of the reversible Markov chain estimation problem and outline an efficient numerical scheme for the solution of the resulting problem based on a primal-dual interior point method for monotone variational inequalities. Extensions to situations in which information about the stationary vector is available can also be solved via the convex-concave reformulation. The method can be generalized and applied to the discrete transition matrix reweighting analysis method to perform inference from independent chains with specified couplings between the stationary probabilities. The proposed approach offers a significant speed-up compared to a fixed-point iteration for a number of relevant applications.

  10. Large deviations for Markov chains in the positive quadrant

    International Nuclear Information System (INIS)

    The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,..., X(y,0)=y, in the positive quadrant. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=P(X(y,1) element of A): for some N≥0 the measure P(y,dx) depends only on x2, y2, and x1-y1 in the domain x1>N, y1>N, and only on x1, y1, and x2-y2 in the domain x2>N, y2>N. For such chains the asymptotic behaviour is found for a fixed set B as s→∞, |x|→∞, and n→∞. Some other conditions on the growth of parameters are also considered, for example, |x-y|→∞, |y|→∞. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities P(X(0,n) element of x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to the mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as applied issues, for such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects. The present paper is an attempt to find the extent to which an asymptotic analysis is possible for Markov chains of this type in their general form without using any special properties of the specific applications mentioned above. It turns out that such an analysis is quite

  11. Recurrence and invariant measure of Markov chains in double-infinite random environments

    Institute of Scientific and Technical Information of China (English)

    XING; Xiusan

    2001-01-01

    [1]Cogburn, R., Markov chains in random environments: The case of Markovian environments, Ann. Probab., 1980, 8(3): 908—916.[2]Cogburn, R., The ergodic theory of Markov chains in random environments, Z. W., 1984, 66(2): 109—128.[3]Orey, S., Markov chains with stochastically stationary transition probabilities, Ann. Probab., 1991, 19(3): 907—928.[4]Li Yingqiu, Some notes of Markov chains in Markov environments, Advances in Mathematics(in Chinese), 1999, 28(4): 358—360.

  12. Robust Dynamics and Control of a Partially Observed Markov Chain

    International Nuclear Information System (INIS)

    In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits; see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward in time dynamics for a simplified adjoint process. A stochastic minimum principle is established

  13. Inferring animal densities from tracking data using Markov chains.

    Directory of Open Access Journals (Sweden)

    Hal Whitehead

    Full Text Available The distributions and relative densities of species are key to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
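The core computation described above can be sketched as follows; the habitat cells, toy tracks (deliberately all starting in cell 0), and the small smoothing prior are invented for illustration:

```python
import numpy as np

def stationary_density(tracks, n_cells):
    """Estimate relative density over habitat cells from tracking data.

    Transitions between successive cells are counted across all tracks,
    row-normalized into a transition matrix, and the stationary
    distribution of the resulting Markov chain gives a relative density
    that does not depend on where the tracks happened to start.
    """
    counts = np.zeros((n_cells, n_cells))
    for track in tracks:
        for a, b in zip(track[:-1], track[1:]):
            counts[a, b] += 1
    # Row-normalize; a tiny prior keeps rows with no observations valid.
    P = (counts + 1e-9) / (counts + 1e-9).sum(axis=1, keepdims=True)
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pi / pi.sum()

# Toy tracks over 3 habitat cells, all released in cell 0.
tracks = [[0, 1, 2, 1, 2, 2], [0, 2, 2, 1, 2]]
print(stationary_density(tracks, 3))
```

The estimate weights cells by how the animals move among them, not by where tags were deployed, which is the source of the bias the abstract describes.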

  14. Dynamic temperature selection for parallel-tempering in Markov chain Monte Carlo simulations

    CERN Document Server

    Vousden, Will; Mandel, Ilya

    2015-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multi-modal probability distributions. Most popular methods, such as Markov chain Monte Carlo sampling, perform poorly on strongly multi-modal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve lower autocorrelation time for the sampler (and therefore more efficient sampling). In particular, we present a simple, easily-implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling in order to ...
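A minimal parallel-tempering sampler along these lines, on an invented bimodal 1-D target with a fixed geometric temperature ladder (the paper's dynamic ladder adaptation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy bimodal target: mixture of two well-separated unit Gaussians.
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

n_temps, n_steps = 5, 10000
betas = 1.0 / np.geomspace(1.0, 30.0, n_temps)  # geometric temperature ladder
x = np.zeros(n_temps)                           # one walker per temperature
samples = []

for step in range(n_steps):
    # Within-temperature Metropolis updates (hotter chains take bigger steps).
    for i in range(n_temps):
        prop = x[i] + rng.normal(0, 1.0 / np.sqrt(betas[i]))
        if np.log(rng.random()) < betas[i] * (log_target(prop) - log_target(x[i])):
            x[i] = prop
    # Propose a swap between a random adjacent pair of temperatures.
    i = rng.integers(n_temps - 1)
    log_alpha = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
    if np.log(rng.random()) < log_alpha:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                        # keep only the cold chain

samples = np.asarray(samples[1000:])            # discard burn-in
print("fraction in right mode:", np.mean(samples > 0))
```

Without the hot chains and swap moves, a single Metropolis chain started in one mode would almost never visit the other; with tempering, the cold chain spends roughly half its time in each mode.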

  15. A Markov chain model for CANDU feeder pipe degradation

    International Nuclear Information System (INIS)

    There is a need for a risk-based approach to managing feeder pipe degradation, to ensure safe operation by minimizing nuclear safety risk. The current lack of understanding of some fundamental degradation mechanisms results in uncertainty in predicting the rupture frequency. There are still concerns caused by uncertainties in the inspection techniques and engineering evaluations, which should be addressed in the current procedures. A probabilistic approach is therefore useful in quantifying the risk and also provides a tool for risk-based decision making. This paper discusses the application of a Markov chain model to feeder pipes in order to predict and manage the risks associated with existing and future aging-related feeder degradation mechanisms. The major challenge in the approach is the lack of service data for characterizing the transition probabilities of the Markov model. The paper also discusses various approaches to estimating plant-specific degradation rates. (author)

  16. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    International Nuclear Information System (INIS)

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a ''proof of concept'', and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user

  17. DREAM(D): an adaptive Markov Chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, and combinatorial posterior parameter estimation problems

    Directory of Open Access Journals (Sweden)

    C. J. F. Ter Braak

    2011-12-01

    Full Text Available Formal and informal Bayesian approaches have found widespread implementation and use in environmental modeling to summarize parameter and predictive uncertainty. Successful implementation of these methods relies heavily on the availability of efficient sampling methods that approximate, as closely and consistently as possible, the (evolving) posterior target distribution. Much of this work has focused on continuous variables that can take on any value within their prior defined ranges. Here, we introduce theory and concepts of a discrete sampling method that resolves the parameter space at fixed points. This new code, entitled DREAM(D), uses the recently developed DREAM algorithm (Vrugt et al., 2008, 2009a, b) as its main building block but implements two novel proposal distributions to help solve discrete and combinatorial optimization problems. This novel MCMC sampler maintains detailed balance and ergodicity, and is especially designed to resolve the emerging class of optimal experimental design problems. Three different case studies involving a Sudoku puzzle, soil water retention curve, and rainfall–runoff model calibration problem are used to benchmark the performance of DREAM(D). The theory and concepts developed herein can be easily integrated into other (adaptive) MCMC algorithms.

  18. Orlicz integrability of additive functionals of Harris ergodic Markov chains

    CERN Document Server

    Adamczak, Radosław

    2012-01-01

    For a Harris ergodic Markov chain $(X_n)_{n\\ge 0}$, on a general state space, started from the so called small measure or from the stationary distribution we provide optimal estimates for Orlicz norms of sums $\\sum_{i=0}^\\tau f(X_i)$, where $\\tau$ is the first regeneration time of the chain. The estimates are expressed in terms of other Orlicz norms of the function $f$ (wrt the stationary distribution) and the regeneration time $\\tau$ (wrt the small measure). We provide applications to tail estimates for additive functionals of the chain $(X_n)$ generated by unbounded functions as well as to classical limit theorems (CLT, LIL, Berry-Esseen).

  19. Mixed Vehicle Flow At Signalized Intersection: Markov Chain Analysis

    Directory of Open Access Journals (Sweden)

    Gertsbakh Ilya B.

    2015-09-01

    Full Text Available We assume that a Poisson flow of vehicles arrives at an isolated signalized intersection, and each vehicle, independently of the others, represents a random number X of passenger car units (PCUs). We analyze numerically the stationary distribution of the queue process {Zn}, where Zn is the number of PCUs in the queue at the beginning of the n-th red phase, n → ∞. We approximate the number Yn of PCUs arriving during one red-green cycle by a two-parameter Negative Binomial Distribution (NBD). It is well known that {Zn} follows an infinite-state Markov chain. We approximate its stationary distribution using a finite-state Markov chain. We show numerically that there is a strong dependence of the mean queue length E[Zn] in equilibrium on the input distribution of Yn and, in particular, on the "over-dispersion" parameter γ = Var[Yn]/E[Yn]. For Poisson input, γ = 1; γ > 1 indicates the presence of heavy-tailed input. In practice this means that a relatively large "portion" of PCUs, considerably exceeding the average, may arrive with high probability during one red-green cycle. Empirical formulas are presented for an accurate estimation of the mean queue length as a function of the load and γ of the input flow. Using the Markov chain technique, we analyze the mean "virtual" delay time for a car which always arrives at the beginning of the red phase.
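The finite-state approximation of the queue chain can be sketched as follows; the NBD parameters (mean 8 PCUs per cycle, over-dispersion γ = 3), the service capacity of 10 PCUs per cycle, and the truncation level are invented for illustration:

```python
import numpy as np

def nbd_pmf(kmax, r, p):
    """Negative Binomial pmf on 0..kmax (failures before the r-th success)."""
    pmf = np.zeros(kmax + 1)
    pmf[0] = p ** r
    for k in range(kmax):
        pmf[k + 1] = pmf[k] * (k + r) / (k + 1) * (1 - p)
    return pmf

def mean_queue(r, p, served_per_cycle, max_state=200):
    """Stationary mean of Z_{n+1} = max(Z_n + Y_n - s, 0) with Y_n ~ NBD(r, p)
    arrivals per red-green cycle and s PCUs served per cycle.
    The infinite-state chain is truncated at max_state."""
    y = nbd_pmf(max_state, r, p)
    n = max_state + 1
    P = np.zeros((n, n))
    for z in range(n):
        for k in range(n):
            nxt = min(max(z + k - served_per_cycle, 0), max_state)
            P[z, nxt] += y[k]
        P[z] /= P[z].sum()        # fold the truncated tail mass back in
    # Stationary distribution by power iteration.
    pi = np.full(n, 1.0 / n)
    for _ in range(5000):
        pi = pi @ P
    return pi @ np.arange(n)

# NBD mean = r(1-p)/p = 8, Var = r(1-p)/p^2 = 24, so γ = Var/mean = 3.
print("mean queue (PCUs):", mean_queue(r=4, p=1 / 3, served_per_cycle=10))
```

Re-running with a Poisson input of the same mean (γ = 1) shows the strong dependence of E[Zn] on over-dispersion that the abstract reports.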

  20. SDI and Markov Chains for Regional Drought Characteristics

    Directory of Open Access Journals (Sweden)

    Chen-Feng Yeh

    2015-08-01

    Full Text Available In recent years, global climate change has altered precipitation patterns, causing uneven spatial and temporal distribution of precipitation that gradually induces precipitation polarization phenomena. Taiwan is located in the subtropical climate zone, with distinct wet and dry seasons, which makes the polarization phenomenon more obvious; this has also led to a large difference between river flows during the wet and dry seasons, which is significantly influenced by precipitation, resulting in hydrological drought. Therefore, to effectively address the growing issue of water shortages, it is necessary to explore and assess the drought characteristics of river systems. In this study, the drought characteristics of northern Taiwan were studied using the streamflow drought index (SDI and Markov chains. Analysis results showed that the year 2002 was a turning point for drought severity in both the Lanyang River and Yilan River basins; the severity of rain events in the Lanyang River basin increased after 2002, and the severity of drought events in the Yilan River basin exhibited a gradual upward trend. In the study of drought severity, analysis results from periods of three months (November to January and six months (November to April have shown significant drought characteristics. In addition, analysis of drought occurrence probabilities using the method of Markov chains has shown that the occurrence probabilities of drought events are higher in the Lanyang River basin than in the Yilan River basin; particularly for extreme events, the occurrence probability of an extreme drought event is 20.6% during the dry season (November to April in the Lanyang River basin, and 3.4% in the Yilan River basin. This study shows that for analysis of drought/wet occurrence probabilities, the results obtained for the drought frequency and occurrence probability using short-term data with the method of Markov chains can be used to predict the long-term occurrence
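The Markov chain step of such an analysis, estimating long-run occurrence probabilities of drought states from a classified index series, can be sketched as follows; the monthly state sequence is invented for illustration and is not the Taiwan SDI data:

```python
import numpy as np

# Hypothetical monthly SDI series classified into states:
# 0 = non-drought, 1 = moderate drought, 2 = extreme drought.
states = [0, 0, 1, 1, 0, 2, 1, 0, 0, 1, 2, 2, 1, 0, 0, 0, 1, 1, 2, 1, 0, 0]

n = 3
counts = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # transition probabilities

# Long-run occurrence probability of each state = stationary distribution,
# obtained here by power iteration.
pi = np.full(n, 1.0 / n)
for _ in range(1000):
    pi = pi @ P
print("occurrence probabilities:", pi)
print("P(extreme drought):", pi[2])
```

This is the sense in which short-term data can support long-term occurrence estimates: the transition matrix is estimated from a short record, and the stationary distribution extrapolates its long-run behaviour.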

  1. LISA data analysis using Markov chain Monte Carlo methods

    International Nuclear Information System (INIS)

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions

  2. Renewal Theory for Markov Chains on the Real Line

    OpenAIRE

    Keener, Robert W.

    1982-01-01

    Standard renewal theory is concerned with expectations related to sums of positive i.i.d. variables, $S_n = \\sum^n_{i=1} Z_i$. We generalize this theory to the case where $\\{S_i\\}$ is a Markov chain on the real line with stationary transition probabilities satisfying a drift condition. The expectations we are concerned with satisfy generalized renewal equations, and in our main theorems, we show that these expectations are the unique solutions of the equations they satisfy.

  3. Second Order Optimality in Transient and Discounted Markov Decision Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Plzeň: University of West Bohemia, Plzeň, 2015, s. 731-736. ISBN 978-80-261-0539-8. [Mathematical Methods in Economics 2015 /33./. Cheb (CZ), 09.09.2015-11.09.2015] R&D Projects: GA ČR GA13-14445S; GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : dynamic programming * discounted and transient Markov reward chains * reward-variance optimality Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2015/E/sladky-0448938.pdf

  4. Topological charge evolution in the Markov chain of QCD

    International Nuclear Information System (INIS)

    The topological charge is studied on lattices of large physical volume and fine lattice spacing. We illustrate how a parity transformation on the SU(3) link-variables of lattice gauge configurations reverses the sign of the topological charge and leaves the action invariant. Random applications of the parity transformation are proposed to traverse from one topological charge sign to the other. The transformation provides an improved unbiased estimator of the ensemble average and is essential in improving the ergodicity of the Markov chain process

  5. Markov chain model for particle migration at the repository scale

    International Nuclear Information System (INIS)

    A model for particle migration at multiple scales is developed using the Markov chain probability model. The goal of the model is to enable analyses of radionuclide migration at the repository scale based on information obtained in smaller-scale detailed analyses by other models. The geologic domain is divided into an array of compartments, and particle migration is simulated by transitions from one compartment to another based on transition probabilities. Nuclide transport in a hypothetical repository with heterogeneous flow due to random connectivity between compartments is demonstrated. In comparison with the analytical continuum model of mass transport, the results from the present model show good agreement. (author)
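A compartment-transition model of this kind can be sketched as follows; the one-dimensional compartment layout and the advection/diffusion probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain of compartments 0..4; compartment 4 is the boundary
# where nuclides exit. Each step a particle moves forward with probability
# 0.6, stays with 0.3, and moves backward with 0.1.
n = 5
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i] = 0.3
    P[i, min(i + 1, n - 1)] += 0.6
    P[i, max(i - 1, 0)] += 0.1
P[n - 1, n - 1] = 1.0            # absorbing exit compartment

# Expected number of transitions to exit, via the fundamental matrix
# N = (I - Q)^-1, where Q is P restricted to the transient compartments.
Q = P[:-1, :-1]
N = np.linalg.inv(np.eye(n - 1) - Q)
expected = N.sum(axis=1)
print("expected transitions to exit:", expected)

# Monte Carlo check for a particle released in compartment 0.
steps = []
for _ in range(5000):
    s, t = 0, 0
    while s != n - 1:
        s = rng.choice(n, p=P[s])
        t += 1
    steps.append(t)
print("simulated mean from compartment 0:", np.mean(steps))
```

The analytic and simulated travel times agree, mirroring the abstract's comparison of the compartment model against a continuum transport solution.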

  6. Imputing unknown competitor marketing activity with a Hidden Markov Chain

    OpenAIRE

    Haughton, Dominique; Hua, Guangying; Jin, Danny; Lin, John; Wei, Qizhi; Zhang, Changan

    2014-01-01

    We demonstrate on a case study with two competing products at a bank how one can use a Hidden Markov Chain (HMC) to estimate missing information on a competitor's marketing activity. The idea is that given time series with sales volumes for products A and B and marketing expenditures for product A, as well as suitable predictors of sales for products A and B, we can infer at each point in time whether it is likely or not that marketing activities took place for product B. The method is succes...

  7. MARKOV CHAIN MODELING OF PERFORMANCE DEGRADATION OF PHOTOVOLTAIC SYSTEM

    Directory of Open Access Journals (Sweden)

    E. Suresh Kumar

    2012-01-01

    Full Text Available Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, in a sequence of chance experiments all of the past outcomes could influence the predictions for the next experiment. In a Markov chain type of process, the outcome of a given experiment can affect the outcome of the next experiment. The system state changes with time, and the state X and time t are two random variables. Each of these variables can be either continuous or discrete. Various kinds of degradation of photovoltaic (PV) systems can be viewed as different Markov states, and further degradation can be treated as the outcome of the present state. The PV system is treated as a discrete-state continuous-time system with four possible states, namely, s1: good condition; s2: system with partial degradation failures but fully operational; s3: system with major faults and partially working, and hence partial output power; s4: system completely failed. The calculation of the reliability of the photovoltaic system is complicated, since the system has elements or subsystems exhibiting dependent failures and involving repair and standby operations. The Markov model is a technique that has much appeal and works well when failure hazards and repair hazards are constant. The usual reliability analysis techniques include FMEA (failure mode and effect analysis), parts count analysis, RBD (reliability block diagram), FTA (fault tree analysis), etc. These are logical, Boolean, and block diagram approaches, and never account for environmental degradation of the performance of the system. This is especially relevant in the case of PV systems, which are operated under harsh environmental conditions. This paper is an insight into the degradation of performance of PV systems, and presents a Markov model of the system by means of the different states and transitions between these states.
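A four-state continuous-time Markov model of this kind can be sketched as follows; all degradation and repair rates are invented for illustration, and the matrix exponential is approximated by small Euler steps rather than computed exactly:

```python
import numpy as np

# Hypothetical rates (per year) between the four states
# s1 good, s2 partial degradation, s3 major faults, s4 failed.
l12, l23, l34, l14 = 0.20, 0.10, 0.08, 0.01   # degradation rates (assumed)
m21, m32 = 0.05, 0.02                         # repair rates (assumed)

# Generator matrix Q: off-diagonal entries are transition rates,
# diagonal entries make each row sum to zero.
Q = np.array([
    [-(l12 + l14), l12,           0.0,          l14],
    [m21,          -(m21 + l23),  l23,          0.0],
    [0.0,          m32,           -(m32 + l34), l34],
    [0.0,          0.0,           0.0,          0.0],   # s4 absorbing
])

def state_probs(t, steps=20000):
    """State probabilities p(t) = p(0) expm(Q t), via repeated Euler steps
    p <- p (I + Q dt), starting from good condition."""
    dt = t / steps
    p = np.array([1.0, 0.0, 0.0, 0.0])
    T = np.eye(4) + Q * dt
    for _ in range(steps):
        p = p @ T
    return p

p20 = state_probs(20.0)
print("P(state) after 20 years:", p20)
print("P(system failed):", p20[3])
```

Because the failure and repair hazards are constant, this Markov treatment captures gradual environmental degradation that the purely logical FMEA/RBD/FTA approaches cannot.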

  8. Probabilistic approach of water residence time and connectivity using Markov chains with application to tidal embayments

    Science.gov (United States)

    Bacher, C.; Filgueira, R.; Guyondet, T.

    2016-01-01

    Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence time, first passage time, rates of transfer between nodes, and number of passages in a node. We propose to adapt an algorithm already published for simple systems to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the Eastern Coast of Canada that are of interest for shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing probabilities of transition between elements and assessing the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that the Markov chain approach is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g., depletion index and carrying capacity assessment. The Markov chain approach has several advantages with respect to the estimation of connectivity between pairs of sites. It makes it possible to estimate transfer rates and times at once in a very quick and efficient way, without the need to perform long-term simulations of particle or tracer concentration.

  9. Simulation of daily rainfall through markov chain modeling

    International Nuclear Information System (INIS)

    In an agricultural country, the inhabitants of dry land in cultivated areas mainly rely on daily rainfall for watering their fields. A stochastic model based on a first-order Markov chain was developed to simulate daily rainfall data for Multan, D. I. Khan, Nawabshah, Chilas and Barkhan for the period 1981-2010. Transition probability matrices of the first-order Markov chain were utilized to generate daily rainfall occurrence, while a gamma distribution was used to generate the daily rainfall amount. In order to obtain the parametric values for the mentioned cities, the method of moments was used to estimate the shape and scale parameters, which lead to synthetic sequence generation as per the gamma distribution. In this study, unconditional and conditional probabilities of wet and dry days, together with means and standard deviations, are considered the essential parameters for the simulated stochastic generation of daily rainfall. It was found that the synthetic rainfall series agreed well with the actual observed rainfall series. (author)
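The two ingredients of this generator, a first-order chain for wet/dry occurrence and a gamma distribution for wet-day amounts, can be sketched as follows; the transition probabilities and gamma parameters are invented for illustration, not fitted to the stations named above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed first-order transition probabilities (invented):
p_wd = 0.25   # P(wet tomorrow | dry today)
p_ww = 0.60   # P(wet tomorrow | wet today)
shape, scale = 0.8, 12.0   # gamma parameters for wet-day amounts (mm)

def simulate_rainfall(n_days):
    """Simulate a daily rainfall series: a two-state Markov chain decides
    wet/dry occurrence; a gamma draw supplies the amount on wet days."""
    wet, series = False, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        series.append(rng.gamma(shape, scale) if wet else 0.0)
    return np.asarray(series)

rain = simulate_rainfall(365 * 30)
print("wet-day fraction:", np.mean(rain > 0))
print("mean wet-day amount (mm):", rain[rain > 0].mean())
```

In practice p_wd, p_ww, shape and scale would be estimated from the station record (here by the method of moments for the gamma parameters), and the simulated series compared against the observed one.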

  10. Efficient Parallel Learning of Hidden Markov Chain Models on SMPs

    Science.gov (United States)

    Li, Lei; Fu, Bin; Faloutsos, Christos

    Quad-core CPUs have become a common desktop configuration in today's office. The increasing number of processors on a single chip opens new opportunity for parallel computing. Our goal is to make use of the multi-core as well as multi-processor architectures to speed up large-scale data mining algorithms. In this paper, we present a general parallel learning framework, Cut-And-Stitch, for training hidden Markov chain models. Particularly, we propose two model-specific variants, CAS-LDS for learning linear dynamical systems (LDS) and CAS-HMM for learning hidden Markov models (HMM). Our main contribution is a novel method to handle the data dependencies due to the chain structure of hidden variables, so as to parallelize the EM-based parameter learning algorithm. We implement CAS-LDS and CAS-HMM using OpenMP on two supercomputers and a quad-core commercial desktop. The experimental results show that parallel algorithms using Cut-And-Stitch achieve comparable accuracy and almost linear speedups over the traditional serial version.

  11. A Markov chain model for reliability growth and decay

    Science.gov (United States)

    Siegrist, K.

    1982-01-01

    A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure' and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
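The special case can be sketched as follows; the outcome probabilities and the redesign rule are invented for illustration, and the state chain is obtained by marginalizing over outcomes exactly as the assumptions above allow:

```python
import numpy as np

# Hypothetical outcome probabilities given the current state.
# Outcomes: 0 inherent failure, 1 assignable-cause failure, 2 success.
outcome_given_state = np.array([
    [0.05, 0.25, 0.70],   # state 0: 'unrepaired'
    [0.05, 0.05, 0.90],   # state 1: 'repaired'
])

# Next state given current state and outcome: an assignable-cause failure
# triggers a redesign (move to 'repaired'); an inherent failure or a
# success leaves the state unchanged.
def next_state(s, outcome):
    return 1 if outcome == 1 else s

# Transition matrix of the state chain, marginalizing over outcomes.
P = np.zeros((2, 2))
for s in range(2):
    for o in range(3):
        P[s, next_state(s, o)] += outcome_given_state[s, o]

# Steady-state distribution and long-run reliability (success probability).
pi = np.array([1.0, 0.0])
for _ in range(1000):
    pi = pi @ P
reliability = pi @ outcome_given_state[:, 2]
print("steady state:", pi, "long-run reliability:", reliability)
```

Under this particular redesign rule the 'repaired' state is absorbing, so the chain exhibits reliability growth: the long-run success probability rises to the repaired-state value of 0.90.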

  12. Radiative transfer calculated from a Markov chain formalism

    International Nuclear Information System (INIS)

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard ''doubling'' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.

  13. Radiative transfer calculated from a Markov chain formalism

    Science.gov (United States)

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
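
    The reduction to a linear system can be illustrated with a toy absorbing Markov chain (a sketch of the principle only, not the authors' formalism; transient states stand in for internal scattering "layers", absorbing states for escape by reflection or transmission, and all numbers are invented):

```python
import numpy as np

# Transient-to-transient scattering probabilities between two internal layers
Q = np.array([[0.2, 0.3],
              [0.3, 0.2]])
# Transient-to-absorbing probabilities: layer -> (reflect upward, transmit down)
R = np.array([[0.4, 0.1],
              [0.1, 0.4]])

# Fundamental matrix N = (I - Q)^-1 gives expected visits before escape;
# B = N R gives the probability of each escape mode from each starting layer.
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R
print(B)   # rows sum to 1: the photon eventually escapes one way or the other
```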

  14. On Markov Chains Induced by Partitioned Transition Probability Matrices

    Institute of Scientific and Technical Information of China (English)

    Thomas KAIJSER

    2011-01-01

    Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P. Let K denote the set of probability vectors on S. With every partition M of P we can associate a transition probability function PM on K defined in such a way that if p ∈ K and M ∈ M are such that ‖pM‖ > 0, then, with probability ‖pM‖, the vector p is transferred to the vector pM/‖pM‖. Here ‖·‖ denotes the l1-norm. In this paper we investigate the convergence in distribution for Markov chains generated by transition probability functions induced by partitions of transition probability matrices. The main motivation for this investigation is the application of the convergence results obtained to filtering processes of partially observed Markov chains with denumerable state space.
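
    The induced random map on probability vectors can be sketched on a finite toy example (all matrices invented): a partition {M1, M2} with M1 + M2 = P sends p to pM/‖pM‖ with probability ‖pM‖, exactly as in a filtering update for a partially observed chain.

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
M1 = np.array([[0.5, 0.0],
               [0.2, 0.0]])      # e.g. transitions producing observation 1
M2 = P - M1                      # transitions producing observation 2

p = np.array([0.5, 0.5])         # a probability vector on S

for M in (M1, M2):
    w = (p @ M).sum()            # ‖pM‖ in the l1-norm (entries nonnegative)
    if w > 0:
        print(w, (p @ M) / w)    # selection probability and new vector pM/‖pM‖
```

    Note that the selection probabilities across the partition sum to 1, so the map is itself a transition probability function on the probability vectors.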

  15. SATMC: Spectral Energy Distribution Analysis Through Markov Chains

    CERN Document Server

    Johnson, S P; Tang, Y; Scott, K S

    2013-01-01

    We present the general purpose spectral energy distribution (SED) fitting tool SED Analysis Through Markov Chains (SATMC). Utilizing Monte Carlo Markov Chain (MCMC) algorithms, SATMC fits an observed SED to SED templates or models of the user's choice to infer intrinsic parameters, generate confidence levels and produce the posterior parameter distribution. Here we describe the key features of SATMC from the underlying MCMC engine to specific features for handling SED fitting. We detail several test cases of SATMC, comparing results obtained to traditional least-squares methods, which highlight its accuracy, robustness and wide range of possible applications. We also present a sample of submillimetre galaxies that have been fitted using the SED synthesis routine GRASIL as input. In general, these SMGs are shown to occupy a large volume of parameter space, particularly in regards to their star formation rates which range from ~30-3000 M_sun yr^-1 and stellar masses which range from ~10^10-10^12 M_sun. Taking a...
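
    A generic random-walk Metropolis step of the kind underlying such MCMC fitting engines can be sketched as follows (this is not SATMC's code; the linear "SED" model, synthetic data, and step size are invented stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
true_a = 2.0
y = true_a * x + rng.normal(0, 0.1, x.size)    # synthetic "observed" data

def log_post(a):                               # flat prior, Gaussian errors
    return -0.5 * np.sum((y - a * x) ** 2) / 0.1 ** 2

a, lp = 0.0, log_post(0.0)
chain = []
for _ in range(5000):
    a_prop = a + rng.normal(0, 0.1)            # symmetric random-walk proposal
    lp_prop = log_post(a_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        a, lp = a_prop, lp_prop
    chain.append(a)

print(np.mean(chain[1000:]))                   # posterior mean near true_a
```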

  16. Maximum Likelihood Estimation in Gaussian Chain Graph Models under the Alternative Markov Property

    OpenAIRE

    Drton, Mathias; Eichler, Michael

    2005-01-01

    The AMP Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced LWF Markov property that is coherent with data-generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportiona...

  17. On the relation between recurrence and ergodicity properties in denumerable Markov decision chains

    NARCIS (Netherlands)

    R. Dekker (Rommert); A. Hordijk (Arie); F.M. Spieksma

    1994-01-01

    textabstractThis paper studies two properties of the set of Markov chains induced by the deterministic policies in a Markov decision chain. These properties are called μ-uniform geometric ergodicity and μ-uniform geometric recurrence. μ-uniform ergodicity generalises a quasi-compactness condition. I

  18. Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains

    OpenAIRE

    Han, G; Marcus, BH

    2008-01-01

    We derive an asymptotic formula for entropy rate of a hidden Markov chain under certain parameterizations. We also discuss applications of the asymptotic formula to the asymptotic behaviors of entropy rate of hidden Markov chains as outputs of certain channels, such as binary symmetric channel, binary erasure channel, and some special Gilbert-Elliot channel. © 2006 IEEE.

  19. A Markov Chain Estimator of Multivariate Volatility from High Frequency Data

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume; Lunde, Asger; Archakov, Ilya

    We introduce a multivariate estimator of financial volatility that is based on the theory of Markov chains. The Markov chain framework takes advantage of the discreteness of high-frequency returns. We study the finite sample properties of the estimation in a simulation study and apply it to...

  20. Analysis of Users Web Browsing Behavior Using Markov chain Model

    Directory of Open Access Journals (Sweden)

    Diwakar Shukla

    2011-03-01

    Full Text Available In the present era of growing information technology, many browsers are available for surfing and web mining, and a user may choose any of them at a time to reach a desired website. Every browser has a pre-defined level of popularity and reputation in the market. This paper considers a setup with only two browsers in a computer system: the user prefers one of them and, if it fails, switches to the other. The user's behavior is modeled by a Markov chain procedure and the transition probabilities are calculated. Quitting the browsing session is treated as a parameter of variation over the popularity. A graphical study is performed to explain the interrelationship between user-behavior parameters and browser market-popularity parameters. If a company's browser has the lowest failure rate and the lowest quitting probability, then that company enjoys better popularity and a larger user proportion.
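
    The setup can be sketched as a three-state chain with quitting as an absorbing state (the failure rates f1, f2 and quitting probability q below are invented parameters, not the paper's values):

```python
import numpy as np

f1, f2, q = 0.3, 0.2, 0.1   # browser failure rates and quit probability

# On a failure the user switches browsers with probability 1-q, quits with q.
P = np.array([
    [1 - f1, f1 * (1 - q), f1 * q],   # using browser 1
    [f2 * (1 - q), 1 - f2, f2 * q],   # using browser 2
    [0.0,          0.0,    1.0],      # quit (absorbing)
])

Q = P[:2, :2]                          # transient part of the chain
N = np.linalg.inv(np.eye(2) - Q)       # fundamental matrix
t = N.sum(axis=1)                      # mean sessions before quitting
print(t)                               # longer when starting on the more reliable browser
```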

  1. On the multi-level solution algorithm for Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Horton, G. [Univ. of Erlangen, Nuernberg (Germany)

    1996-12-31

    We discuss the recently introduced multi-level algorithm for the steady-state solution of Markov chains. The method is based on the aggregation principle, which is well established in the literature. Recursive application of the aggregation yields a multi-level method which has been shown experimentally to give results significantly faster than the methods currently in use. The algorithm can be reformulated as an algebraic multigrid scheme of Galerkin-full approximation type. The uniqueness of the scheme stems from its solution-dependent prolongation operator which permits significant computational savings in the evaluation of certain terms.
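
    The aggregation principle can be illustrated on a toy nearly block-decoupled chain (a sketch of the idea only, not the multi-level algorithm itself; the matrix and partition are invented): states are grouped into blocks, the small aggregated chain is solved exactly, and its solution rescales the fine-level probabilities between smoothing steps.

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.01, 0.01],
              [0.10, 0.88, 0.01, 0.01],
              [0.02, 0.02, 0.90, 0.06],
              [0.02, 0.02, 0.10, 0.86]])
blocks = [[0, 1], [2, 3]]                     # aggregation partition

pi = np.full(4, 0.25)
for _ in range(200):
    pi = pi @ P                               # fine-level smoothing step
    A = np.zeros((2, 2))                      # aggregated transition matrix
    for I, bi in enumerate(blocks):
        w = pi[bi] / pi[bi].sum()             # current within-block weights
        for J, bj in enumerate(blocks):
            A[I, J] = w @ P[np.ix_(bi, bj)].sum(axis=1)
    vals, vecs = np.linalg.eig(A.T)           # coarse stationary vector
    c = np.real(vecs[:, np.argmax(np.real(vals))])
    c /= c.sum()
    for I, bi in enumerate(blocks):           # disaggregate: rescale blocks
        pi[bi] *= c[I] / pi[bi].sum()

print(pi)                                     # pi ≈ pi P
```

    The coarse solve removes exactly the slow inter-block mode that makes plain power iteration converge slowly on such chains.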

  2. Kinetics and thermodynamics of first-order Markov chain copolymerization

    Science.gov (United States)

    Gaspard, P.; Andrieux, D.

    2014-07-01

    We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.

  3. Kinetics and thermodynamics of first-order Markov chain copolymerization

    Energy Technology Data Exchange (ETDEWEB)

    Gaspard, P.; Andrieux, D. [Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, Code Postal 231, Campus Plaine, B-1050 Brussels (Belgium)

    2014-07-28

    We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.

  4. Kinetics and thermodynamics of first-order Markov chain copolymerization

    International Nuclear Information System (INIS)

    We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer
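
    The growth and depolymerization regimes described above can be seen in a toy kinetic Monte Carlo sketch (all rate values invented, not the paper's model): attachment and detachment rates depend only on the terminal (reacting) monomer, and the mean growth velocity changes sign between the two regimes.

```python
import numpy as np

rng = np.random.default_rng(6)

def growth_velocity(k_plus, k_minus, steps=50000):
    chain = [0] * 20000              # start from a long chain of monomer 0
    net = 0
    for _ in range(steps):
        tip = chain[-1]
        r_a0, r_a1 = k_plus[tip]                       # attach monomer 0 or 1
        r_d = k_minus[tip] if len(chain) > 1 else 0.0  # detach the tip monomer
        r = rng.uniform(0, r_a0 + r_a1 + r_d)
        if r < r_a0:
            chain.append(0); net += 1
        elif r < r_a0 + r_a1:
            chain.append(1); net += 1
        else:
            chain.pop(); net -= 1
    return net / steps               # mean monomers gained per event

k_plus = {0: (1.0, 0.5), 1: (0.5, 1.0)}
v_grow = growth_velocity(k_plus, {0: 0.3, 1: 0.3})    # growth regime: v > 0
v_shrink = growth_velocity(k_plus, {0: 3.0, 1: 3.0})  # depolymerization: v < 0
print(v_grow, v_shrink)
```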

  5. Application of the Markov chain approximation to the sunspot observations

    International Nuclear Information System (INIS)

    The positions of the 13,588 sunspot groups observed during the cycle of 1950-1960 at the Istanbul University Observatory have been corrected for the effect of differential rotation. The probability of evolution of a sunspot group into another one in the same region has been determined. By using the Markov chain approximation, the types of these groups and their transition probabilities during the following activity cycle (1960-1970), and the concentration of active regions during 1950-1960, have been estimated. The transition probabilities from the observations of the activity cycle 1960-1970 have been compared with the predicted transition probabilities and a good correlation has been noted. 5 refs.; 2 tabs
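
    Estimating transition probabilities from an observed sequence of categorical states, as done here for sunspot group types, amounts to normalized transition counts (the sequence below is synthetic; 'A', 'B', 'C' are hypothetical stand-ins for group types):

```python
import numpy as np

seq = list("AABABCCCABBBCAACCB")          # synthetic observed state sequence
states = sorted(set(seq))
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for a, b in zip(seq, seq[1:]):            # count one-step transitions
    counts[idx[a], idx[b]] += 1

# Row-normalizing the counts gives the maximum-likelihood estimate of P.
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(P_hat)
```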

  6. On the Multilevel Solution Algorithm for Markov Chains

    Science.gov (United States)

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.

  7. Projection methods for the numerical solution of Markov chain models

    Science.gov (United States)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi,...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
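
    A minimal Arnoldi sketch of this idea follows: project A = P^T onto the Krylov subspace span(v, Av, ..., A^(m-1)v) and take the Ritz vector for the eigenvalue closest to 1 as an approximate stationary distribution (toy sizes and a random chain; a sketch of the principle, not a production Krylov solver):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)          # random stochastic matrix
A = P.T

V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = np.ones(n) / np.sqrt(n)
for j in range(m):                         # Arnoldi iteration
    w = A @ V[:, j]
    for i in range(j + 1):                 # orthogonalize against the basis
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

vals, vecs = np.linalg.eig(H[:m, :m])      # small projected eigenproblem
k = int(np.argmin(np.abs(vals - 1)))
pi = np.real(V[:, :m] @ vecs[:, k])        # lift the Ritz vector back
pi /= pi.sum()
print(np.linalg.norm(pi @ P - pi))         # small residual: pi ≈ pi P
```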

  8. Solution of the Markov chain for the dead time problem

    International Nuclear Information System (INIS)

    A method for solving the equation for the Markov chain, describing the effect of a non-extendible dead time on the statistics of time correlated pulses, is discussed. The equation, which was derived in an earlier paper, describes a non-linear process and is not amenable to exact solution. The present method consists of representing the probability generating function as a factorial cumulant expansion and neglecting factorial cumulants beyond the second. This results in a closed set of non-linear equations for the factorial moments. Stationary solutions of these equations, which are of interest for calculating the count rate, are obtained iteratively. The method is applied to the variable dead time counter technique for estimation of system parameters in passive neutron assay of Pu and reactor noise analysis. Comparisons of results by this method with Monte Carlo calculations are presented. (author)

  9. HYDRA: a Java library for Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Gregory R. Warnes

    2002-03-01

    Full Text Available Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.

  10. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian-based learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.

  11. Rate-Distortion via Markov Chain Monte Carlo

    CERN Document Server

    Jalali, Shirin

    2008-01-01

    We propose an approach to lossy source coding, utilizing ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a 'heat bath algorithm': Starting from an initial candidate reconstruction (say the original source sequence), at every iteration, an index i is chosen and the i-th sequence component is replaced by drawing from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and only linearly dependent on a certain context parameter (which grows sub-log...
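
    The heat-bath step can be sketched on a toy binary problem (the energy form, inverse temperature beta, and weight lam below are invented for illustration, not the paper's cost): each component is resampled from its Boltzmann conditional given all the rest, with an energy that trades off Hamming distortion against a crude compressibility proxy (neighbour flips).

```python
import numpy as np

rng = np.random.default_rng(3)
src = rng.integers(0, 2, 40)               # the "source" sequence
beta, lam = 2.0, 0.5

def energy(x):
    distortion = np.sum(x != src)          # distortion term
    complexity = np.sum(x[1:] != x[:-1])   # crude compressibility proxy
    return distortion + lam * complexity

x = src.copy()
for _ in range(200):                       # sweeps
    for i in range(x.size):                # heat bath: resample component i
        e = np.empty(2)
        for b in (0, 1):
            x[i] = b
            e[b] = energy(x)
        # Boltzmann conditional: P(x_i = 1 | rest) from the two energies
        p1 = 1.0 / (1.0 + np.exp(-beta * (e[0] - e[1])))
        x[i] = int(rng.uniform() < p1)

print(energy(x))                           # a low-energy reconstruction
```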

  12. Dinamika Pada Rantai Markov Dengan Dua Komponen (Dynamics on Two-Component Markov Chains)

    OpenAIRE

    Yakub, Riki

    2010-01-01

    The dynamics of a two-component Markov chain are determined by the eigenvalues of its transition probability matrix together with the given initial state. Based on the value of λ2 obtained, the dynamics of a two-component Markov chain can be grouped into 3 main cases, namely: a. the dynamics of a two-component Markov chain when the value 0

  13. Random billiards with wall temperature and associated Markov chains

    International Nuclear Information System (INIS)

    By a random billiard we mean a billiard system in which the standard rule of specular reflection is replaced with a Markov transition probabilities operator P that gives, at each collision of the billiard particle with the boundary of the billiard domain, the probability distribution of the post-collision velocity for a given pre-collision velocity. A random billiard with microstructure, or RBM for short, is a random billiard for which P is derived from a choice of geometric/mechanical structure on the boundary of the billiard domain, as explained in the text. Such systems provide simple and explicit mechanical models of particle–surface interaction that can incorporate thermal effects and permit a detailed study of thermostatic action from the perspective of the standard theory of Markov chains on general state spaces. The main focus of this paper is on the operator P itself and how it relates to the mechanical and geometric features of the microstructure, such as mass ratios, curvatures, and potentials. The main results are as follows: (1) we give a characterization of the stationary probabilities (equilibrium states) of P and show how standard equilibrium distributions studied in classical statistical mechanics such as the Maxwell–Boltzmann distribution and the Knudsen cosine law arise naturally as generalized invariant billiard measures; (2) we obtain some of the more basic functional theoretic properties of P, in particular that P is under very general conditions a self-adjoint operator of norm 1 on a Hilbert space to be defined below, and show in a simple but somewhat typical example that P is a compact (Hilbert–Schmidt) operator. This leads to the issue of relating the spectrum of eigenvalues of P to the geometric/mechanical features of the billiard microstructure; (3) we explore the latter issue, both analytically and numerically in a few representative examples. Additionally, (4) a general algorithm for simulating the Markov chains is given based on

  14. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  15. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
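
    The two-stage idea can be sketched with invented 1-D stand-ins for the proxy and exact posteriors (not the paper's groundwater models): the cheap proxy screens proposals, and only survivors trigger an expensive exact evaluation, whose ratio corrects the proxy bias so the chain still targets the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_exact(x):                  # stand-in for the expensive forward model
    return -0.5 * (x - 1.0) ** 2

def log_proxy(x):                  # cheap, slightly biased approximation
    return -0.5 * (x - 1.1) ** 2

x, n_exact = 0.0, 0
chain = []
for _ in range(20000):
    xp = x + rng.normal(0, 0.5)
    # Stage 1: screen the proposal with the proxy only
    if np.log(rng.uniform()) < log_proxy(xp) - log_proxy(x):
        # Stage 2: evaluate the exact model and correct the proxy ratio
        n_exact += 1
        a = (log_exact(xp) - log_exact(x)) - (log_proxy(xp) - log_proxy(x))
        if np.log(rng.uniform()) < a:
            x = xp
    chain.append(x)

print(np.mean(chain[2000:]), n_exact / len(chain))  # mean near 1; fewer exact calls
```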

  16. Some Limit Properties of Random Transition Probability for Second-Order Nonhomogeneous Markov Chains Indexed by a Tree

    OpenAIRE

    Zhiyan Shi; Weiguo Yang

    2009-01-01

    We study some limit properties of the harmonic mean of random transition probability for a second-order nonhomogeneous Markov chain and a nonhomogeneous Markov chain indexed by a tree. As corollary, we obtain the property of the harmonic mean of random transition probability for a nonhomogeneous Markov chain.

  17. Markov chain-based numerical method for degree distributions of growing networks

    International Nuclear Information System (INIS)

    In this paper, we establish a relation between growing networks and Markov chains, and propose a computational approach for network degree distributions. Using the Barabasi-Albert model as an example, we first show that the degree evolution of a node in a growing network follows a nonhomogeneous Markov chain. Exploiting the special structure of these Markov chains, we develop an efficient algorithm to compute the degree distribution numerically with a computational complexity of O(t^2), where t is the number of time steps. We use three examples to demonstrate the computation procedure and compare the results with those from existing methods.
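
    A numerical sketch of this idea for the Barabasi-Albert model with m = 1 (parameter values chosen for illustration, not taken from the paper): the degree of a fixed node follows a nonhomogeneous Markov chain in which a node of degree k gains an edge at time step t with probability k/(2t), and the degree distribution of a node born at time t0 can be propagated directly, truncating at a cap kmax.

```python
import numpy as np

t0, T, kmax = 10, 1000, 60
p = np.zeros(kmax + 1)
p[1] = 1.0                          # the node enters with degree 1

for t in range(t0, T):
    q = np.zeros_like(p)
    for k in range(1, kmax):
        gain = k / (2 * t)          # P(degree k -> k+1 at step t)
        q[k] += p[k] * (1 - gain)
        q[k + 1] += p[k] * gain
    q[kmax] += p[kmax]              # truncation: mass at the cap stays put
    p = q

mean_k = float(np.arange(kmax + 1) @ p)
print(mean_k)                       # theory predicts roughly (T/t0)**0.5 = 10
```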

  18. Prediction of Synchrostate Transitions in EEG Signals Using Markov Chain Models

    CERN Document Server

    Jamal, Wasifa; Oprescu, Ioana-Anastasia; Maharatna, Koushik

    2014-01-01

    This paper proposes a stochastic model using the concept of Markov chains for the inter-state transitions of the millisecond order quasi-stable phase synchronized patterns or synchrostates, found in multi-channel Electroencephalogram (EEG) signals. First and second order transition probability matrices are estimated for Markov chain modelling from 100 trials of 128-channel EEG signals during two different face perception tasks. Prediction accuracies with such finite Markov chain models for synchrostate transition are also compared, under a data-partitioning based cross-validation scheme.
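
    Comparing first- and second-order Markov predictors on a symbolic state sequence can be sketched as follows (the sequence below is synthetic, with deliberate second-order structure; real use would take EEG-derived synchrostate labels):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
# Synthetic sequence with genuine second-order structure: the next state
# tends to repeat the state from two steps back.
seq = [0, 1]
for _ in range(2000):
    seq.append(seq[-2] if rng.uniform() < 0.8 else int(rng.integers(0, 3)))
train, test = seq[:1500], seq[1500:]

def fit(order, data):               # count (context, next-state) tuples
    return Counter(zip(*[data[i:] for i in range(order + 1)]))

def predict(model, context):        # most frequent continuation of context
    return max(range(3), key=lambda s: model.get(context + (s,), 0))

acc = {}
for order in (1, 2):
    model = fit(order, train)
    hits = sum(predict(model, tuple(test[i - order:i])) == test[i]
               for i in range(order, len(test)))
    acc[order] = hits / (len(test) - order)
print(acc)                          # the second-order model predicts better here
```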

  19. Large Deviations for Empirical Measures of Not Necessarily Irreducible Countable Markov Chains with Arbitrary Initial Measures

    Institute of Scientific and Technical Information of China (English)

    Yi Wen JIANG; Li Ming WU

    2005-01-01

    All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.

  20. Descriptive and predictive evaluation of high resolution Markov chain precipitation models

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten

    2012-01-01

    first‐order Markov model seems to capture most of the properties of precipitation, but inclusion of seasonal and diurnal variation improves the model. Including a second‐order Markov Chain component does improve the descriptive capabilities of the model, but is very expensive in its parameter use...... and necessary tools when evaluating model fit and performance. Copyright © 2012 John Wiley & Sons, Ltd....

  1. Hidden Markov chain modeling for epileptic networks identification.

    Science.gov (United States)

    Le Cam, Steven; Louis-Dorr, Valérie; Maillard, Louis

    2013-01-01

    Partial epileptic seizures are often considered to be caused by a wrong balance between inhibitory and excitatory interneuron connections within a focal brain area. These abnormal balances are likely to result in loss of functional connectivities between remote brain structures, while functional connectivities within the incriminated zone are enhanced. The identification of the epileptic networks underlying these hypersynchronies is expected to contribute to a better understanding of the brain mechanisms responsible for the development of the seizures. With this objective, threshold strategies are commonly applied, based on synchrony measurements computed from recordings of the electrophysiologic brain activity. However, such methods are reported to be prone to errors and false alarms. In this paper, we propose a hidden Markov chain modeling of the synchrony states with the aim of developing reliable machine learning methods for epileptic network inference. The method is applied to a real Stereo-EEG recording, demonstrating results consistent with the clinical evaluations and with the current knowledge on temporal lobe epilepsy. PMID:24110697
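
    A minimal forward-algorithm sketch for a two-state hidden Markov chain of this kind follows (all parameters and the observation sequence are invented; real use would take EEG-derived synchrony measurements):

```python
import numpy as np

A = np.array([[0.9, 0.1],          # hidden-state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],          # P(observation | hidden state)
              [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1, 0]           # e.g. thresholded synchrony observations

alpha = pi0 * B[:, obs[0]]         # forward recursion with scaling
loglik = np.log(alpha.sum())
alpha /= alpha.sum()
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
    s = alpha.sum()
    loglik += np.log(s)
    alpha /= s

print(loglik, alpha)               # log P(obs) and filtered state probabilities
```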

  2. ENSO informed Drought Forecasting Using Nonhomogeneous Hidden Markov Chain Model

    Science.gov (United States)

    Kwon, H.; Yoo, J.; Kim, T.

    2013-12-01

    The study aims at developing a new scheme to investigate the potential use of ENSO (El Niño/Southern Oscillation) for drought forecasting. In this regard, objective of this study is to extend a previously developed nonhomogeneous hidden Markov chain model (NHMM) to identify climate states associated with drought that can be potentially used to forecast drought conditions using climate information. As a target variable for forecasting, SPI(standardized precipitation index) is mainly utilized. This study collected monthly precipitation data over 56 stations that cover more than 30 years and K-means cluster analysis using drought properties was applied to partition regions into mutually exclusive clusters. In this study, six main clusters were distinguished through the regionalization procedure. For each cluster, the NHMM was applied to estimate the transition probability of hidden states as well as drought conditions informed by large scale climate indices (e.g. SOI, Nino1.2, Nino3, Nino3.4, MJO and PDO). The NHMM coupled with large scale climate information shows promise as a technique for forecasting drought scenarios. A more detailed explanation of large scale climate patterns associated with the identified hidden states will be provided with anomaly composites of SSTs and SLPs. Acknowledgement This research was supported by a grant(11CTIPC02) from Construction Technology Innovation Program (CTIP) funded by Ministry of Land, Transport and Maritime Affairs of Korean government.

  3. Threshold partitioning of sparse matrices and applications to Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hwajeong; Szyld, D.B. [Temple Univ., Philadelphia, PA (United States)

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) Classical block methods, such as Block Gauss Seidel. (2) Preconditioned GMRES, where a block diagonal preconditioner is used. (3) Iterative aggregation method (also called aggregation/disaggregation) where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and thus adding little computational effort to the overall solution.

  4. Seriation in paleontological data using markov chain Monte Carlo methods.

    Directory of Open Access Journals (Sweden)

    Kai Puolamäki

    2006-02-01

    Full Text Available Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.
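
    The MCMC seriation idea above can be sketched in simplified form: a plain Metropolis sampler over site orderings, with a gap-count penalty standing in for the paper's full likelihood. All names and the occurrence data below are illustrative, not taken from the paper:

    ```python
    import math
    import random

    def gap_count(order, occ):
        """Penalty: absences inside each taxon's first-to-last occurrence
        range (a perfect ordering gives every taxon a contiguous range)."""
        total = 0
        for t in range(len(occ[0])):
            col = [occ[s][t] for s in order]
            if 1 in col:
                first = col.index(1)
                last = len(col) - 1 - col[::-1].index(1)
                total += col[first:last + 1].count(0)
        return total

    def seriate(occ, n_steps=20000, beta=2.0, seed=0):
        """Metropolis sampler with target exp(-beta * gaps);
        proposals swap two adjacent sites in the ordering."""
        rng = random.Random(seed)
        order = list(range(len(occ)))
        rng.shuffle(order)
        cost = gap_count(order, occ)
        best, best_cost = order[:], cost
        for _ in range(n_steps):
            i = rng.randrange(len(order) - 1)
            order[i], order[i + 1] = order[i + 1], order[i]
            new_cost = gap_count(order, occ)
            if new_cost <= cost or rng.random() < math.exp(-beta * (new_cost - cost)):
                cost = new_cost
                if cost < best_cost:
                    best, best_cost = order[:], cost
            else:
                order[i], order[i + 1] = order[i + 1], order[i]  # undo the swap
        return best, best_cost

    # Synthetic fossil data: 8 sites listed in true age order;
    # each of the 5 taxa spans a contiguous interval of sites.
    occ = [
        [1, 1, 0, 0, 0],
        [1, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 1, 0],
        [0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 0, 1],
    ]
    best_order, best_cost = seriate(occ)
    ```

    The sampler recovers an ordering (up to reversal and ties) in which taxon ranges are contiguous; the real model additionally infers origination/extinction times and error probabilities.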

  5. Real time Markov chains: Wind states in anemometric data

    CERN Document Server

    Sanchez, P A; Jaramillo, O A

    2015-01-01

    The description of wind phenomena is frequently based on data obtained from anemometers, which usually report the wind speed and direction only in a horizontal plane. Such measurements are commonly used either to develop wind generation farms or to forecast weather conditions in a geographical region. Beyond these standard applications, the information contained in the data may be richer than expected and may lead to a better understanding of the wind dynamics in a geographical area. In this work we propose a statistical analysis based on the wind velocity vectors, which, we argue, may be grouped into "wind states" associated with binormal distribution functions. We found that the velocity plane defined by the anemometric velocity data may be used as a phase space, where a finite number of states may be found and sorted using standard clustering methods. The main result is a discretization technique useful to model the wind with Markov chains. We applied these ideas to anemometric data for two different sites in M...
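
    The clustering-then-Markov-chain discretization described above can be sketched as follows, under assumptions: a two-regime synthetic velocity record, a minimal two-cluster k-means, and a row-normalized count matrix (all names and data are illustrative):

    ```python
    import random

    def kmeans2(points, iters=20):
        """Two-cluster k-means; initialized with the first point and the
        point farthest from it, which is robust for well-separated data."""
        c0 = points[0]
        c1 = max(points, key=lambda p: (p[0] - c0[0])**2 + (p[1] - c0[1])**2)
        centroids = [c0, c1]
        labels = [0] * len(points)
        for _ in range(iters):
            labels = [0 if (p[0] - centroids[0][0])**2 + (p[1] - centroids[0][1])**2
                           <= (p[0] - centroids[1][0])**2 + (p[1] - centroids[1][1])**2
                      else 1 for p in points]
            for j in (0, 1):
                members = [p for p, l in zip(points, labels) if l == j]
                if members:
                    centroids[j] = (sum(p[0] for p in members) / len(members),
                                    sum(p[1] for p in members) / len(members))
        return labels

    def transition_matrix(labels, k):
        """Row-normalized counts of consecutive wind-state pairs."""
        counts = [[0.0] * k for _ in range(k)]
        for a, b in zip(labels, labels[1:]):
            counts[a][b] += 1
        for row in counts:
            total = sum(row)
            if total:
                for j in range(k):
                    row[j] /= total
        return counts

    # Synthetic anemometric record: two alternating wind regimes with noise.
    rng = random.Random(1)
    series = []
    for block in range(20):
        cx, cy = (5.0, 0.0) if block % 2 == 0 else (-3.0, 4.0)
        for _ in range(30):
            series.append((cx + rng.gauss(0, 0.5), cy + rng.gauss(0, 0.5)))

    labels = kmeans2(series)       # wind states in the velocity phase space
    P = transition_matrix(labels, 2)  # Markov-chain model of the wind
    ```

    Because each regime persists for many samples, the estimated chain is strongly diagonal-dominant, i.e. wind states are "sticky".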

  6. Markov chain Monte Carlo methods: an introductory example

    Science.gov (United States)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
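
    The "few lines of software code" the abstract alludes to can indeed be short. A minimal random-walk Metropolis-Hastings sampler for a standard normal target (the target and all names here are illustrative, not the paper's metrology example):

    ```python
    import math
    import random

    def metropolis_hastings(log_target, x0, n_steps, scale=1.0, seed=0):
        """Random-walk Metropolis-Hastings: propose x' = x + scale*N(0,1),
        accept with probability min(1, pi(x')/pi(x))."""
        rng = random.Random(seed)
        x = x0
        chain = []
        for _ in range(n_steps):
            proposal = x + scale * rng.gauss(0.0, 1.0)
            # Compare in log space for numerical stability.
            if math.log(rng.random()) < log_target(proposal) - log_target(x):
                x = proposal
            chain.append(x)
        return chain

    # Standard normal target, known up to an additive constant in log space.
    log_std_normal = lambda x: -0.5 * x * x

    chain = metropolis_hastings(log_std_normal, x0=10.0, n_steps=20000, scale=2.0)
    burned = chain[5000:]          # discard burn-in (started far from the mode)
    mean = sum(burned) / len(burned)
    var = sum((v - mean) ** 2 for v in burned) / len(burned)
    ```

    The burn-in discard and the choice of proposal `scale` are exactly the kind of algorithmic choices the paper's diagnostics are meant to calibrate.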

  7. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  8. Asymptotics of Entropy Rate in Special Families of Hidden Markov Chains

    CERN Document Server

    Han, Guangyue

    2008-01-01

    We derive an asymptotic formula for entropy rate of a hidden Markov chain around a "weak Black Hole". We also discuss applications of the asymptotic formula to the asymptotic behaviors of certain channels.

  9. User’s manual for basic version of MCnest Markov chain nest productivity model

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  10. A comparison of strategies for Markov chain Monte Carlo computation in quantitative genetics

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Ibanez-Escriche, Noelia; Sorensen, Daniel

    2008-01-01

    In quantitative genetics, Markov chain Monte Carlo (MCMC) methods are indispensable for statistical inference in non-standard models like generalized linear models with genetic random effects or models with genetically structured variance heterogeneity. A particular challenge for MCMC applications...

  11. Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions

    Directory of Open Access Journals (Sweden)

    Stepchenko Arthur

    2015-12-01

    Full Text Available Time series of earth observation based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. In this paper, we make a one-step-ahead prediction of 7-daily time series of the NDVI index using Markov chains. The choice of a Markov chain is due to the fact that a Markov chain is a sequence of random variables in which each variable occupies some state, and the chain specifies the probabilities of moving from one state to another.
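
    A one-step-ahead Markov-chain prediction of a scalar index can be sketched as: discretize the series into states, estimate the transition matrix from counts, and predict the most probable next state. The thresholds and the toy NDVI-like series below are illustrative assumptions, not the paper's data:

    ```python
    def discretize(series, thresholds):
        """Map each value to a state index by counting exceeded thresholds."""
        states = []
        for v in series:
            s = 0
            for t in thresholds:
                if v > t:
                    s += 1
            states.append(s)
        return states

    def fit_transition_matrix(states, n_states):
        """Row-normalized counts of consecutive state pairs."""
        counts = [[0.0] * n_states for _ in range(n_states)]
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
        for row in counts:
            total = sum(row)
            if total:
                for j in range(n_states):
                    row[j] /= total
        return counts

    def predict_next(P, current_state):
        """One-step-ahead forecast: most probable successor state."""
        row = P[current_state]
        return max(range(len(row)), key=row.__getitem__)

    # Toy NDVI-like series oscillating seasonally between low and high values.
    series = [0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.5, 0.3] * 10
    states = discretize(series, thresholds=[0.35, 0.65])  # 3 states: low/mid/high
    P = fit_transition_matrix(states, 3)
    next_state = predict_next(P, states[-1])
    ```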

  12. Maps of sparse Markov chains efficiently reveal community structure in network flows with memory

    CERN Document Server

    Persson, Christian; Edler, Daniel; Rosvall, Martin

    2016-01-01

    To better understand the flows of ideas or information through social and biological systems, researchers develop maps that reveal important patterns in network flows. In practice, network flow models have implied memoryless first-order Markov chains, but recently researchers have introduced higher-order Markov chain models with memory to capture patterns in multi-step pathways. Higher-order models are particularly important for effectively revealing actual, overlapping community structure, but higher-order Markov chain models suffer from the curse of dimensionality: their vast parameter spaces require exponentially increasing data to avoid overfitting and therefore make mapping inefficient already for moderate-sized systems. To overcome this problem, we introduce an efficient cross-validated mapping approach based on network flows modeled by sparse Markov chains. To illustrate our approach, we present a map of citation flows in science with research fields that overlap in multidisciplinary journals. Compared...

  13. The evolution of tax evasion in the Czech Republic: a Markov chain analysis

    Czech Academy of Sciences Publication Activity Database

    Hanousek, Jan; Palda, F.

    Bern: Peter Lang, 2007 - (Hayoz, N.; Hug, S.), s. 327-360 ISBN 978-3-03910-651-6 Institutional research plan: CEZ:MSM0021620846 Keywords : tax evasion * Markov chain analysis * Czech Republic Subject RIV: AH - Economics

  14. Markov chains with transition delta-matrix: ergodicity conditions, invariant probability measures and applications

    Directory of Open Access Journals (Sweden)

    Lev Abolnikov

    1991-01-01

    Full Text Available A large class of Markov chains with so-called Δm,n- and Δ′m,n-transition matrices (“delta-matrices”), which frequently occur in applications (queues, inventories, dams), is analyzed.

  15. On finding the fundamental matrix of finite state homogeneous Markov chains in special case

    OpenAIRE

    Gaiduk, A. N.

    2010-01-01

    For a finite state homogeneous Markov chain with a circulant transition matrix, describing a shift register that clocks 1 or 2 times with probabilities p and q, we find the fundamental matrix. From the fundamental matrix we derive the hitting time matrix.
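
    The construction can be sketched numerically. For the circulant chain (shift by 1 with probability p, by 2 with probability q) the stationary distribution is uniform, and the fundamental matrix in the Kemeny-Snell sense is Z = (I − P + W)⁻¹, where W has the stationary distribution in every row. A minimal pure-Python sketch (n, p, q are illustrative):

    ```python
    def mat_mul(A, B):
        """Dense matrix product."""
        n, m, p = len(A), len(B), len(B[0])
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
                for i in range(n)]

    def mat_inv(A):
        """Gauss-Jordan inverse with partial pivoting."""
        n = len(A)
        M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
             for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            d = M[col][col]
            M[col] = [v / d for v in M[col]]
            for r in range(n):
                if r != col:
                    f = M[r][col]
                    M[r] = [a - f * b for a, b in zip(M[r], M[col])]
        return [row[n:] for row in M]

    n, p, q = 5, 0.7, 0.3
    # Circulant transition matrix: shift by 1 w.p. p, by 2 w.p. q.
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][(i + 1) % n] = p
        P[i][(i + 2) % n] = q
    # Doubly stochastic => stationary distribution is uniform.
    W = [[1.0 / n] * n for _ in range(n)]
    A = [[(1.0 if i == j else 0.0) - P[i][j] + W[i][j] for j in range(n)]
         for i in range(n)]
    Z = mat_inv(A)  # fundamental matrix Z = (I - P + W)^(-1)
    ```

    A useful sanity check is that Z has unit row sums, since (I − P + W) maps the all-ones vector to itself.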

  16. Characterizing the Aperiodicity of Irreducible Markov Chains by Using P Systems

    OpenAIRE

    Cardona, Mónica; Colomer, M. Angels; Pérez Jiménez, Mario de Jesús

    2009-01-01

    It is well known that any irreducible and aperiodic Markov chain has exactly one stationary distribution, and for any arbitrary initial distribution, the sequence of distributions at time n converges to the stationary distribution; that is, the Markov chain approaches equilibrium as n → ∞. In this paper, a characterization of the aperiodicity in existential terms of some state is given. At the same time, a P system with external output is associated with any irreducible ...
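
    Independently of the P-system characterization, the period of an irreducible chain can be computed from its transition graph: it equals the gcd, over all edges u → v, of level[u] + 1 − level[v], where level[] holds BFS distances from an arbitrary root. A minimal sketch (the example chains are illustrative):

    ```python
    from math import gcd

    def chain_period(adj):
        """Period of an irreducible chain given adjacency lists of the
        possible (positive-probability) transitions. The chain is
        aperiodic iff the returned period is 1."""
        n = len(adj)
        level = [-1] * n
        level[0] = 0
        queue = [0]
        while queue:
            u = queue.pop(0)
            for v in adj[u]:
                if level[v] < 0:
                    level[v] = level[u] + 1
                    queue.append(v)
        g = 0
        for u in range(n):
            for v in adj[u]:
                g = gcd(g, abs(level[u] + 1 - level[v]))
        return g

    cycle3 = [[1], [2], [0]]        # pure 3-cycle: period 3
    with_loop = [[1, 0], [2], [0]]  # one self-loop makes it aperiodic
    ```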

  17. THE TRANSITION PROBABILITY MATRIX OF A MARKOV CHAIN MODEL IN AN ATM NETWORK

    Institute of Scientific and Technical Information of China (English)

    YUE Dequan; ZHANG Huachen; TU Fengsheng

    2003-01-01

    In this paper we consider a Markov chain model in an ATM network, which has been studied by Dag and Stavrakakis. On the basis of the iterative formulas obtained by Dag and Stavrakakis, we obtain the explicit analytical expression of the transition probability matrix. It is very simple to calculate the transition probabilities of the Markov chain by these expressions. In addition, we obtain some results about the structure of the transition probability matrix, which are helpful in numerical calculation and theoretical analysis.

  18. Continuous-time block-monotone Markov chains and their block-augmented truncations

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper considers continuous-time block-monotone Markov chains (BMMCs) and their block-augmented truncations. We first introduce the block-monotonicity and block-wise dominance relation for continuous-time Markov chains and then provide some fundamental results on the two notions. Using these results, we show that the stationary probability vectors obtained by the block-augmented truncation converge to the stationary probability vector of the original BMMC. We also show that the last-colum...

  19. Dynamic temperature selection for parallel-tempering in Markov chain Monte Carlo simulations

    OpenAIRE

    Vousden, Will; Farr, Will M.; Mandel, Ilya

    2015-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multi-modal probability distributions. Most popular methods, such as Markov chain Monte Carlo sampling, perform poorly on strongly multi-modal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versio...

  20. Markov chain modeling of evolution of strains in reinforced concrete flexural beams

    OpenAIRE

    Anoop, M. B.; Balaji Rao, K.; Lakshmanan, N.; Raghuprasad, B. K.

    2012-01-01

    From the analysis of experimentally observed variations in surface strains with loading in reinforced concrete beams, it is noted that there is a need to consider the evolution of strains (with loading) as a stochastic process. Use of Markov Chains for modeling stochastic evolution of strains with loading in reinforced concrete flexural beams is studied in this paper. A simple, yet practically useful, bi-level homogeneous Gaussian Markov Chain (BLHGMC) model is proposed for determining the st...

  1. Mixing Times of Markov Chains on Degree Constrained Orientations of Planar Graphs

    OpenAIRE

    Felsner, Stefan; Heldt, Daniel

    2016-01-01

    We study Markov chains for $\alpha$-orientations of plane graphs, these are orientations where the outdegree of each vertex is prescribed by the value of a given function $\alpha$. The set of $\alpha$-orientations of a plane graph has a natural distributive lattice structure. The moves of the up-down Markov chain on this distributive lattice correspond to reversals of directed facial cycles in the $\alpha$-orientation. We have a positive and several negative results regarding the mixing time...

  2. Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics

    International Nuclear Information System (INIS)

    This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and quantization which is applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst case Markov-chain process, upon which a robust filter/predictor design is based

  3. Markov Chain Computation for Homogeneous and Non-homogeneous Data: MARCH 1.1 Users Guide

    Directory of Open Access Journals (Sweden)

    Andre Berchtold

    2001-03-01

    Full Text Available MARCH is free software for the computation of different types of Markovian models including homogeneous Markov Chains, Hidden Markov Models (HMMs) and Double Chain Markov Models (DCMMs). The main characteristic of this software is the implementation of a powerful optimization method for HMMs and DCMMs combining a genetic algorithm with the standard Baum-Welch procedure. MARCH is distributed as a set of Matlab functions running under Matlab 5 or higher on any computing platform. A PC Windows version running independently from Matlab is also available.

  4. A Markov chain Monte Carlo analysis of the CMSSM

    International Nuclear Information System (INIS)

    We perform a comprehensive exploration of the Constrained MSSM parameter space employing a Markov Chain Monte Carlo technique and a Bayesian analysis. We compute superpartner masses and other collider observables, as well as a cold dark matter abundance, and compare them with experimental data. We include uncertainties arising from theoretical approximations as well as from residual experimental errors of relevant Standard Model parameters. We delineate probability distributions of the CMSSM parameters, the collider and cosmological observables as well as a dark matter direct detection cross section. The 68% probability intervals are delineated for the CMSSM parameters m1/2 and m0, the gluino, squark and chargino masses, BR(Bs→μ+μ-), the SUSY contribution to (g-2)μ, and the spin-independent cross section σpSI probed in direct WIMP detection. We highlight a complementarity between LHC and WIMP dark matter searches in exploring the CMSSM parameter space. We further expose a number of correlations among the observables, in particular between BR(Bs→μ+μ-) and BR(B̄→Xsγ) or σpSI. Once SUSY is discovered, this and other correlations may prove helpful in distinguishing the CMSSM from other supersymmetric models. We investigate the robustness of our results in terms of the assumed ranges of CMSSM parameters and the effect of the (g-2)μ anomaly which shows some tension with the other observables. We find that the results for m0, and the observables which strongly depend on it, are sensitive to our assumptions, while our conclusions for the other variables are robust

  5. Dynamic temperature selection for parallel tempering in Markov chain Monte Carlo simulations

    Science.gov (United States)

    Vousden, W. D.; Farr, W. M.; Mandel, I.

    2016-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multimodal probability distributions. Most popular methods, such as MCMC sampling, perform poorly on strongly multimodal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve more efficient sampling, as measured by the autocorrelation time of the sampler. In particular, we present a simple, easily implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling. This algorithm dynamically adjusts the temperature spacing to achieve a uniform rate of exchanges between chains at neighbouring temperatures. We compare the algorithm to conventional geometric temperature configurations on a number of test distributions and on an astrophysical inference problem, reporting efficiency gains by a factor of 1.2-2.5 over a well-chosen geometric temperature configuration and by a factor of 1.5-5 over a poorly chosen configuration. On all of these problems, a sampler using the dynamical adaptations to achieve uniform acceptance ratios between neighbouring chains outperforms one that does not.
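
    The parallel-tempering scheme described above can be sketched with a fixed geometric-ish ladder (the dynamic temperature adaptation of the paper is omitted; the bimodal target, the ladder and all names are illustrative):

    ```python
    import math
    import random

    def log_target(x):
        """Bimodal target: mixture of two well-separated unit Gaussians."""
        return math.log(math.exp(-0.5 * (x - 10)**2)
                        + math.exp(-0.5 * (x + 10)**2) + 1e-300)

    def parallel_tempering(betas, n_steps, scale=2.0, seed=0):
        """One Metropolis chain per inverse temperature beta, plus swap
        proposals between adjacent temperatures."""
        rng = random.Random(seed)
        xs = [0.0] * len(betas)
        cold = []
        for _ in range(n_steps):
            # Within-chain Metropolis update at each temperature.
            for i, b in enumerate(betas):
                prop = xs[i] + scale * rng.gauss(0, 1)
                if math.log(rng.random()) < b * (log_target(prop) - log_target(xs[i])):
                    xs[i] = prop
            # Propose a swap between a random adjacent pair of chains.
            i = rng.randrange(len(betas) - 1)
            dlog = (betas[i] - betas[i + 1]) * (log_target(xs[i + 1]) - log_target(xs[i]))
            if math.log(rng.random()) < dlog:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
            cold.append(xs[0])  # record only the beta = 1 (target) chain
        return cold

    betas = [1.0, 0.3, 0.1, 0.03]  # beta = 1/T; hottest chain crosses the barrier
    samples = parallel_tempering(betas, 20000)
    ```

    A single Metropolis chain at beta = 1 would almost never cross the gap between the modes at ±10; mode swaps propagate down the ladder instead, which is what the paper's dynamic temperature spacing tries to make uniformly efficient.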

  6. 3D+t brain MRI segmentation using robust 4D Hidden Markov Chain.

    Science.gov (United States)

    Lavigne, François; Collet, Christophe; Armspach, Jean-Paul

    2014-01-01

    In recent years many automatic methods have been developed to help physicians diagnose brain disorders, but the problem remains complex. In this paper we propose a method to segment brain structures on two 3D multi-modal MR images taken at different times (longitudinal acquisition). A bias field correction is performed with an adaptation of the Hidden Markov Chain (HMC) allowing us to take into account the temporal correlation in addition to spatial neighbourhood information. To improve the robustness of the segmentation of the principal brain structures and to detect Multiple Sclerosis Lesions as outliers the Trimmed Likelihood Estimator (TLE) is used during the process. The method is validated on 3D+t brain MR images. PMID:25571045

  7. Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

    Directory of Open Access Journals (Sweden)

    Yonghui Dai

    2014-01-01

    Full Text Available The stock index reflects the fluctuation of the stock market. For a long time, there have been a lot of researches on the forecast of stock index. However, the traditional method is limited to achieving an ideal precision in the dynamic market due to the influences of many factors such as the economic situation, policy changes, and emergency events. Therefore, the approach based on adaptive modeling and conditional probability transfer has attracted new attention from researchers. This paper presents a new forecast method by the combination of improved back-propagation (BP) neural network and Markov chain, as well as its modeling and computing technology. This method includes initial forecasting by improved BP neural network, division of Markov state region, computing of the state transition probability matrix, and the prediction adjustment. Results of the empirical study show that this method can achieve high accuracy in the stock index prediction, and it could provide a good reference for the investment in stock market.

  8. Markov chain order estimation with parametric significance tests of conditional mutual information

    CERN Document Server

    Papapetrou, Maria

    2015-01-01

    Besides the different approaches suggested in the literature, accurate estimation of the order of a Markov chain from a given symbol sequence is an open issue, especially when the order is moderately large. Here, parametric significance tests of the conditional mutual information (CMI) of order $m$, $I_c(m)$, are conducted on a symbol sequence for increasing orders $m$ in order to estimate the true order $L$ of the underlying Markov chain. CMI of order $m$ is the mutual information of two variables in the Markov chain being $m$ time steps apart, conditioning on the intermediate variables of the chain. The null distribution of CMI is approximated with a normal and a gamma distribution, deriving analytic expressions of their parameters, and with a gamma distribution deriving its parameters from the mean and variance of the normal distribution. The accuracy of order estimation is assessed with the three parametric tests, and the parametric tests are compared to the randomization significance test and other known ...
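
    The quantity being tested can be sketched with a plug-in estimator: for a sequence from an order-1 chain, the empirical $I_c(1)$ is clearly positive while $I_c(2)$ is near zero, pointing to order 1. This omits the paper's significance tests and uses an illustrative binary chain:

    ```python
    import math
    import random
    from collections import Counter

    def cmi(seq, m):
        """Plug-in estimate of I(x_t ; x_{t+m} | x_{t+1..t+m-1}).
        For m = 1 the conditioning set is empty and this is plain
        mutual information between consecutive symbols."""
        n = len(seq) - m
        joint = Counter(tuple(seq[i:i + m + 1]) for i in range(n))
        left  = Counter(tuple(seq[i:i + m]) for i in range(n))
        right = Counter(tuple(seq[i + 1:i + m + 1]) for i in range(n))
        mid   = Counter(tuple(seq[i + 1:i + m]) for i in range(n))
        total = 0.0
        for w, c in joint.items():
            p = c / n
            num = p * (mid[w[1:m]] / n)              # p(a,b,c) * p(c)
            den = (left[w[:m]] / n) * (right[w[1:]] / n)  # p(a,c) * p(b,c)
            total += p * math.log(num / den)
        return total

    # Order-1 binary chain: stay in the same symbol with probability 0.9.
    rng = random.Random(3)
    seq = [0]
    for _ in range(20000):
        seq.append(seq[-1] if rng.random() < 0.9 else 1 - seq[-1])

    I1, I2 = cmi(seq, 1), cmi(seq, 2)  # expect I1 >> 0 and I2 near 0
    ```

    Being a KL divergence of empirical distributions, the plug-in CMI is always nonnegative, which is why the paper needs a null distribution to decide when a small positive value is "effectively zero".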

  9. Enhancement of Markov chain model by integrating exponential smoothing: A case study on Muslims marriage and divorce

    Science.gov (United States)

    Jamaluddin, Fadhilah; Rahim, Rahela Abdul

    2015-12-01

    The Markov Chain has been used since 1913 for the purpose of studying the flow of data over consecutive years and for forecasting. An important feature in applying a Markov Chain is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov Chain by introducing an Exponential Smoothing technique for developing the appropriate TPM.
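
    One plausible way to combine the two techniques (a sketch under assumptions, not necessarily the paper's exact scheme): update each row of the TPM by exponential smoothing as transitions arrive, so that recent transitions outweigh old ones while rows stay normalized:

    ```python
    def smoothed_tpm(states, n_states, alpha=0.3):
        """TPM via exponential smoothing: each observed transition a -> b
        updates row a as
            row <- (1 - alpha) * row + alpha * one_hot(b),
        which keeps every row summing to 1 by construction."""
        P = [[1.0 / n_states] * n_states for _ in range(n_states)]  # uniform prior
        for a, b in zip(states, states[1:]):
            for j in range(n_states):
                P[a][j] = (1 - alpha) * P[a][j] + (alpha if j == b else 0.0)
        return P

    # 19 old 0->0 transitions followed by a few recent 0->1 transitions:
    # with smoothing, the recent behaviour dominates the row estimate.
    states = [0] * 20 + [1, 0, 1, 0, 1]
    P = smoothed_tpm(states, 2)
    ```

    A plain count-based TPM would give P[0][1] ≈ 3/22; the smoothed estimate instead weights the recent 0→1 transitions heavily, which is the point of blending smoothing into the chain.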

  10. MARKOV CHAIN-BASED ANALYSIS OF THE DEGREE DISTRIBUTION FOR A GROWING NETWORK

    Institute of Scientific and Technical Information of China (English)

    Hou Zhenting; Tong Jinying; Shi Dinghua

    2011-01-01

    In this article, we focus on discussing the degree distribution of the DMS model from the perspective of probability. On the basis of the concept and technique of first-passage probability in Markov theory, we provide a rigorous proof of the existence of the steady-state degree distribution, mathematically re-deriving the exact formula of the distribution. The approach based on Markov chain theory is universal and performs well in a large class of growing networks.

  11. Weighted Markov Chains and Graphic State Nodes for Information Retrieval.

    Science.gov (United States)

    Benoit, G.

    2002-01-01

    Discusses users' search behavior and decision making in data mining and information retrieval. Describes iterative information seeking as a Markov process during which users advance through states of nodes; and explains how the information system records the decision as weights, allowing the incorporation of users' decisions into the Markov…

  12. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring the failure to increase the design life, eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high cycle and low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criteria for the bending state would be higher compared to the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated with the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data closely matching the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading
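
    A reliability curve from a small Markov chain can be sketched as follows. The sketch adds an explicit absorbing failure state to the two load states; all transition probabilities are illustrative assumptions, not the paper's fitted values:

    ```python
    def reliability_curve(P, n_cycles, start=0, fail_state=2):
        """P: one-step transition matrix including an absorbing failure
        state. Returns R(n) = probability the component has not failed
        by cycle n, obtained by propagating the state distribution."""
        n = len(P)
        dist = [0.0] * n
        dist[start] = 1.0
        curve = []
        for _ in range(n_cycles):
            dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
            curve.append(1.0 - dist[fail_state])
        return curve

    # States: 0 = bending-dominated load, 1 = torsion-dominated load, 2 = failed.
    # Failure is assumed more likely under bending (illustrative numbers),
    # matching the observation that bending stress is the more severe mode.
    P = [
        [0.80, 0.15, 0.05],  # bending: 5% failure chance per cycle
        [0.20, 0.79, 0.01],  # torsion: 1% failure chance per cycle
        [0.00, 0.00, 1.00],  # failed (absorbing)
    ]
    R = reliability_curve(P, 100)
    ```

    R(n) is monotonically decreasing because the failure state is absorbing; a Weibull fit to 1 − R(n) would then yield the hazard-rate and bathtub representations discussed in the abstract.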

  13. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Energy Technology Data Exchange (ETDEWEB)

    Nikabdullah, N. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia and Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Singh, S. S. K.; Alebrahim, R.; Azizi, M. A. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); K, Elwaleed A. [Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Noorani, M. S. M. [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia (Malaysia)

    2014-06-19

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring the failure to increase the design life, eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high cycle and low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criteria for the bending state would be higher compared to the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated with the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data closely matching the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  14. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Science.gov (United States)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-06-01

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring the failure to increase the design life, eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high cycle and low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criteria for the bending state would be higher compared to the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated with the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data closely matching the field data with a minimal percentage of error, and for practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  15. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    Directory of Open Access Journals (Sweden)

    Ivan B. Djordjevic

    2015-08-01

    Full Text Available Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution; to mention few. In our recent paper published in Life, we have derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and it is not able to properly model the propagation of mutation errors in time, the process of aging, and evolution of genetic information through generations. To solve for these problems, we propose novel quantum mechanical models to accurately describe the process of creation spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i Markovian classical model, (ii Markovian-like quantum model, and (iii hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. 
One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which…

  16. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    International Nuclear Information System (INIS)

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  17. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    Science.gov (United States)

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
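
    The quantities marathon reports can be illustrated in a few lines. The sketch below, written in Python rather than against the library's own API, computes the exact mixing time of a small reversible chain (a lazy random walk on a 4-cycle, chosen for illustration and not one of the paper's benchmark chains) and compares it with the standard spectral upper bound t_mix ≤ log(1/(ε·π_min))/(1 − λ*).

```python
import numpy as np

# Lazy random walk on a 4-cycle: stay with prob. 1/2, step left/right
# with prob. 1/4 each. Uniform stationary distribution.
n = 4
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] += 0.25
    P[i, (i - 1) % n] += 0.25
pi = np.full(n, 1.0 / n)

def mixing_time(P, pi, eps=0.25):
    """Smallest t with max_x ||P^t(x,.) - pi||_TV <= eps (exact, by iteration)."""
    Pt = np.eye(len(pi))
    t = 0
    while 0.5 * np.abs(Pt - pi).sum(axis=1).max() > eps:
        Pt = Pt @ P
        t += 1
    return t

def spectral_bound(P, pi, eps=0.25):
    """Upper bound t_mix <= log(1/(eps*pi_min)) / (1 - lambda_star)."""
    lam = np.sort(np.abs(np.linalg.eigvals(P)))[-2]   # second-largest |eigenvalue|
    return np.log(1.0 / (eps * pi.min())) / (1.0 - lam)

t_mix = mixing_time(P, pi)
bound = spectral_bound(P, pi)
assert t_mix <= bound   # the spectral bound always dominates the true mixing time
```

For chains this small the comparison is trivial; marathon's contribution is doing it at scale for structurally interesting state graphs.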

  18. Interacting Markov chain Monte Carlo methods for solving nonlinear measure-valued equations

    CERN Document Server

    Del Moral, Pierre; 10.1214/09-AAP628

    2010-01-01

    We present a new class of interacting Markov chain Monte Carlo algorithms for solving numerically discrete-time measure-valued equations. The associated stochastic processes belong to the class of self-interacting Markov chains. In contrast to traditional Markov chains, their time evolutions depend on the occupation measure of their past values. This general methodology allows us to provide a natural way to sample from a sequence of target probability measures of increasing complexity. We develop an original theoretical analysis to analyze the behavior of these iterative algorithms which relies on measure-valued processes and semigroup techniques. We establish a variety of convergence results including exponential estimates and a uniform convergence theorem with respect to the number of target distributions. We also illustrate these algorithms in the context of Feynman-Kac distribution flows.

  20. Formal Reasoning About Finite-State Discrete-Time Markov Chains in HOL

    Institute of Scientific and Technical Information of China (English)

    Liya Liu; Osman Hasan; Sofiène Tahar

    2013-01-01

    Markov chains are extensively used in modeling different aspects of engineering and scientific systems, such as the performance of algorithms and the reliability of systems. Different techniques have been developed for analyzing Markovian models, for example, Markov chain Monte Carlo based simulation, the Markov Analyzer, and more recently probabilistic model checking. However, these techniques either do not guarantee accurate analysis or are not scalable. Higher-order-logic theorem proving is a formal method that has the ability to overcome the above mentioned limitations. However, it is not mature enough to handle all sorts of Markovian models. In this paper, we propose a formalization of Discrete-Time Markov Chains (DTMCs) that facilitates formal reasoning about time-homogeneous finite-state discrete-time Markov chains. In particular, we provide formal verification of some of their important properties, such as joint probabilities, the Chapman-Kolmogorov equation, and the reversibility property, using higher-order logic. To demonstrate the usefulness of our work, we analyze two applications: a simplified binary communication channel and the Automatic Mail Quality Measurement protocol.
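
    One of the properties the paper verifies formally, the Chapman-Kolmogorov equation, states that m-step and n-step transition matrices compose: P^(m+n) = P^m · P^n. A numerical sanity check of the same identity on a made-up two-state chain (illustrative numbers, not the paper's binary-channel model):

```python
import numpy as np

# Illustrative two-state DTMC; a simplified binary channel has a matrix
# of this shape, but these particular numbers are made up.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

m, n = 3, 5
lhs = np.linalg.matrix_power(P, m + n)                       # P^(m+n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
assert np.allclose(lhs, rhs)                                 # Chapman-Kolmogorov
```

A theorem prover establishes this for all m, n and all chains at once; the numerical check only probes one instance, which is exactly the gap formal verification closes.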

  1. Random billiards with wall temperature and associated Markov chains

    OpenAIRE

    Cook, Scott; Feres, Renato

    2012-01-01

    By a random billiard we mean a billiard system in which the standard specular reflection rule is replaced with a Markov transition probabilities operator P that, at each collision of the billiard particle with the boundary of the billiard domain, gives the probability distribution of the post-collision velocity for a given pre-collision velocity. A random billiard with microstructure (RBM) is a random billiard for which P is derived from a choice of geometric/mechanical structure on the bound...

  2. A note on asymptotic expansions for Markov chains using operator theory

    DEFF Research Database (Denmark)

    Jensen, J.L.

    1987-01-01

    We consider asymptotic expansions for sums Sn of the form Sn = f0(X0) + f(X1, X0) + ... + f(Xn, Xn-1), where Xi is a Markov chain. Under different ergodicity conditions on the Markov chain and certain conditional moment conditions on f(Xi, Xi-1), a simple representation of the characteristic function of Sn is obtained. The representation is in terms of the maximal eigenvalue of the linear operator sending a function g(x) into the function x → E(g(Xi)exp[itf(Xi, x)] | Xi-1 = x). © 1987.

  3. Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique

    International Nuclear Information System (INIS)

    We propose an exact technique to calculate lower bounds of spectral gaps of discrete time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the 'absorption time' of the 'Backgammon model', a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the 'contingency table problem', a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities, calculated at equilibrium
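
    For small state sets the spectral gap being bounded here can be computed directly by diagonalization; it is precisely this direct computation that becomes infeasible for large chains and motivates decomposition techniques. A minimal sketch on an illustrative reversible birth-death chain (not one of the models from the paper):

```python
import numpy as np

# Birth-death chain on {0,...,4}: reversible by construction. The step
# probabilities are illustrative.
n = 5
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = 0.3
    if i < n - 1:
        P[i, i + 1] = 0.3
    P[i, i] = 1.0 - P[i].sum()    # holding probability makes rows stochastic

# Spectral gap = 1 - second-largest eigenvalue of P.
eigs = np.sort(np.real(np.linalg.eigvals(P)))[::-1]
gap = 1.0 - eigs[1]
```

The gap controls the convergence rate to equilibrium: distance to stationarity decays roughly like (1 − gap)^t.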

  4. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue...

  5. THE CONSTRUCTION OF MULTITYPE CANONICAL MARKOV BRANCHING CHAINS IN RANDOM ENVIRONMENTS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The study of branching processes has a long history, owing to their strong physical background, but only a few authors have investigated branching processes in random environments. First, the author introduces the concepts of the multitype canonical Markov branching chain in a random environment (CMBCRE) and the multitype Markov branching chain in a random environment (MBCRE), and proves that a CMBCRE must be an MBCRE, and that any MBCRE must be equivalent in distribution to some CMBCRE. The main results of this article are the construction of the CMBCRE and some of its probability properties.

  6. 2nd International Workshop on the Numerical Solution of Markov Chains

    CERN Document Server

    1995-01-01

    Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16--18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratic convergent procedures for matrix geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. An authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.

  7. An extension of a theorem of Kesten to topological Markov chains

    CERN Document Server

    Stadlbauer, Manuel

    2011-01-01

    The main results of this note extend a theorem of Kesten for symmetric random walks on discrete groups to group extensions of topological Markov chains. In contrast to the result in probability theory, there is a notable asymmetry in the assumptions on the base. That is, it turns out that, under very mild assumptions on the continuity and symmetry of the associated potential, amenability of the group implies that the Gurevich pressures of the extension and the base coincide, whereas the converse holds true if the potential is Hölder continuous and the topological Markov chain has big images and preimages. Finally, an application to periodic hyperbolic manifolds is given.

  8. Measurements of Particle Size Distribution Based on Mie Scattering Theory and Markov Chain Inversion Algorithm

    Directory of Open Access Journals (Sweden)

    Zi Ye

    2012-10-01

    Full Text Available Measuring particle size distribution by calculating light scattering intensity is a typical inverse problem. This paper builds an inverse mathematical model based on Mie scattering, deduces the inversion formulas for particle size, and calculates the relevant coefficients through programming with built-in functions in MATLAB. In order to improve the accuracy and noise immunity of particle size distribution measurement, a stochastic inversion algorithm is developed: an inverse problem model based on a Markov chain algorithm is proposed. Results of numerical simulations with added acceptable noise indicate that the Markov chain algorithm has strong noise immunity and can meet the requirements of on-line measurement.

  9. Comparative Analysis of Image Compression Using the Lempel-Ziv-Markov Chain Algorithm (LZMA) and Run Length Encoding

    OpenAIRE

    Lubis, Erick Ricardo

    2014-01-01

    This study aims to design a computer application that can compress bmp or png images. Bmp or png images can be compressed using the Lempel-Ziv-Markov chain algorithm (LZMA) or Run Length Encoding. The output of this application is a new compressed file with the extension ERL. The average compression ratio between the input and output files for bmp images using the Lempel-Ziv-Markov chain algorithm is 81.763%, with an average time of 3315.3 milliseconds, while the avera...

  10. Simplification of irreversible Markov chains by removal of states with fast leaving rates.

    Science.gov (United States)

    Jia, Chen

    2016-07-01

    In the recent work of Ullah et al. (2012a), the authors developed an effective method to simplify reversible Markov chains by removal of states with low equilibrium occupancies. In this paper, we extend this result to irreversible Markov chains. We show that an irreversible chain can be simplified by removal of states with fast leaving rates. Moreover, we reveal that the irreversibility of the chain will always decrease after model simplification. This suggests that although model simplification can retain almost all the dynamic information of the chain, it will lose some thermodynamic information as a trade-off. Examples from biology are also given to illustrate the main results of this paper. PMID:27067245
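
    A common elimination rule for removing a fast state from a generator matrix redistributes its outgoing flow over the remaining states. The sketch below uses the standard formula q'_ij = q_ij + q_ik·q_kj/q_k, where q_k is the leaving rate of the removed state k; this is an illustrative rule that may differ in detail from the paper's construction, applied to a made-up three-state irreversible chain.

```python
import numpy as np

def remove_state(Q, k):
    """Eliminate state k from generator Q by redistributing its flow:
    q'_ij = q_ij + q_ik * q_kj / q_k, with q_k = -Q[k, k] the leaving
    rate of k. Illustrative elimination rule, not necessarily the
    paper's exact construction."""
    q_k = -Q[k, k]
    keep = [i for i in range(Q.shape[0]) if i != k]
    return Q[np.ix_(keep, keep)] + np.outer(Q[keep, k], Q[k, keep]) / q_k

# Three-state irreversible CTMC; state 2 has by far the fastest
# leaving rate (100), so it is the natural candidate for removal.
Q = np.array([[ -1.0,   0.5,    0.5],
              [  0.2,  -0.4,    0.2],
              [ 60.0,  40.0, -100.0]])
Qr = remove_state(Q, 2)
# The reduced matrix is still a generator: rows sum to zero.
assert np.allclose(Qr.sum(axis=1), 0.0)
```

Note that the redistribution preserves the zero row sums exactly, because the removed state's outgoing probabilities q_kj/q_k sum to one.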

  11. Markov

    Directory of Open Access Journals (Sweden)

    Carlos Alejandro De Luna Ortega

    2006-01-01

    Full Text Available En este artículo se aborda el diseño de un reconocedor de voz, con el idioma español mexicano, del estado de Aguascalientes, de palabras aisladas, con dependencia del hablante y vocabulario pequeño, empleando Redes Neuronales Artificiales (ANN por sus siglas en inglés, Alineamiento Dinámico del Tiempo (DTW por sus siglas en inglés y Modelos Ocultos de Markov (HMM por sus siglas en inglés para la realización del algoritmo de reconocimiento.

  12. Stochastic modeling of pitting corrosion in underground pipelines using Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, J.C.; Caleyo, F.; Hallen, J.M.; Araujo, J.E. [Instituto Politecnico Nacional (IPN), Mexico D.F. (Mexico). Escuela Superior de Ingenieria Quimica e Industrias Extractivas (ESIQIE); Valor, A. [Universidad de La Habana, La Habana (Cuba)

    2009-07-01

    A non-homogeneous, linear growth (pure birth) Markov process, with discrete states in continuous time, has been used to model external pitting corrosion in underground pipelines. The transition probability function for the pit depth is obtained from the analytical solution of the forward Kolmogorov equations for this process. The parameters of the transition probability function between depth states can be identified from the observed time evolution of the mean of the pit depth distribution. Monte Carlo simulations were used to predict the time evolution of the mean value of the pit depth distribution in soils with different physicochemical characteristics. The simulated distributions have been used to create an empirical Markov-chain-based stochastic model for predicting the evolution of pitting corrosion from the observed properties of the soil in contact with the pipeline. Real-life case studies, involving simulated and measured pit depth distributions, are presented to illustrate the application of the proposed Markov chain model. (author)
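
    The linear-growth (pure birth) process underlying the model can be simulated directly. The sketch below uses a homogeneous-rate Yule process as a simplified stand-in for the paper's non-homogeneous version, with illustrative parameters, and checks the simulated mean against the known formula E[N(t)] = n0·e^(λt).

```python
import random

def simulate_yule(lam, t_end, n0=1, rng=None):
    """Simulate a linear-growth (Yule) pure-birth process: in state n the
    jump rate is n*lam. Homogeneous-rate stand-in for the paper's
    non-homogeneous model; parameters are illustrative."""
    rng = rng or random.Random(0)
    n, t = n0, 0.0
    while True:
        t += rng.expovariate(n * lam)   # exponential holding time in state n
        if t > t_end:
            return n
        n += 1

rng = random.Random(42)
samples = [simulate_yule(0.5, 2.0, rng=rng) for _ in range(5000)]
mean_depth = sum(samples) / len(samples)
# For a Yule process E[N(t)] = n0 * exp(lam * t); here exp(1) ~ 2.718
```

In the paper the rate additionally depends on time (and the transition function comes from the forward Kolmogorov equations rather than simulation); replacing `n * lam` with a time-dependent rate and an appropriate time-change would move this sketch toward that setting.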

  13. Avian life history profiles for use in the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain nest productivity model, or MCnest, quantitatively estimates the effects of pesticides or other toxic chemicals on annual reproductive success of avian species (Bennett and Etterson 2013, Etterson and Bennett 2013). The Basic Version of MCnest was developed as a...

  14. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  15. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

  16. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.

  17. Markov Chain model for the stochastic behaviors of wind-direction data

    International Nuclear Information System (INIS)

    Highlights: • I develop a Markov chain model to describe about the stochastic and probabilistic behaviors of wind direction data. • I describe some of the theoretical arguments regarding the Markov chain model in term of wind direction data. • I suggest a limiting probabilities approach to determine a dominant directions of wind blow. - Abstract: Analyzing the behaviors of wind direction can complement knowledge concerning wind speed and help researchers draw conclusions regarding wind energy potential. Knowledge of the wind’s direction enables the wind turbine to be positioned in such a way as to maximize the total amount of captured energy and optimize the wind farm’s performance. In this paper, first-order and higher-order Markov chain models are proposed to describe the probabilistic behaviors of wind-direction data. A case study is conducted using data from Mersing, Malaysia. The wind-direction data are classified according to an eight-state Markov chain based on natural geographical directions. The model’s parameters are estimated using the maximum likelihood method and the linear programming formulation. Several theoretical arguments regarding the model are also discussed. Finally, limiting probabilities are used to determine a long-run proportion of the wind directions generated. The results explain the dominant direction for Mersing’s wind in terms of probability metrics
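
    The estimation pipeline described above (classify directions into eight states, estimate the transition matrix by maximum likelihood, then extract limiting probabilities) can be sketched as follows. The data here are synthetic, generated from a known chain rather than from the Mersing records, so the recovered limiting distribution is known in advance.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic eight-state direction sequence (N, NE, E, ..., NW). Real input
# would be the classified station records; here the data come from a known
# chain so the answer can be checked.
true_P = np.full((8, 8), 0.05)
np.fill_diagonal(true_P, 0.65)           # wind tends to stay in its sector

states = [0]
for _ in range(50_000):
    states.append(rng.choice(8, p=true_P[states[-1]]))

# Maximum likelihood estimate: row-normalised transition counts.
counts = np.zeros((8, 8))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Limiting probabilities: left eigenvector of P_hat for eigenvalue 1.
w, v = np.linalg.eig(P_hat.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()                           # normalise to a distribution
```

Because the generating chain is symmetric, the limiting probabilities are uniform; with real wind data the largest entries of `pi` identify the dominant blow directions.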

  18. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast and...

  19. Experiences with Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples

    Science.gov (United States)

    Sinharay, Sandip

    2004-01-01

    There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the…

  20. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    Science.gov (United States)

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  1. Average, sensitive and Blackwell-optimal policies in denumerable Markov decision chains with unbounded rewards

    NARCIS (Netherlands)

    R. Dekker (Rommert); A. Hordijk (Arie)

    1988-01-01

    In this paper we consider a (discrete-time) Markov decision chain with a denumerable state space and compact action sets, and we assume that for all states the rewards and transition probabilities depend continuously on the actions. The first objective of this paper is to develop an anal...

  2. Automated compositional Markov chain generation for a plain-old telephone system

    NARCIS (Netherlands)

    Hermanns, H.; Katoen, J.P.

    2000-01-01

    Obtaining performance models, like Markov chains and queueing networks, for systems of significant complexity and magnitude is a difficult task that is usually tackled using human intelligence and experience. This holds in particular for performance models of a highly irregular nature. In this paper

  3. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    Science.gov (United States)

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  4. Integration of logistic regression, Markov chain and cellular automata models to simulate urban expansion

    NARCIS (Netherlands)

    Jokar Arsanjani, J.; Helbich, M.; Kainz, W.; Boloorani, A.

    2013-01-01

    This research analyses the suburban expansion in the metropolitan area of Tehran, Iran. A hybrid model consisting of logistic regression model, Markov chain (MC), and cellular automata (CA) was designed to improve the performance of the standard logistic regression model. Environmental and socio-eco

  5. Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm

    Science.gov (United States)

    Stewart, Wayne; Stewart, Sepideh

    2014-01-01

    For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…

  6. Metastates in Mean-Field Models with Random External Fields Generated by Markov Chains

    Science.gov (United States)

    Formentin, M.; Külske, C.; Reichenbachs, A.

    2012-01-01

    We extend the construction by Külske and Iacobelli of metastates in finite-state mean-field models in independent disorder to situations where the local disorder terms are a sample of an external ergodic Markov chain in equilibrium. We show that for non-degenerate Markov chains, the structure of the theorems is analogous to the case of i.i.d. variables when the limiting weights in the metastate are expressed with the aid of a CLT for the occupation time measure of the chain. As a new phenomenon we also show in a Potts example that for a degenerate non-reversible chain this CLT approximation is not enough, and that the metastate can have less symmetry than the symmetry of the interaction and a Gaussian approximation of disorder fluctuations would suggest.

  7. Metastates in Markov chain-driven random mean-field models

    CERN Document Server

    Formentin, M; Reichenbachs, A

    2011-01-01

    We extend the construction by Kuelske and Iacobelli of metastates in finite-state mean-field models in independent disorder to situations where the local disorder terms are realizations of an external ergodic Markov chain in equilibrium. We show that for non-degenerate Markov chains, the structure of the theorems is analogous to the case of i.i.d. variables when the limiting weights in the metastate are expressed with the aid of a CLT for the occupation time measure of the chain. As a new phenomenon we also show in a Potts example that, for a degenerate non-reversible chain, this CLT approximation is not enough and the metastate can have less symmetry than the symmetry of the interaction and a Gaussian approximation of disorder fluctuations would suggest.

  8. Rates of convergence of some multivariate Markov chains with polynomial eigenfunctions

    CERN Document Server

    Khare, Kshitij; 10.1214/08-AAP562

    2009-01-01

    We provide a sharp nonasymptotic analysis of the rates of convergence for some standard multivariate Markov chains using spectral techniques. All chains under consideration have multivariate orthogonal polynomials as eigenfunctions. Our examples include the Moran model in population genetics and its variants in community ecology, the Dirichlet-multinomial Gibbs sampler, a class of generalized Bernoulli-Laplace processes, a generalized Ehrenfest urn model and the multivariate normal autoregressive process.

  9. Fitting optimum order of Markov chain models for daily rainfall occurrences in Peninsular Malaysia

    Science.gov (United States)

    Deni, Sayang Mohd; Jemain, Abdul Aziz; Ibrahim, Kamarulzaman

    2009-06-01

    The analysis of daily rainfall occurrence behavior is becoming more important, particularly in water-related sectors. Many studies have identified a more comprehensive pattern of daily rainfall behavior based on Markov chain models. One of the aims in fitting Markov chain models of various orders to daily rainfall occurrence is to determine the optimum order. In this study, the optimum order of the Markov chain models for a 5-day sequence is examined at each of 18 rainfall stations in Peninsular Malaysia, selected based on data availability, using the Akaike (AIC) and Bayesian (BIC) information criteria. The most appropriate order for describing the distribution of wet (dry) spells at each rainfall station is identified using the Kolmogorov-Smirnov goodness-of-fit test. It is found that the optimum order varies according to the threshold level used (e.g., either 0.1 or 10.0 mm), the location of the region and the type of monsoon season. At most stations, Markov chain models of a higher order are found to be optimum for rainfall occurrence during the northeast monsoon season for both threshold levels. However, it is generally found that, regardless of the monsoon season, the first-order model is optimum for the northwestern and eastern regions of the peninsula when the threshold level of 10.0 mm is considered. The analysis indicates that the first-order Markov chain model is most appropriate for describing the distribution of wet spells, whereas higher-order models are adequate for dry spells at most rainfall stations for both threshold levels and monsoon seasons.
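
    Order selection by AIC can be sketched for a two-state (wet/dry) occurrence chain: fit chains of increasing order by maximum likelihood and penalize the parameter count. The sequence below is synthetic, generated from a first-order chain with illustrative probabilities, not actual rainfall data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic wet(1)/dry(0) daily sequence from a first-order chain; real
# input would be daily rainfall thresholded at e.g. 0.1 mm.
p_wet = {0: 0.2, 1: 0.7}                 # P(wet tomorrow | today's state)
seq = [0]
for _ in range(3000):
    seq.append(int(rng.random() < p_wet[seq[-1]]))

def aic(seq, order):
    """AIC of an order-r two-state Markov chain fitted by maximum likelihood."""
    counts = {}
    for i in range(order, len(seq)):
        hist = tuple(seq[i - order:i])   # the r-day history before day i
        counts.setdefault(hist, [0, 0])[seq[i]] += 1
    loglik = sum(x * np.log(x / sum(c))
                 for c in counts.values() for x in c if x > 0)
    n_params = 2 ** order                # one free probability per history
    return -2.0 * loglik + 2.0 * n_params

aics = {r: aic(seq, r) for r in (0, 1, 2)}   # lower is better
```

Replacing the `2 * n_params` penalty with `n_params * log(len(seq))` gives the BIC variant; the paper applies both criteria station by station.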

  10. The Fracture Mechanical Markov Chain Fatigue Model Compared with Empirical Data

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    The applicability of the FMF-model (Fracture Mechanical Markov Chain Fatigue Model) introduced in Gansted, L., R. Brincker and L. Pilegaard Hansen (1991) is tested by simulations and compared with empirical data. Two sets of data have been used: the Virkler data (aluminium alloy) and data established at the Laboratory of Structural Engineering at Aalborg University, the AUC data (mild steel). The model, which is based on the assumption that the crack propagation process can be described by discrete-space Markov theory, is applicable to constant as well as random loading. It is shown that...

  11. Zero-sum Risk-sensitive Stochastic Games for Continuous Time Markov Chains

    OpenAIRE

    Ghosh, Mrinal K.; Kumar, K. Suresh; Pal, Chandan

    2016-01-01

    We study infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled continuous time Markov chains on a countable state space. For the discounted-cost game we prove the existence of value and saddle-point equilibrium in the class of Markov strategies under nominal conditions. For the ergodic-cost game we prove the existence of values and saddle point equilibrium by studying the corresponding Hamilton-Jacobi-Isaacs equation under a certain Lyapunov...

  12. Fitting complex population models by combining particle filters with Markov chain Monte Carlo.

    Science.gov (United States)

    Knape, Jonas; de Valpine, Perry

    2012-02-01

    We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data; a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm. PMID:22624307

  13. From Brownian Dynamics to Markov Chain: An Ion Channel Example

    KAUST Repository

    Chen, Wan

    2014-02-27

    A discrete rate theory for multi-ion channels is presented, in which the continuous dynamics of ion diffusion is reduced to transitions between Markovian discrete states. In an open channel, the ion permeation process involves three types of events: an ion entering the channel, an ion escaping from the channel, or an ion hopping between different energy minima in the channel. The continuous dynamics leads to a hierarchy of Fokker-Planck equations, indexed by channel occupancy. From these the mean escape times and splitting probabilities (denoting from which side an ion has escaped) can be calculated. By equating these with the corresponding expressions from the Markov model, one can determine the Markovian transition rates. The theory is illustrated with a two-ion one-well channel. The stationary probability of states is compared with that from both Brownian dynamics simulation and the hierarchical Fokker-Planck equations. The conductivity of the channel is also studied, and the optimal geometry maximizing ion flux is computed. © 2014 Society for Industrial and Applied Mathematics.
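
    In the simplest case, the rate-matching step described above reduces to dividing each splitting probability by the mean escape time, so that the total exit rate is 1/tau and exits split in the observed proportions. A toy sketch with made-up numbers:

```python
def transition_rates(mean_escape_time, splitting_probs):
    """Map a mean escape time tau and splitting probabilities p_i (the
    chance the ion left through exit i) to Markov rates k_i = p_i / tau."""
    return {exit_: p / mean_escape_time
            for exit_, p in splitting_probs.items()}

# Hypothetical numbers: tau = 2.0, and the ion leaves to the right
# three times out of four.
rates = transition_rates(2.0, {"left": 0.25, "right": 0.75})
```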

  14. Generation of solar radiation values by using Markov chains; Generacion de valores de radiacion usando cadenas de Markov

    Energy Technology Data Exchange (ETDEWEB)

    Adaro, Jorge; Cesari, Daniela; Lema, Alba; Galimberti, Pablo [Universidad Nacional de Rio Cuarto (Argentina). Facultad de Ingenieria]. E-mail: aadaro@ing.unrc.edu.ar

    2000-07-01

    The objective of the present work is to adopt a methodology for generating sequences of global solar radiation values. A preliminary study on generating radiation sequences with the concept of Markov chains is carried out: the availability of data is analyzed, and the possibility of applying the methodology after first computing clearness-index values is investigated. Using the radiation data available for Rio Cuarto, provided by the National Meteorological Service, the preliminary study seeks to validate the model so that the methodology can be transferred to other regions. (author)

  15. Forgetting of the initial condition for the filter in general state-space hidden Markov chain: a coupling approach

    OpenAIRE

    Douc, Randal; Moulines, Eric; Ritov, Ya'Acov

    2007-01-01

    We give simple conditions that ensure exponential forgetting of the initial conditions of the filter for general state-space hidden Markov chains. The proofs are based on a coupling argument applied to the posterior Markov kernels. These results are useful both for filtering hidden Markov models using approximation methods (e.g., particle filters) and for proving asymptotic properties of estimators. The results are general enough to cover models like the Gaussian state space model, wit...

  16. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented (LC-block-augmented) northwest-corner truncation of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of an original Markov chain and its LC-block-augmented northwest-corner truncation. The main result is extended to Markov chains that themselves may not be block-monotone but are ...

  17. STRONG LAW OF LARGE NUMBERS AND SHANNON-MCMILLAN THEOREM FOR MARKOV CHAINS FIELD ON CAYLEY TREE

    Institute of Scientific and Technical Information of China (English)

    杨卫国; 刘文

    2001-01-01

    This paper studies the strong law of large numbers and the Shannon-McMillan theorem for Markov chains fields on the Cayley tree. The authors first prove the strong law of large numbers for the frequencies of states and ordered couples of states for Markov chains fields on the Cayley tree. Then they prove the Shannon-McMillan theorem with a.e. convergence for Markov chains fields on the Cayley tree. In the proof, a new technique for studying strong limit theorems in probability theory is applied.

  18. Quantum Markov fields on graphs

    OpenAIRE

    Accardi, Luigi; Ohno, Hiromichi; Mukhamedov, Farrukh

    2009-01-01

    We introduce generalized quantum Markov states and generalized d-Markov chains, which extend the notion of quantum Markov chains on spin systems to $C^*$-algebras defined by general graphs. As examples of generalized d-Markov chains, we construct entangled Markov fields on tree graphs. Concrete examples of generalized d-Markov chains on Cayley trees are also investigated.

  19. Modeling Urban Expansion in Bangkok Metropolitan Region Using Demographic–Economic Data through Cellular Automata-Markov Chain and Multi-Layer Perceptron-Markov Chain Models

    Directory of Open Access Journals (Sweden)

    Chudech Losiri

    2016-07-01

    Urban expansion is considered one of the most important problems in several developing countries. The Bangkok Metropolitan Region (BMR) is the urbanized and agglomerated area of Bangkok Metropolis (BM) and its vicinity, which confronts the expansion problem spreading from the center of the city. Landsat images of 1988, 1993, 1998, 2003, 2008, and 2011 were used to detect the land use and land cover (LULC) changes. The demographic and economic data together with corresponding maps were used to determine the driving factors for land conversions. This study applied Cellular Automata-Markov Chain (CA-MC) and Multi-Layer Perceptron-Markov Chain (MLP-MC) models to simulate LULC change and urban expansion. Both the CA-MC and MLP-MC yielded more than 90% overall accuracy in predicting the LULC, especially the MLP-MC method. Further, the annual population and economic growth rates were considered to produce the land demand for the LULC in 2014 and 2035 using statistical extrapolation and system dynamics (SD). The simulated map for 2014 resulting from the SD yielded the highest accuracy, so this study applied the SD method to generate the land demand for simulating the LULC in 2035. The outcome showed that urban land will occupy around half of the BMR.
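
    The Markov-chain half of a CA-MC model boils down to propagating land-use class shares through a transition matrix. A minimal sketch; the three classes and all probabilities below are hypothetical, not taken from the study:

```python
def project_shares(shares, P, steps):
    """Propagate land-use class shares through a first-order Markov
    transition matrix: share_j(t+1) = sum_i share_i(t) * P[i][j]."""
    for _ in range(steps):
        shares = [sum(shares[i] * P[i][j] for i in range(len(shares)))
                  for j in range(len(P[0]))]
    return shares

# Hypothetical classes (urban, agriculture, forest); each row of P is the
# transition distribution of one class over a single time step.
P = [[0.95, 0.04, 0.01],
     [0.10, 0.85, 0.05],
     [0.05, 0.05, 0.90]]
shares = project_shares([0.30, 0.50, 0.20], P, steps=1)
```

    The cellular-automaton part then allocates these aggregate shares to individual pixels using neighborhood suitability, which the Markov step alone does not model.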

  20. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data as the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
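
    The inner loop of such an algorithm evaluates exponentials of small upper triangular matrices. A plain truncated-Taylor sketch suffices to illustrate this step (production code would use scaling-and-squaring; the 2×2 generator below is made up, and the closed form for an upper triangular 2×2 is used as a check):

```python
def matmul(A, B):
    """Dense matrix product for small list-of-lists matrices."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm_taylor(A, terms=40):
    """Truncated Taylor series for exp(A); adequate for small,
    well-scaled upper triangular generators."""
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in E]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, A)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return E

# exp of a 2x2 upper triangular sub-generator; the closed form is
# [[e^a, b*(e^a - e^d)/(a - d)], [0, e^d]].
E = expm_taylor([[-1.0, 1.0], [0.0, -2.0]])
```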

  1. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    Science.gov (United States)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.

  2. Fitting timeseries by continuous-time Markov chains: A quadratic programming approach

    International Nuclear Information System (INIS)

    Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the objective function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.
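
    The first step of the technique, building the discrete-in-time transition matrix from a timeseries, is just transition counting with row normalization. A minimal sketch (the eigenspectrum would then be computed from the returned matrix with a standard eigenvalue routine):

```python
from collections import Counter

def empirical_transition_matrix(series, states):
    """Count one-step transitions in an observed symbol series and
    row-normalize to obtain a discrete-time stochastic matrix; its
    eigenspectrum is what the fitting step then tries to match."""
    counts = Counter(zip(series, series[1:]))
    P = []
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        P.append([counts[(s, t)] / total if total else 0.0 for t in states])
    return P

# Toy two-state series; each row of P sums to one.
P = empirical_transition_matrix([0, 0, 1, 1, 0, 1, 0, 0, 1], states=(0, 1))
```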

  3. Risk-Sensitive and Risk-Neutral Optimality in Markov Decision Chains; a Unified Approach

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Bratislava: Vydavatelstvo EKONÓM, 2012 - (Reiff, M.), s. 201-205 ISBN 978-80-225-3426-0. [Quantitative Methods in Economics (Multiple Criteria Decision Making XVI). Bratislava (SK), 30.05.2012-01.06.2012] R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956 Institutional support: RVO:67985556 Keywords : discrete-time Markov decision chains * exponential utility functions * risk-sensitive coefficient * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/E/sladky-risk-sensitive and risk-neutral optimality in markov decision chains a unified approach.pdf

  4. Markov decision chains in discrete- and continuous-time; a unified approach

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Bratislava, SR: University of Economics, Bratislava, 2010 - (Reiff, M.), s. 207-219. (Iura Edition, člen skupiny Walters Kluwer). ISBN 978-80-8078-364-8. [Quantitative Methods in Economics, Multiple Criteria Decision Making XV. Smolenice (SK), 06.10.2010-08.10.2010] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GAP402/10/0956; GA ČR GAP402/10/1610 Institutional research plan: CEZ:AV0Z10750506 Keywords : discrete-time and continuous-time Markov decision chains * discounted and averaging optimality * connections between discounted and averaging models * uniformization Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/sladky-markov decision chains in discrete- and continuous-time; a unified approach.pdf

  5. Based on Markov Chain of Agricultural Enterprise Risk Investment Profit Forecast Economic Model

    Directory of Open Access Journals (Sweden)

    Ouyang Bin

    2014-01-01

    This study constructs an economic model, based on Markov chains, for forecasting the returns of venture capital investments by agricultural enterprises, so that investment risks can be avoided. Venture capital is a comparatively new investment tool, and forecasting its earnings is more complex and more specialized than traditional financial investment prediction. The model estimates the one-step transition probability matrix by a matrix-fitting method. Finally, taking the actual earnings data of Shaanxi Huasheng Group as an example, a worked calculation is given, which shows the effectiveness of the method. The economic model presented here can play a positive role in the development of the venture capital industry.

  6. Application of Markov chain to the pattern of mitochondrial deoxyribonucleic acid mutations

    Science.gov (United States)

    Vantika, Sandy; Pasaribu, Udjianna S.

    2014-03-01

    This research explains how a Markov chain is used to model the pattern of mutations in mitochondrial deoxyribonucleic acid (mitochondrial DNA). First, a sign test was used to detect a pattern in the nucleotide bases appearing at the position after a mutated nucleotide base. Results of the sign test showed that a mutation pattern exists in most cases, the exceptions being the mutations of adenine to cytosine, adenine to thymine, and cytosine to guanine. Markov chain analysis of mutations occurring in mitochondrial DNA indicates that the one and two positions after a mutated nucleotide base tend to be occupied by particular nucleotide bases. From this analysis, it can be said that adenine, cytosine, guanine, and thymine will mutate if the nucleotide base at one and/or two positions after them is cytosine.
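
    The tabulation behind such a sign test, which base appears one position after a mutated site, can be sketched as follows (the two sequences are invented for illustration):

```python
from collections import Counter

def base_after_mutation(ref, mut):
    """Tally which nucleotide sits one position after each mutated site,
    comparing a reference sequence with its mutated counterpart."""
    tally = Counter()
    for i in range(len(ref) - 1):
        if ref[i] != mut[i]:
            tally[mut[i + 1]] += 1
    return tally

# Made-up sequences with mutations at positions 1 (C->A) and 4 (A->C).
tally = base_after_mutation("ACGTAC", "AAGTCC")
```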

  7. Risk Minimization for Insurance Products via F-Doubly Stochastic Markov Chains

    Directory of Open Access Journals (Sweden)

    Francesca Biagini

    2016-07-01

    We study risk-minimization for a large class of insurance contracts. Given that the individual progress in time of visiting an insurance policy’s states follows an F-doubly stochastic Markov chain, we describe different state-dependent types of insurance benefits. These cover single payments at maturity, annuity-type payments and payments at the time of a transition. Based on the intensity of the F-doubly stochastic Markov chain, we provide the Galtchouk-Kunita-Watanabe decomposition for a general insurance contract and specify risk-minimizing strategies in a Brownian financial market setting. The results are further illustrated explicitly within an affine structure for the intensity.

  8. Analysis of a Lance missile platoon using a semi-Markov chain

    OpenAIRE

    Argo, Harry M.

    1989-01-01

    Approved for public release; distribution is unlimited. This thesis develops a combat effectiveness model for the Lance missile system. The survivability and ability to accomplish the mission for a Lance missile launch platoon depend upon enemy capabilities, platoon configuration, missile reliability and many other tangible factors. The changing status of a launch platoon is modeled using a semi-Markov chain with transient and absorbing states. Expected number of missiles fi...

  9. Observational constraints on G-corrected holographic dark energy using a Markov chain Monte Carlo method

    OpenAIRE

    Alavirad, Hamzeh; Malekjani, Mohammad

    2013-01-01

    We constrain holographic dark energy (HDE) with time varying gravitational coupling constant in the framework of the modified Friedmann equations using cosmological data from type Ia supernovae, baryon acoustic oscillations, cosmic microwave background radiation and X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best fit values of the model and cosmological parameters within $1\\sigma$ confidence level (CL) in a flat universe as: $\\Omega_{\\rm b}h^...

  10. Forecasting Income Distributions of Households in Poland on the Basis of Markov Chains

    OpenAIRE

    Czajkowski, Andrzej

    2009-01-01

    In order to forecast income distributions of population, we can make use of, among others, stochastic processes. These processes can be used to determine probabilities of transition of households from one income class to another. The paper attempts to present an application of homogenous Markov chains in the process of forecasting the income structure of six socio-economic groups of population in Poland for the years 2004, 2006 and 2008. Forecasts are based on results of individual household ...

  11. A MARKOV CHAIN ANALYSIS OF STRUCTURAL CHANGES IN THE TEXAS HIGH PLAINS COTTON GINNING INDUSTRY

    OpenAIRE

    Ethridge, Don E.; Roy, Sujit K.; Myers, David W.

    1985-01-01

    Markov chain analysis of changes in the number and size of cotton gin firms in West Texas was conducted assuming stationary and non-stationary transition probabilities. Projections of industry structure were made to 1999 with stationary probability assumptions and six sets of assumed conditions for labor and energy costs and technological change in the non-stationary transition model. Results indicate a continued decline in number of firms, but labor, energy, and technology conditions alter t...

  12. Analysis of the Navigation Behavior of the Users' using Grey Relational Pattern Analysis with Markov Chains

    OpenAIRE

    BINDU MADHURI .Ch,; DR. ANAND CHANDULAL.J

    2010-01-01

    Generally user page visits are sequential in nature. The large number of Web pages on many Web sites has raised navigational problems. Markov chains have been used to model user sequential navigational behavior on the World Wide Web (WWW).The enormous growth in the number of documents in the WWW increases the need for improved link navigation and path analysis models. Link prediction and path analysis are important problems with a wide range of applications ranging from personalization to web...

  13. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    OpenAIRE

    Dengfu Zhao; Zhiping Chen; Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transi...

  14. System reliability assessment via sensitivity analysis in the Markov chain scheme

    International Nuclear Information System (INIS)

    Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they allow one to identify important components, so as to assist the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given.

  15. On the Total Reward Variance for Continuous-Time Markov Reward Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel; van Dijk, N. M.

    2006-01-01

    Roč. 43, č. 4 (2006), s. 1044-1052. ISSN 0021-9002 R&D Projects: GA ČR GA402/05/0115; GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : continuous-time Markov reward chain * variance of cumulative reward * asymptotic behaviour * uniformization Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.504, year: 2006

  16. Bayesian inference of BWR model parameters by Markov chain Monte Carlo

    International Nuclear Information System (INIS)

    In this paper, the Markov chain Monte Carlo approach to Bayesian inference is applied for estimating the parameters of a reduced-order model of the dynamics of a boiling water reactor system. A Bayesian updating strategy is devised to progressively refine the estimates, as newly measured data become available. Finally, the technique is used for detecting parameter changes during the system lifetime, e.g. due to component degradation

  17. Long Term Recommender Benchmarking for Mobile Shopping List Applications using Markov Chains

    OpenAIRE

    Schopfer, Sandro; Keller, Thorben

    2014-01-01

    This paper presents a method to estimate the performance and success rate of a recommender system for digital shopping lists. The list contains a number of items that are allowed to occupy three different states (to be purchased, purchased and deleted) as a function of time. Using Markov chains, the probability distribution function over time can be estimated for each state, and thus, the probability that a recommendation is deleted from the list can be used to benchmark a recommender on its ...
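
    The three-state item chain described above can be sketched directly; the transition probabilities below are hypothetical, with "deleted" made absorbing so that the deletion probability is nondecreasing over time:

```python
STATES = ("to_buy", "purchased", "deleted")

# Hypothetical one-step transition probabilities; "deleted" is absorbing.
P = {
    "to_buy":    {"to_buy": 0.3, "purchased": 0.6, "deleted": 0.1},
    "purchased": {"to_buy": 0.2, "purchased": 0.7, "deleted": 0.1},
    "deleted":   {"to_buy": 0.0, "purchased": 0.0, "deleted": 1.0},
}

def evolve(dist, P, steps):
    """Push a probability distribution over item states through the chain."""
    for _ in range(steps):
        dist = {t: sum(dist[s] * P[s][t] for s in dist) for t in STATES}
    return dist

# An item starts on the list; where is it likely to be two steps later?
dist = evolve({"to_buy": 1.0, "purchased": 0.0, "deleted": 0.0}, P, steps=2)
```

    The probability mass in "deleted" as a function of time is exactly the quantity the paper proposes as a benchmark for whether a recommendation ends up discarded.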

  18. Evaluating Stationary Distribution of the Binary GA Markov Chain in Special Cases

    OpenAIRE

    Mitavskiy, Boris S.; Cannings, Chris

    2008-01-01

    The evolutionary algorithm stochastic process is well known to be Markovian, and such Markov chain models have been under investigation in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling an evolutionary algorithm is irreducible and therefore has a unique stationary distribution, yet rather little is known about that stationary distribution. On the other hand, knowing the stationary distribution may provide some information abo...

  19. Approximate regenerative-block bootstrap for Markov chains: some simulation studies

    OpenAIRE

    Bertail, Patrice; Clémençon, Stéphan

    2007-01-01

    Abstract: In Bertail & Clémençon (2005a) a novel methodology for bootstrapping general Harris Markov chains has been proposed, which crucially exploits their renewal properties (when eventually extended via the Nummelin splitting technique) and has theoretical properties that surpass other existing methods within the Markovian framework (moving-block bootstrap, sieve bootstrap, etc.). This paper is devoted to discussing practical issues related to the implementation of this specific resampling met...

  20. THE DECOMPOSITION OF STATE SPACE FOR MARKOV CHAIN IN RANDOM ENVIRONMENT

    Institute of Scientific and Technical Information of China (English)

    Hu Dihe

    2005-01-01

    This paper is a continuation of [8] and [9]. The author obtains the decomposition of the state space χ of a Markov chain in a random environment by making use of the results in [8] and [9], gives three examples (random walk in a random environment, renewal process in a random environment, and queue process in a random environment), and obtains the decompositions of the state spaces of these three special examples.

  1. A Short History of Markov Chain Monte Carlo: Subjective Recollections from Incomplete Data

    OpenAIRE

    Robert, Christian; Casella, George

    2011-01-01

    We attempt to trace the history and development of Markov chain Monte Carlo (MCMC) from its early inception in the late 1940s through its use today. We see how the earlier stages of Monte Carlo (MC, not MCMC) research have led to the algorithms currently in use. More importantly, we see how the development of this methodology has not only changed our solutions to problems, but has changed the way we think about problems.

  2. Experimental study of parallel iterative solutions of Markov chains with block partitions

    OpenAIRE

    Migallón Gomis, Violeta; Penadés, Jose; Szyld, Daniel B.

    1999-01-01

    Experiments are performed which demonstrate that parallel implementations of block stationary iterative methods can solve singular systems of equations in substantially less time than their sequential counterparts. Furthermore, these experiments illustrate the behavior of different partitions of matrices representing Markov chains when parallel iterative methods are used for their solution. Several versions of block iterative methods are tested. Spanish CICYT grant PB96-1054-CV02-1, National...

  3. Laser-based detection and tracking moving objects using data-driven Markov chain Monte Carlo

    OpenAIRE

    Vu, Trung-Dung; Aycard, Olivier

    2009-01-01

    We present a method for simultaneous detection and tracking of moving objects from a moving vehicle equipped with a single-layer laser scanner. A model-based approach is introduced to interpret the laser measurement sequence by hypotheses of moving object trajectories over a sliding window of time. Knowledge of various aspects including the object model, measurement model, and motion model is integrated in one theoretically sound Bayesian framework. The data-driven Markov chain Monte Carlo (DDMCMC) tech...

  4. On Functional CLT for Reversible Markov Chains with nonlinear growth of the Variance

    CERN Document Server

    Longla, Martial; Peligrad, Magda

    2011-01-01

    In this paper we study the functional central limit theorem for stationary Markov chains with self-adjoint operator and general state space. We investigate the case when the variance of the partial sum is not asymptotically linear in n, and establish that conditional convergence in distribution of partial sums implies the functional CLT. The main tools are maximal inequalities that are further exploited to derive conditions for tightness and convergence to the Brownian motion.

  5. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    OpenAIRE

    Farr, W. M.; Stevens, D; Mandel, Ilya

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted ...

  6. First Passage Probability Estimation of Wind Turbines by Markov Chain Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

    Markov Chain Monte Carlo simulation has received considerable attention within the past decade as reportedly one of the most powerful techniques for the first passage probability estimation of dynamic systems. A very popular method in this direction capable of estimating probability of rare event...... rotor equal to its nominal value. Finally Monte Carlo simulations are performed which allow assessment of the accuracy of the first passage probability estimation by the SS methods....

  7. Inference in Kingman's Coalescent with Particle Markov Chain Monte Carlo Method

    OpenAIRE

    Chen, Yifei; Xie, Xiaohui

    2013-01-01

    We propose a new algorithm to do posterior sampling of Kingman's coalescent, based upon the Particle Markov Chain Monte Carlo methodology. Specifically, the algorithm is an instantiation of the Particle Gibbs Sampling method, which alternately samples coalescent times conditioned on coalescent tree structures, and tree structures conditioned on coalescent times via the conditional Sequential Monte Carlo procedure. We implement our algorithm as a C++ package, and demonstrate its utility via a ...

  8. Links between Kleinberg's hubs and authorities, correspondence analysis, and Markov chains

    OpenAIRE

    Fouss, François; Renders, Jean-Michel; Saerens, Marco

    2003-01-01

    In this work, we show that Kleinberg's hubs and authorities model is closely related to both correspondence analysis, a well-known multivariate statistical technique, and a particular Markov chain model of navigation through the web. The only difference between correspondence analysis and Kleinberg's method is the use of the average value of the hubs (authorities) scores for computing the authorities (hubs) scores, instead of the sum for Kleinberg's method. We also show that correspondence an...
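
    Kleinberg's iteration itself, with the sums that the paper contrasts against correspondence-analysis-style averages, can be sketched as follows (the three-page graph is invented for illustration):

```python
def hits(adj, n_iter=50):
    """Kleinberg's hubs-and-authorities iteration on a directed graph
    given as adjacency lists; scores are L2-normalized each sweep."""
    nodes = list(adj)
    hub = {u: 1.0 for u in nodes}
    auth = {u: 1.0 for u in nodes}
    for _ in range(n_iter):
        # Authority score: sum of hub scores of pages linking to u.
        auth = {u: sum(hub[v] for v in nodes if u in adj[v]) for u in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {u: a / norm for u, a in auth.items()}
        # Hub score: sum of authority scores of pages u links to.
        hub = {u: sum(auth[v] for v in adj[u]) for u in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {u: h / norm for u, h in hub.items()}
    return hub, auth

# Pages 1 and 3 both link to page 2, so 2 is the authority, 1 and 3 the hubs.
hub, auth = hits({1: [2], 2: [], 3: [2]})
```

    Replacing each sum above with the mean over the relevant in- or out-neighbors would give the averaging variant that the paper links to correspondence analysis.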

  9. Batch means and spectral variance estimators in Markov chain Monte Carlo

    OpenAIRE

    Flegal, James M.; Jones, Galin L.

    2008-01-01

    Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, fo...
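
    The batch means estimator discussed above can be sketched in a few lines, using the common b_n ≈ √n batch-size choice; this is a generic version under those assumptions, not the authors' code:

```python
import math
import random

def batch_means_mcse(chain, n_batches=None):
    """Batch-means estimate of the Monte Carlo standard error: split the
    chain into length-b batches and use the variance of the batch means
    (scaled by b) to estimate the asymptotic variance."""
    n = len(chain)
    if n_batches is None:
        n_batches = math.isqrt(n)  # the common b_n ~ sqrt(n) choice
    b = n // n_batches
    used = n_batches * b
    means = [sum(chain[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(chain[:used]) / used
    var_hat = b * sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(var_hat / used)

# Sanity check on an i.i.d. chain, where the truth is sd/sqrt(n) ~ 0.0029.
rng = random.Random(0)
chain = [rng.random() for _ in range(10000)]
mcse = batch_means_mcse(chain)
```

    The consistency conditions established in the paper concern exactly how the batch size must grow with the simulation effort for estimators like this one to converge.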

  10. Transmissivities of radiation beams and particle beams in multicomponent mixed materials. 2. Finite Markov chain model

    International Nuclear Information System (INIS)

    This paper is a complement to an earlier paper on the transmissivity of beams in multi-component mixed materials of identical grain size and presents an analytical expression of transmissivity based on the finite Markov chain in the case of mixed materials of different grain sizes. The expression is represented by the flux vector and the response matrix obtained from the transition matrix. (author)

  11. Analysis of Users’ Web Navigation Behavior using GRPA with Variable Length Markov Chains

    Directory of Open Access Journals (Sweden)

    Bindu Madhuri. Ch

    2011-03-01

    With the never-ending growth of Web services and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached enormous proportions. Analyzing such huge data can help to evaluate the effectiveness of promotional campaigns, optimize the functionality of Web-based applications, and provide more personalized content to visitors. In previous work, we proposed Grey Relational Pattern Analysis using Markov chains, a method for discovering meaningful patterns and relationships from large collections of data, often stored in Web and application server access logs, proxy logs, etc. Herein, we propose a novel approach to analyzing the navigational behavior of users using GRPA with Variable-Length Markov Chains. A VLMC is a model extension that allows variable-length histories to be captured. GRPA with Variable-Length Markov Chains reflects sequential information in Web usage data effectively and efficiently, and it can be extended to allow integration with a Web user navigation behavior prediction model for better Web usage mining applications.

  12. State space orderings for Gauss-Seidel in Markov chains revisited

    Energy Technology Data Exchange (ETDEWEB)

    Dayar, T. [Bilkent Univ., Ankara (Turkey)

    1996-12-31

    Symmetric state space orderings of a Markov chain may be used to reduce the magnitude of the subdominant eigenvalue of the (Gauss-Seidel) iteration matrix. Orderings that maximize the elemental mass or the number of nonzero elements in the dominant term of the Gauss-Seidel splitting (that is, the term approximating the coefficient matrix) do not necessarily converge faster. An ordering of a Markov chain that satisfies Property-R is semi-convergent. On the other hand, there are semi-convergent symmetric state space orderings that do not satisfy Property-R. For a given ordering, a simple approach for checking Property-R is shown. An algorithm that orders the states of a Markov chain so as to increase the likelihood of satisfying Property-R is presented. The computational complexity of the ordering algorithm is less than that of a single Gauss-Seidel iteration (for sparse matrices). Throughout, the aim is to gain insight into faster-converging orderings. Results from a variety of applications improve the confidence in the algorithm.
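
    For context, the iteration whose convergence the ordering affects is a Gauss-Seidel sweep for the stationary vector π = πP. A minimal sketch (the two-state chain is made up; real applications use sparse storage and convergence tests):

```python
def gauss_seidel_stationary(P, sweeps=200):
    """Gauss-Seidel sweeps for pi = pi P: equation j reads
    pi_j (1 - P[j][j]) = sum_{i != j} pi_i P[i][j], and each pi_j is
    overwritten immediately, so later equations in the same sweep
    already use the updated values."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(sweeps):
        for j in range(n):
            pi[j] = (sum(pi[i] * P[i][j] for i in range(n) if i != j)
                     / (1.0 - P[j][j]))
        s = sum(pi)  # renormalize; the linear system is singular
        pi = [x / s for x in pi]
    return pi

# Two-state chain with known stationary distribution (2/3, 1/3).
pi = gauss_seidel_stationary([[0.9, 0.1], [0.2, 0.8]])
```

    The order in which the inner loop visits the states is precisely what the paper's reorderings permute, since each update consumes whichever components have already been refreshed.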

  13. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    Science.gov (United States)

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping typically needs to be achieved on the basis of relatively short sequences which contain different types of errors, making a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior, and we use conjugate priors for the Markov chain parameters, which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM algorithm with a greedy search. This is demonstrated to be faster and to yield highly accurate results compared to earlier clustering methods suggested for the metagenomics application. Our model is fairly generic and could also be used for clustering other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence data from an Escherichia coli strain. PMID:24246289
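The conjugate-prior marginal likelihood mentioned above has a closed form. The sketch below is a minimal illustration only (hypothetical `log_marginal_likelihood` helper, order-1 chains and a symmetric Dirichlet prior on each transition row assumed; the paper's Dirichlet-process machinery over partitions is omitted): integrating out the transition probabilities leaves a ratio of gamma functions over the observed transition counts, so two candidate partitions can be compared by summing this score over their clusters.

```python
import math
from collections import Counter

def log_marginal_likelihood(seqs, alphabet="ACGT", alpha=1.0):
    """Log marginal likelihood of a first-order Markov chain whose
    transition rows carry a symmetric Dirichlet(alpha) prior; conjugacy
    integrates the transition probabilities out analytically."""
    counts = Counter()
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[(a, b)] += 1
    K = len(alphabet)
    logml = 0.0
    for a in alphabet:
        row = [counts[(a, b)] for b in alphabet]
        n = sum(row)
        logml += math.lgamma(K * alpha) - math.lgamma(K * alpha + n)
        for c in row:
            logml += math.lgamma(alpha + c) - math.lgamma(alpha)
    return logml

# Compare keeping two sequences in one cluster vs splitting them
ml_joint = log_marginal_likelihood(["ACGTACGT", "ACGTACGA"])
ml_split = (log_marginal_likelihood(["ACGTACGT"])
            + log_marginal_likelihood(["ACGTACGA"]))
```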

  14. Markov chains or the game of structure and chance. From complex networks, to language evolution, to musical compositions

    Science.gov (United States)

    Blanchard, Ph.; Dawin, J. R.; Volchenkov, D.

    2010-06-01

    Markov chains provide us with a powerful tool for studying the structure of graphs and databases in detail. We review the method of generalized inverses for Markov chains and apply it to the analysis of urban structures, the evolution of languages, and musical compositions. We also discuss a generalization of Lévy flights over large complex networks and study the interplay between the nonlinearity of the diffusion process and the topological structure of the network.

  15. Phase Transitions for Quantum XY-Model on the Cayley Tree of Order Three in Quantum Markov Chain Scheme

    International Nuclear Information System (INIS)

    In the present paper we study forward Quantum Markov Chains (QMCs) defined on a Cayley tree. Using the tree structure of graphs, we give a construction of quantum Markov chains on a Cayley tree. By means of such constructions we prove the existence of a phase transition for the XY-model on a Cayley tree of order three in the QMC scheme. By a phase transition we mean the existence of two distinct QMCs for the given family of interaction operators {K}. (author)

  16. Memory Functions of the Additive Markov chains: Applications to Complex Dynamic Systems

    CERN Document Server

    Melnyk, S S; Yampolskii, V A

    2004-01-01

    A new approach to describing the correlation properties of complex dynamic systems with long-range memory, based on the concept of additive Markov chains (Phys. Rev. E 68, 061107 (2003)), is developed. An equation connecting the memory function of the chain and its correlation function is presented. This equation allows one to reconstruct the memory function from the correlation function of the system. Thus, we have elaborated a novel method to generate a sequence with a prescribed correlation function. The effectiveness and robustness of the proposed method are demonstrated by simple model examples. The memory functions of specific coarse-grained literary texts are found, and their universal power-law behavior at long distances is revealed.

  17. Continuous time markov process model for nuclide decay chain transport in the fractured rock medium

    International Nuclear Information System (INIS)

    A stochastic approach using a continuous-time Markov process is presented to model one-dimensional nuclide transport in fractured rock media, as a further extension of previous work. Transport of a decay chain of arbitrary length in a single planar fracture in the vicinity of a radioactive waste repository is modeled as a continuous-time Markov process. While most analytical solutions for decay-chain transport handle only a limited chain length, do not consider rock matrix diffusion, and take very complicated forms, the present model offers a comparatively simple solution in the form of the expectation and its variance resulting from the stochastic modeling. Even deterministic numerical models of decay-chain transport, in most cases, require very complicated procedures and show large discrepancies from the exact solution, as opposed to the stochastic model developed in this study. To demonstrate the use of the present model and to verify it by comparison with a deterministic model, a specific illustration is given for the transport of a three-member chain in a single fractured rock medium with constant groundwater flow rate in the fracture; this ignores rock matrix diffusion and shows good capability to model the fractured media around the repository. (Author)
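A minimal stochastic sketch of the underlying idea, under loud simplifying assumptions (hypothetical decay constants; transport and matrix diffusion ignored entirely): each atom is an independent continuous-time Markov chain whose holding time in member k of the chain is exponential with rate lambda_k, so simulated population counts at a target time approximate the expectations that the paper's model provides analytically.

```python
import random

def simulate_decay_chain(rates, t_end, n_atoms=10000, seed=1):
    """Simulate independent atoms through a decay chain A -> B -> C -> (stable).
    rates[k] is the decay constant of member k; the final state is absorbing.
    Returns the number of atoms in each state at time t_end."""
    rng = random.Random(seed)
    pops = [0] * (len(rates) + 1)
    for _ in range(n_atoms):
        state, t = 0, 0.0
        while state < len(rates):
            t += rng.expovariate(rates[state])   # exponential holding time
            if t > t_end:
                break                            # still in `state` at t_end
            state += 1
        pops[state] += 1
    return pops

# Hypothetical decay constants for a three-member chain
pops = simulate_decay_chain([1.0, 0.5, 0.1], t_end=2.0)
```

For the first member the expected survival fraction is exp(-lambda_0 * t_end), which the simulation reproduces to sampling error.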

  18. Animal vocal sequences: not the Markov chains we thought they were.

    Science.gov (United States)

    Kershenbaum, Arik; Bowles, Ann E; Freeberg, Todd M; Jin, Dezhe Z; Lameira, Adriano R; Bohn, Kirsten

    2014-10-01

    Many animals produce vocal sequences that appear complex. Most researchers assume that these sequences are well characterized as Markov chains (i.e. that the probability of a particular vocal element can be calculated from the history of only a finite number of preceding elements). However, this assumption has never been explicitly tested. Furthermore, it is unclear how language could evolve in a single step from a Markovian origin, as is frequently assumed, as no intermediate forms have been found between animal communication and human language. Here, we assess whether animal taxa produce vocal sequences that are better described by Markov chains, or by non-Markovian dynamics such as the 'renewal process' (RP), characterized by a strong tendency to repeat elements. We examined vocal sequences of seven taxa: Bengalese finches Lonchura striata domestica, Carolina chickadees Poecile carolinensis, free-tailed bats Tadarida brasiliensis, rock hyraxes Procavia capensis, pilot whales Globicephala macrorhynchus, killer whales Orcinus orca and orangutans Pongo spp. The vocal systems of most of these species are more consistent with a non-Markovian RP than with the Markovian models traditionally assumed. Our data suggest that non-Markovian vocal sequences may be more common than Markov sequences, which must be taken into account when evaluating alternative hypotheses for the evolution of signalling complexity, and perhaps human language origins. PMID:25143037

  19. Ancestry inference in complex admixtures via variable-length Markov chain linkage models.

    Science.gov (United States)

    Rodriguez, Jesse M; Bercovici, Sivan; Elmore, Megan; Batzoglou, Serafim

    2013-03-01

    Inferring the ancestral origin of chromosomal segments in admixed individuals is key for genetic applications, ranging from analyzing population demographics and history, to mapping disease genes. Previous methods addressed ancestry inference by using either weak models of linkage disequilibrium, or large models that make explicit use of ancestral haplotypes. In this paper we introduce ALLOY, an efficient method that incorporates generalized, but highly expressive, linkage disequilibrium models. ALLOY applies a factorial hidden Markov model to capture the parallel process producing the maternal and paternal admixed haplotypes, and models the background linkage disequilibrium in the ancestral populations via an inhomogeneous variable-length Markov chain. We test ALLOY in a broad range of scenarios ranging from recent to ancient admixtures with up to four ancestral populations. We show that ALLOY outperforms the previous state of the art, and is robust to uncertainties in model parameters. PMID:23421795

  20. Analysis on the Spatial-Temporal Dynamics of Financial Agglomeration with Markov Chain Approach in China

    Directory of Open Access Journals (Sweden)

    Weimin Chen

    2014-01-01

    Full Text Available The standard approach to studying financial industrial agglomeration is to construct measures of the degree of agglomeration within the financial industry. But such measures often fail to capture the convergence or divergence of financial agglomeration. In this paper, we apply a Markov chain approach to diagnose the convergence of financial agglomeration in China, based on the location quotient coefficients across the provincial regions over 1993–2011. The estimation of the Markov transition probability matrix offers more detailed insights into the mechanics of the financial agglomeration evolution process in China during the research period. The results show that the spatial evolution of financial agglomeration changed faster in 2003–2011 than in 1993–2002. Furthermore, although the pattern of financial development is very uneven, there is regional convergence of financial agglomeration in China.
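A Markov transition probability matrix of the kind used in this convergence analysis is typically estimated by maximum likelihood from the observed class-to-class transitions. A toy sketch (hypothetical region paths and class labels, not the paper's location-quotient data):

```python
from collections import Counter

def transition_matrix(panels, states):
    """Maximum-likelihood transition matrix from observed state paths
    (one path per region, e.g. yearly agglomeration classes per province)."""
    idx = {s: i for i, s in enumerate(states)}
    counts = Counter()
    for path in panels:
        for a, b in zip(path, path[1:]):
            counts[(idx[a], idx[b])] += 1
    n = len(states)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        row_total = sum(counts[(i, j)] for j in range(n))
        for j in range(n):
            P[i][j] = counts[(i, j)] / row_total if row_total else 0.0
    return P

# Hypothetical yearly agglomeration classes for three regions
panels = [["low", "low", "mid", "mid"],
          ["mid", "high", "high", "high"],
          ["low", "mid", "mid", "high"]]
P = transition_matrix(panels, ["low", "mid", "high"])
```

Mass accumulating above the diagonal over time would indicate upward mobility between agglomeration classes; a near-identity matrix would indicate persistence.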

  1. Markov chain model helps predict pitting corrosion depth and rate in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F.; Velazquez, J.C.; Hallen, J. M. [ESIQIE, Instituto Politecnico Nacional, Mexico D. F. (Mexico); Esquivel-Amezcua, A. [PEMEX PEP Region Sur, Villahermosa, Tabasco (Mexico); Valor, A. [Universidad de la Habana, Vedado, La Habana (Cuba)

    2010-07-01

    Recent reports place pipeline corrosion costs in North America at seven billion dollars per year. Pitting corrosion causes a higher percentage of failures than any other corrosion mechanism. This has motivated multiple modelling studies focused on pitting corrosion of underground pipelines. In this study, a continuous-time, non-homogeneous pure-birth Markov chain serves to model external pitting corrosion in buried pipelines. The analytical solution of Kolmogorov's forward equations for this type of Markov process gives the transition probability function in a discrete space of pit depths. The transition probability function can be completely identified by correlating the stochastic pit-depth mean with the deterministic mean obtained experimentally. The model proposed in this study can be applied to pitting corrosion data from repeated in-line pipeline inspections. Case studies presented in this work show how pipeline inspection and maintenance planning can be improved by using the proposed Markovian model for pitting corrosion.
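The forward equations of a pure-birth chain take the form dp_n/dt = lambda_{n-1} p_{n-1} - lambda_n p_n. A small numerical sketch of their solution (hypothetical rates and a plain Euler integrator; the paper solves them analytically and calibrates the rates against the experimental mean pit depth):

```python
def pure_birth_probabilities(lambdas, t_end, dt=1e-3):
    """Integrate the Kolmogorov forward equations of a pure-birth chain,
    dp[n]/dt = lam[n-1] p[n-1] - lam[n] p[n]  (no rate out of the last state),
    starting from the shallowest pit-depth state, p[0] = 1."""
    n = len(lambdas) + 1              # states 0..len(lambdas); last is absorbing
    p = [0.0] * n
    p[0] = 1.0
    for _ in range(int(t_end / dt)):
        new = p[:]
        for k in range(n):
            out = lambdas[k] * p[k] if k < len(lambdas) else 0.0
            inc = lambdas[k - 1] * p[k - 1] if k > 0 else 0.0
            new[k] += dt * (inc - out)
        p = new
    return p

# Hypothetical birth rates between four pit-depth states
p = pure_birth_probabilities([0.8, 0.4, 0.2], t_end=2.0)
mean_state = sum(k * pk for k, pk in enumerate(p))
```

The probability of remaining in the initial state decays as exp(-lambda_0 t), which gives a quick sanity check on the integration.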

  2. Markov chain modeling of evolution of strains in reinforced concrete flexural beams

    Directory of Open Access Journals (Sweden)

    Anoop, M. B.

    2012-09-01

    Full Text Available From the analysis of experimentally observed variations in surface strains with loading in reinforced concrete beams, it is noted that there is a need to treat the evolution of strains with loading as a stochastic process. The use of Markov chains for modeling the stochastic evolution of strains with loading in reinforced concrete flexural beams is studied in this paper. A simple, yet practically useful, bi-level homogeneous Gaussian Markov Chain (BLHGMC) model is proposed for determining the state of strain in reinforced concrete beams. The BLHGMC model will be useful for predicting the behavior/response of reinforced concrete beams, leading to more rational design.

  3. Markov chain Monte Carlo based analysis of post-translationally modified VDAC1 gating kinetics

    Directory of Open Access Journals (Sweden)

    Shivendra Tewari

    2015-01-01

    Full Text Available The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications are not understood. Herein, single-channel currents of wild-type, nitrosated and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method describes three distinct conducting states (open, half-open, and closed) of VDAC1 activity. Lipid bilayer experiments are also performed to record single-VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters of the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the models associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analysis of the nitrosated data suggests that the faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
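As a generic illustration of the stationary-distribution argument (hypothetical rates, not the fitted VDAC parameters), the stationary vector of a three-state gating scheme can be obtained from its rate matrix Q by uniformisation, i.e. by power-iterating the stochastic matrix P = I + Q/lam, which shares Q's stationary vector:

```python
def ctmc_stationary(Q, iters=5000):
    """Stationary distribution of a CTMC via uniformisation:
    P = I + Q/lam is a stochastic matrix with the same stationary vector."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) * 1.1
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical gating rates between closed, half-open and open states
# (rows sum to zero, as required for a CTMC generator)
Q = [[-0.6, 0.5, 0.1],
     [0.4, -0.9, 0.5],
     [0.1, 0.6, -0.7]]
pi = ctmc_stationary(Q)
```

Comparing such stationary vectors across fitted models (e.g. un-phosphorylated vs phosphorylated) is what supports statements like "locked in a closed state".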

  4. Observational constraints on G-corrected holographic dark energy using a Markov chain Monte Carlo method

    Science.gov (United States)

    Alavirad, Hamzeh; Malekjani, Mohammad

    2014-02-01

    We constrain holographic dark energy (HDE) with a time-varying gravitational coupling constant in the framework of the modified Friedmann equations, using cosmological data from type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background radiation and the X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best-fit values of the model and cosmological parameters, including the HDE constant, within the 1σ confidence level (CL) in a flat universe. Using the best-fit values, the equation of state of the dark component at the present time, w_d0, can cross the phantom boundary w = -1 at the 1σ CL.

  5. Cumulative Optimality in Risk-Sensitive and Risk-Neutral Markov Reward Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Jihlava: College of Polytechnics Jihlava, 2013 - (Vojáčková, H.) ISBN 978-80-87035-76-4. [International Conference on Mathematical Methods in Economics 2013 /31./. Jihlava (CZ), 11.09.2013-13.09.2013] R&D Projects: GA ČR GA13-14445S; GA ČR GAP402/11/0150 Institutional support: RVO:67985556 Keywords : dynamic programming * stochastic models * risk analysis and management Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-cumulative optimality in risk-sensitive and risk-neutral markov reward chains.pdf

  6. Sequential Tracking of a Hidden Markov Chain Using Point Process Observations

    CERN Document Server

    Bayraktar, Erhan

    2007-01-01

    We study finite horizon optimal switching problems for hidden Markov chain models under partially observable Poisson processes. The controller possesses a finite range of strategies and attempts to track the state of the unobserved state variable using Bayesian updates over the discrete observations. Such a model has applications in economic policy making, staffing under variable demand levels and generalized Poisson disorder problems. We show regularity of the value function and explicitly characterize an optimal strategy. We also provide an efficient numerical scheme and illustrate our results with several computational examples.

  7. Using Markov Chain Monte Carlo methods to solve full Bayesian modeling of PWR vessel flaw distributions

    International Nuclear Information System (INIS)

    We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes discussion for choosing the prior distribution parameters and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed

  8. An irreversible Markov-chain Monte Carlo method with skew detailed balance conditions

    International Nuclear Information System (INIS)

    An irreversible Markov-chain Monte Carlo (MCMC) method based on a skew detailed balance condition is discussed. Some recent theoretical works concerned with the irreversible MCMC method are reviewed and the irreversible Metropolis-Hastings algorithm for the method is described. We apply the method to ferromagnetic Ising models in two and three dimensions. Relaxation dynamics of the order parameter and the dynamical exponent are studied in comparison to those with the conventional reversible MCMC method with the detailed balance condition. We also examine how the efficiency of exchange Monte Carlo method is affected by the combined use of the irreversible MCMC method

  9. A multi-level solution algorithm for steady-state Markov chains

    Science.gov (United States)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.

  10. A Markov chain technique for determining the acquisition behavior of a digital tracking loop

    Science.gov (United States)

    Chadwick, H. D.

    1972-01-01

    An iterative procedure is presented for determining the acquisition behavior of discrete or digital implementations of a tracking loop. The technique is based on the theory of Markov chains and provides the cumulative probability of acquisition in the loop as a function of time in the presence of noise and a given set of initial condition probabilities. A digital second-order tracking loop to be used in the Viking command receiver for continuous tracking of the command subcarrier phase was analyzed using this technique, and the results agree closely with experimental data.
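The cumulative acquisition probability described here is the absorbed mass of an absorbing Markov chain, propagated step by step from the initial-condition probabilities. A toy sketch (hypothetical loop states and transition probabilities, not the Viking receiver model):

```python
def acquisition_curve(P, start, locked, n_steps):
    """Cumulative probability that the loop has reached the locked
    (absorbing) state by each step, given initial state probabilities."""
    p = start[:]
    n = len(P)
    curve = []
    for _ in range(n_steps):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
        curve.append(p[locked])      # absorbed mass only ever grows
    return curve

# Toy 3-state loop: far-from-lock / near-lock / locked (absorbing)
P = [[0.6, 0.4, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]
curve = acquisition_curve(P, start=[1.0, 0.0, 0.0], locked=2, n_steps=50)
```

Because the locked state is absorbing, the curve is monotone non-decreasing and approaches 1 as the step count grows.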

  11. Markov chain analysis of weekly rainfall data in determining drought-proneness

    OpenAIRE

    M. Sayedur Rahman; Abhyudy Mandal; Pabitra Banik

    2002-01-01

    Markov chain models have been used to evaluate the probabilities of getting a sequence of wet and dry weeks during the South-West monsoon period over the districts of Purulia in West Bengal and Giridih in Bihar state, and a dry farming tract in the state of Maharashtra, India. An index based on the parameters of this model has been suggested to indicate the extent of drought-proneness of a region. This study will be useful to agricultural planners and irrigation engineers in identifying the areas where a...
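The wet/dry week probabilities behind such an index come from a two-state Markov chain fitted to the weekly record. A toy sketch (hypothetical data and a hypothetical `wet_dry_chain` helper; the paper's index is built from parameters fitted this way):

```python
def wet_dry_chain(weeks, horizon=4):
    """Fit a two-state (wet/dry) Markov chain to a weekly record and return
    the probability of a dry week at each of the next `horizon` weeks,
    given that the current week is dry: a simple drought-proneness cue."""
    trans = {("D", "D"): 0, ("D", "W"): 0, ("W", "D"): 0, ("W", "W"): 0}
    for a, b in zip(weeks, weeks[1:]):
        trans[(a, b)] += 1
    p_dd = trans[("D", "D")] / (trans[("D", "D")] + trans[("D", "W")])
    p_wd = trans[("W", "D")] / (trans[("W", "D")] + trans[("W", "W")])
    p_dry, probs = 1.0, []           # start from a dry week
    for _ in range(horizon):
        p_dry = p_dry * p_dd + (1.0 - p_dry) * p_wd
        probs.append(p_dry)
    return probs

record = list("DDWDDDWWDDDDWDD")    # toy weekly wet (W) / dry (D) sequence
probs = wet_dry_chain(record)
```

A high dry-to-dry persistence probability p_dd is the classic signature of a drought-prone station.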

  12. A new method based on Markov chains for deriving SB2 orbits directly from their spectra

    CERN Document Server

    Salomon, J -B; Guillout, P; Halbwachs, J -L; Arenou, F; Famaey, B; Lebreton, Y; Mazeh, T; Pourbaix, D; Tal-Or, L

    2012-01-01

    We present a new method to derive orbital elements of double-lined spectroscopic binaries (SB2). The aim is to have accurate orbital parameters of a selection of SB2 in order to prepare the exploitation of astrometric Gaia observations. Combined with our results, they should allow one to measure the mass of each star with a precision of better than 1%. The new method presented here consists of using the spectra at all epochs simultaneously to derive the orbital elements without templates. It is based on a Markov chain including a new method for disentangling the spectra.

  13. Reliability measures for indexed semi-Markov chains applied to wind energy production

    CERN Document Server

    D'Amico, Guglielmo; Prattico, Flavio

    2013-01-01

    The computation of dependability measures is a crucial point in the planning and development of a wind farm. In this paper we address the issue of energy production by a wind turbine, using an indexed semi-Markov chain as a model of wind speed. We present the mathematical model and describe the data and technical characteristics of a commercial wind turbine (Aircon HAWT-10kW). We show how to compute some of the main dependability measures, such as the reliability, availability and maintainability functions. We compare the results of the model with the real energy production computed from data available from the Lastem station (Italy), sampled every 10 minutes.

  14. Markov chain Monte Carlo linkage analysis of a complex qualitative phenotype.

    Science.gov (United States)

    Hinrichs, A; Lin, J H; Reich, T; Bierut, L; Suarez, B K

    1999-01-01

    We tested a new computer program, LOKI, that implements a reversible jump Markov chain Monte Carlo (MCMC) technique for segregation and linkage analysis. Our objective was to determine whether this software, designed for use with continuously distributed phenotypes, has any efficacy when applied to the discrete disease states of the simulated Mordor data from GAW Problem 1. Although we were able to identify the genomic location of two of the three quantitative trait loci by repeated application of the software, the MCMC sampler experienced significant mixing problems, indicating that the method, as currently formulated in LOKI, was not suitable for the discrete phenotypes in this data set. PMID:10597502

  15. A mixture representation of π with applications in Markov chain Monte Carlo and perfect sampling

    OpenAIRE

    Hobert, James P.; Robert, Christian P.

    2004-01-01

    Let X={Xn:n=0,1,2,…} be an irreducible, positive recurrent Markov chain with invariant probability measure π. We show that if X satisfies a one-step minorization condition, then π can be represented as an infinite mixture. The distributions in the mixture are associated with the hitting times on an accessible atom introduced via the splitting construction of Athreya and Ney [Trans. Amer. Math. Soc. 245 (1978) 493–501] and Nummelin [Z. Wahrsch. Verw. Gebiete 43 (1978) 309–318]. When the small ...

  16. Important Markov-Chain Properties of (1,lambda)-ES Linear Optimization Models

    Czech Academy of Sciences Publication Activity Database

    Chotard, A.; Holeňa, Martin

    Prague: Institute of Computer Science AS CR, 2014 - (Kůrková, V.; Bajer, L.; Peška, L.; Vojtáš, R.; Holeňa, M.; Nehéz, M.), s. 44-52 ISBN 978-80-87136-19-5. [ITAT 2014. European Conference on Information Technologies - Applications and Theory /14./. Demänovská dolina (SK), 25.09.2014-29.09.2014] R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : evolution strategies * random steps * linear optimization * Markov chain models * Archimedean copulas Subject RIV: IN - Informatics, Computer Science

  17. A Generalized Markov-Chain Modelling Approach to (1,lambda)-ES Linear Optimization

    Czech Academy of Sciences Publication Activity Database

    Chotard, A.; Holeňa, Martin

    Cham: Springer, 2014 - (Bartz-Beielstein, T.; Branke, J.; Filipič, B.; Smith, J.), s. 902-911. (Lecture Notes in Computer Science . 8672). ISBN 978-3-319-10761-5. ISSN 0302-9743. [PPSN 2014. International Conference on Parallel Problem Solving from Nature /13./. Ljubljana (SI), 13.09.2014-17.09.2014] R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : evolution strategies * continuous optimization * linear optimization * linear constraint * linear function * Markov chain models * Archimedean copulas Subject RIV: IN - Informatics, Computer Science

  18. A neurocomputational model of stimulus-specific adaptation to oddball and Markov sequences.

    Directory of Open Access Journals (Sweden)

    Robert Mill

    2011-08-01

    Full Text Available Stimulus-specific adaptation (SSA) occurs when the spike rate of a neuron decreases with repetitions of the same stimulus, but recovers when a different stimulus is presented. It has been suggested that SSA in single auditory neurons may provide information to change detection mechanisms evident at other scales (e.g., mismatch negativity in the event related potential), and participate in the control of attention and the formation of auditory streams. This article presents a spiking-neuron model that accounts for SSA in terms of the convergence of depressing synapses that convey feature-specific inputs. The model is anatomically plausible, comprising just a few homogeneously connected populations, and does not require organised feature maps. The model is calibrated to match the SSA measured in the cortex of the awake rat, as reported in one study. The effect of frequency separation, deviant probability, repetition rate and duration upon SSA are investigated. With the same parameter set, the model generates responses consistent with a wide range of published data obtained in other auditory regions using other stimulus configurations, such as block, sequential and random stimuli. A new stimulus paradigm is introduced, which generalises the oddball concept to Markov chains, allowing the experimenter to vary the tone probabilities and the rate of switching independently. The model predicts greater SSA for higher rates of switching. Finally, the issue of whether rarity or novelty elicits SSA is addressed by comparing the responses of the model to deviants in the context of a sequence of a single standard or many standards. The results support the view that synaptic adaptation alone can explain almost all aspects of SSA reported to date, including its purported novelty component, and that non-trivial networks of depressing synapses can intensify this novelty response.

  19. A neurocomputational model of stimulus-specific adaptation to oddball and Markov sequences.

    Science.gov (United States)

    Mill, Robert; Coath, Martin; Wennekers, Thomas; Denham, Susan L

    2011-08-01

    Stimulus-specific adaptation (SSA) occurs when the spike rate of a neuron decreases with repetitions of the same stimulus, but recovers when a different stimulus is presented. It has been suggested that SSA in single auditory neurons may provide information to change detection mechanisms evident at other scales (e.g., mismatch negativity in the event related potential), and participate in the control of attention and the formation of auditory streams. This article presents a spiking-neuron model that accounts for SSA in terms of the convergence of depressing synapses that convey feature-specific inputs. The model is anatomically plausible, comprising just a few homogeneously connected populations, and does not require organised feature maps. The model is calibrated to match the SSA measured in the cortex of the awake rat, as reported in one study. The effect of frequency separation, deviant probability, repetition rate and duration upon SSA are investigated. With the same parameter set, the model generates responses consistent with a wide range of published data obtained in other auditory regions using other stimulus configurations, such as block, sequential and random stimuli. A new stimulus paradigm is introduced, which generalises the oddball concept to Markov chains, allowing the experimenter to vary the tone probabilities and the rate of switching independently. The model predicts greater SSA for higher rates of switching. Finally, the issue of whether rarity or novelty elicits SSA is addressed by comparing the responses of the model to deviants in the context of a sequence of a single standard or many standards. The results support the view that synaptic adaptation alone can explain almost all aspects of SSA reported to date, including its purported novelty component, and that non-trivial networks of depressing synapses can intensify this novelty response. PMID:21876661

  20. A Scalable Multi-chain Markov Chain Monte Carlo Method for Inverting Subsurface Hydraulic and Geological Properties

    Science.gov (United States)

    Bao, J.; Ren, H.; Hou, Z.; Ray, J.; Swiler, L.; Huang, M.

    2015-12-01

    We developed a novel scalable multi-chain Markov chain Monte Carlo (MCMC) method for high-dimensional inverse problems. The method is scalable in the number of chains and processors, and is useful for Bayesian calibration of the computationally expensive simulators typically used for scientific and engineering calculations. In this study, we demonstrate two applications of this method to hydraulic and geological inverse problems. The first is monitoring soil moisture variations using tomographic ground-penetrating radar (GPR) travel-time data, where the inversion of GPR tomographic data is challenging because of its non-uniqueness, nonlinearity, and the high dimensionality of the unknowns. We integrated the multi-chain MCMC framework with the pilot-point concept, a curved-ray GPR forward model, and a sequential Gaussian simulation (SGSIM) algorithm to estimate the dielectric permittivity at pilot-point locations distributed within the tomogram, as well as its spatial correlation range; these are then used to construct the full dielectric permittivity field with SGSIM. The second application is reservoir porosity and saturation estimation, using the multi-chain MCMC approach to jointly invert marine seismic amplitude-versus-angle (AVA) and controlled-source electromagnetic (CSEM) data for a layered reservoir model, where the unknowns to be estimated are the porosity and fluid saturation in each reservoir layer and the electrical conductivity of the overburden and bedrock. The computational efficiency, accuracy, and convergence behavior of the inversion approach are systematically evaluated.
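The multi-chain idea can be sketched generically: several Metropolis chains started from dispersed points run independently (hence the scalability across processors) and their draws are pooled afterwards. This is a toy random-walk sampler on a standard-normal stand-in target, not the authors' calibrated simulators:

```python
import math
import random

def metropolis_chain(logpost, x0, n, step, seed):
    """One random-walk Metropolis chain targeting exp(logpost)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    out = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        out.append(x)
    return out

def multi_chain(logpost, starts, n=5000, step=1.0):
    """Run one chain per start point; the chains are independent and
    could be dispatched to separate processors."""
    return [metropolis_chain(logpost, x0, n, step, seed=i)
            for i, x0 in enumerate(starts)]

# Standard-normal target as a cheap stand-in for an expensive posterior
chains = multi_chain(lambda x: -0.5 * x * x, starts=[-3.0, 0.0, 3.0])
```

Starting the chains from over-dispersed points is also what makes between-chain convergence diagnostics meaningful.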

  1. Brief Communication: Earthquake sequencing: analysis of time series constructed from the Markov chain model

    Science.gov (United States)

    Cavers, M. S.; Vasudevan, K.

    2015-10-01

    Directed graph representation of a Markov chain model to study global earthquake sequencing leads to a time series of state-to-state transition probabilities that includes the spatio-temporally linked recurrent events in the record-breaking sense. A state refers to a configuration comprised of zones with either the occurrence or non-occurrence of an earthquake in each zone in a pre-determined time interval. Since the time series is derived from non-linear and non-stationary earthquake sequencing, we use known analysis methods to glean new information. We apply decomposition procedures such as ensemble empirical mode decomposition (EEMD) to study the state-to-state fluctuations in each of the intrinsic mode functions. We subject the intrinsic mode functions, derived from the time series using the EEMD, to a detailed analysis to extract the information content of the time series. We also investigate the influence of random noise on the data-driven state-to-state transition probabilities. We consider a second aspect of earthquake sequencing that is closely tied to its time-correlative behaviour: here, we extend the Fano factor and Allan factor analysis to the time series of state-to-state transition frequencies of a Markov chain. Our results support not only the usefulness of the intrinsic mode functions in understanding the time series but also the presence of power-law behaviour exemplified by the Fano factor and the Allan factor.

  2. Analysis of the Navigation Behavior of the Users' using Grey Relational Pattern Analysis with Markov Chains

    Directory of Open Access Journals (Sweden)

    BINDU MADHURI .Ch,

    2010-10-01

    Full Text Available User page visits are generally sequential in nature, and the large number of pages on many Web sites has raised navigational problems. Markov chains have been used to model users' sequential navigational behavior on the World Wide Web (WWW). The enormous growth in the number of documents on the WWW increases the need for improved link navigation and path analysis models. Link prediction and path analysis are important problems with a wide range of applications, from personalization to website design. The sheer size of the WWW, coupled with the variation in users' navigation patterns, makes this a very difficult sequence modeling problem. This paper generalizes the concept of grey relational analysis to develop a technique, called grey relational pattern analysis with Markov chains for sequential web data, for analyzing the similarity between given patterns. Based on this technique, a clustering algorithm, "Grey Clustering Algorithm for Sequential Data", is proposed for finding clusters in a given data set and for determining the optimal number of clusters. We develop an evaluation framework in which the Sum of Squared Error (SSE) is calculated to assess the efficiency of the proposed algorithm. The analyzed user behavior can be applied in Web usage mining areas such as personalization, system improvement, site modification, business intelligence, and usage characterization.

  3. Short-term droughts forecast using Markov chain model in Victoria, Australia

    Science.gov (United States)

    Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.

    2016-04-01

    A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six homogeneous clusters with similar drought characteristics based on the SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and drought predictions up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and the predictions were computed for the six clusters, which depict similar drought characteristics, in Victoria, Australia. Overall, the drought severity class predicted was quite similar for all the clusters, with the non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near normal class had a probability of occurrence varying from 27 to 38 %. For the more moderate and severe classes, the probabilities ranged from 2 to 13 % and 1 to 3 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2 and 3 months ahead predictions should be used with caution until the models are developed further.
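The transition-matrix machinery behind such forecasts can be sketched as follows. For simplicity this uses a homogeneous chain over two hypothetical drought classes, whereas the study uses a non-homogeneous chain over SPI-defined severity classes:

```python
from collections import defaultdict

def fit_transition_matrix(states, classes):
    """Estimate row-stochastic transition probabilities from an observed
    sequence of class labels."""
    counts = {c: defaultdict(int) for c in classes}
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    P = {}
    for c in classes:
        total = sum(counts[c].values())
        P[c] = {d: (counts[c][d] / total if total else 0.0) for d in classes}
    return P

def forecast(P, current, classes, steps):
    """Probability distribution over classes after `steps` transitions."""
    dist = {c: 1.0 if c == current else 0.0 for c in classes}
    for _ in range(steps):
        dist = {d: sum(dist[c] * P[c][d] for c in classes) for d in classes}
    return dist
```

Multi-step forecasts such as the 2- and 3-month predictions above are simply repeated applications of the (month-specific, in the non-homogeneous case) transition matrix.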

  4. Markov chain modelling of reliability analysis and prediction under mixed mode loading

    Science.gov (United States)

    Singh, Salvinder; Abdullah, Shahrum; Nik Mohamed, Nik Abdullah; Mohd Noorani, Mohd Salmi

    2015-03-01

    The reliability assessment for an automobile crankshaft provides an important understanding in dealing with the design life of the component in order to eliminate or reduce the likelihood of failure and safety risks. The failures of crankshafts are considered catastrophic failures that lead to severe failure of the engine block and its other connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading using the Markov Chain Model is studied. The Markov Chain is modelled using a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimation of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve and the mean time to failure. The various properties of the shape parameter are used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns posed by the hazard rate onto the component can be used to improve the design and increase the life cycle based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost effective reliability analysis in contrast to costly and lengthy experimental techniques.

  5. On the optimization of free resources using non-homogeneous Markov chain software rejuvenation model

    International Nuclear Information System (INIS)

    Software rejuvenation is an important way to counteract the phenomenon of software aging and system failures. It is a preventive and proactive technique that consists of periodically restarting an application at a clean internal state. In general, when an application is initiated an amount of memory is captured, and when it is terminated that memory is released. In this paper a model describing the amount of free memory on a system is presented; the modelling is formulated within a continuous-time Markov chain framework. The cost of performing rejuvenation is also taken into consideration: a cost function for the model is produced and a rejuvenation policy is proposed. The contribution of this paper consists of using a cyclic non-homogeneous Markov chain to study the overall behaviour of the system, capturing the time dependence of the rejuvenation rates and deriving an optimal rejuvenation policy. Finally, a case study is presented in order to illustrate the results of the cost analysis
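A continuous-time Markov chain of the kind used here can be simulated by drawing exponential holding times from the generator matrix. The sketch below is a generic CTMC simulator, not the paper's rejuvenation model; the two-state generator used to exercise it is hypothetical:

```python
import random

def simulate_ctmc(Q, state, t_end, rng):
    """Simulate a continuous-time Markov chain with generator matrix Q:
    hold in state i for an Exp(-Q[i][i]) time, then jump to state j with
    probability Q[i][j] / (-Q[i][i]).  Returns the list of (time, state)
    jump events up to t_end."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state][state]
        if rate <= 0:
            break                              # absorbing state: stop
        t += rng.expovariate(rate)
        if t >= t_end:
            break                              # time horizon reached
        n = len(Q)
        weights = [Q[state][j] if j != state else 0.0 for j in range(n)]
        state = rng.choices(range(n), weights=weights)[0]
        path.append((t, state))
    return path
```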

  6. Atmospheric Dispersion Unknown Source Parameters Determination Using AERMOD and Bayesian Inference Along Markov Chain Monte Carlo

    International Nuclear Information System (INIS)

    The occurrence of hazardous accidents in nuclear power plants and industrial units usually leads to the release of radioactive materials and pollutants into the environment. These materials and pollutants can be transported far downstream by the wind flow. In this paper, we implemented an atmospheric dispersion code to solve the inverse problem: having detected the pollutants in one region, we estimate the rate and location of the unknown source. The modeling requires both a model capable of atmospheric dispersion calculation and a mathematical approach to infer the source location and the related release rates. Here, the AERMOD software and Bayesian inference via Markov Chain Monte Carlo have been applied. Applying a Bayesian approach with Markov Chain Monte Carlo to this subject is not new, but coupling these methods with AERMOD, a well-known regulatory model, enhances the reliability of the outcomes. To evaluate the method, an example is considered by defining pollutant concentrations in a specific region and then obtaining the source location and intensity by direct calculation. The calculation estimates the average source location at a distance of 7 km with an accuracy of 5 m, which is good enough to support the ability of the proposed algorithm.

  7. First and second order Markov chain models for synthetic generation of wind speed time series

    International Nuclear Information System (INIS)

    Hourly wind speed time series data from two meteorological stations in Malaysia have been used for stochastic generation of wind speed data using the transition matrix approach of the Markov chain process. The transition probability matrices have been formed using two different approaches: the first involves the use of a first order transition probability matrix of a Markov chain, and the second involves a second order transition probability matrix that uses the current and preceding values to describe the next wind speed value. The algorithm to generate the wind speed time series from the transition probability matrices is described. Uniform random number generators have been used for transitions between successive time states and for within-state wind speed values. The ability of each approach to retain the statistical properties of the observed series is assessed by comparing the generated wind speeds with the observed ones. The main statistical properties used for this purpose are the mean, standard deviation, median, percentiles, Weibull distribution parameters, autocorrelations and spectral density of the wind speed values. The comparison of the observed wind speeds and the synthetically generated ones shows that the statistical characteristics are satisfactorily preserved
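The second-order approach conditions the next state on the two most recent states. A minimal sketch over integer-binned speed states follows; the binning into discrete states is an assumption, standing in for the paper's discretized hourly wind speeds:

```python
import random
from collections import defaultdict

def fit_second_order(seq):
    """Count next-state occurrences conditioned on the (previous, current)
    pair of states -- the second order transition structure."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[(a, b)][c] += 1
    return counts

def generate(counts, first, second, n, rng):
    """Sample a synthetic state sequence of length n from the fitted
    transition counts, seeded with two initial states."""
    out = [first, second]
    while len(out) < n:
        dist = counts[(out[-2], out[-1])]
        states = list(dist)
        out.append(rng.choices(states, weights=[dist[s] for s in states])[0])
    return out
```

The first-order variant is the same idea with `counts` keyed on a single state; the second-order key is what lets the chain reproduce longer autocorrelations.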

  8. Controlling influenza disease: Comparison between discrete time Markov chain and deterministic model

    Science.gov (United States)

    Novkaniza, F.; Ivana, Aldila, D.

    2016-04-01

    Mathematical models of respiratory disease spread using a Discrete Time Markov Chain (DTMC) and a deterministic approach, for a constant total population size, are analyzed and compared in this article. Intervention by medical treatment and the use of medical masks are included in the model as constant parameters to control influenza spread. Equilibrium points and the basic reproductive ratio, as the endemic criteria, together with its level sets as functions of the model variables, are given analytically and numerically as results of the deterministic model analysis. Assuming the total human population is constant, as in the deterministic model, the number of infected people is also analyzed with the Discrete Time Markov Chain (DTMC) model. Since Δt → 0, we may assume that the total number of infected people changes only from i to i + 1, i - 1, or i. An approximation of the outbreak probability via the gambler's ruin problem is presented. We find that no matter the value of the basic reproductive ratio ℛ0, whether larger or smaller than one, the number of infections always tends to 0 as t → ∞. Numerical simulations comparing the deterministic and DTMC approaches are given to provide a better interpretation and understanding of the models' results.
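For context on the gambler's-ruin approximation mentioned here, the classical result for a simple linear birth-death chain can be stated in a few lines; this is the textbook formula, not necessarily the authors' exact formulation:

```python
def extinction_probability(b, d, i0):
    """Classical gambler's-ruin / branching-process result for a linear
    birth-death chain: with per-capita transmission rate b and recovery
    rate d, an outbreak starting from i0 infected individuals dies out
    with probability 1 when R0 = b/d <= 1, and with probability
    (d/b)**i0 otherwise."""
    if b <= d:
        return 1.0
    return (d / b) ** i0
```

Under this approximation, a larger initial number of infected i0 drives the outbreak's survival probability toward 1 whenever R0 > 1.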

  9. MBMC: An Effective Markov Chain Approach for Binning Metagenomic Reads from Environmental Shotgun Sequencing Projects.

    Science.gov (United States)

    Wang, Ying; Hu, Haiyan; Li, Xiaoman

    2016-08-01

    Metagenomics is a next-generation omics field currently impacting postgenomic life sciences and medicine. Binning metagenomic reads is essential for the understanding of microbial function, compositions, and interactions in given environments. Despite the existence of dozens of computational methods for metagenomic read binning, it is still very challenging to bin reads. This is especially true for reads from unknown species, from species with similar abundance, and/or from low-abundance species in environmental samples. In this study, we developed a novel taxonomy-dependent and alignment-free approach called MBMC (Metagenomic Binning by Markov Chains). Different from all existing methods, MBMC bins reads by measuring the similarity of reads to the trained Markov chains for different taxa instead of directly comparing reads with known genomic sequences. By testing on more than 24 simulated and experimental datasets with species of similar abundance, species of low abundance, and/or unknown species, we report here that MBMC reliably grouped reads from different species into separate bins. Compared with four existing approaches, we demonstrated that the performance of MBMC was comparable with existing approaches when binning reads from sequenced species, and superior to existing approaches when binning reads from unknown species. MBMC is a pivotal tool for binning metagenomic reads in the current era of Big Data and postgenomic integrative biology. The MBMC software can be freely downloaded at http://hulab.ucf.edu/research/projects/metagenomics/MBMC.html . PMID:27447888
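The core idea of likelihood-based binning can be sketched with an order-1 chain; MBMC itself may use a higher order, and the add-one smoothing and the toy "AT-rich"/"GC-rich" models below are illustrative assumptions:

```python
import math
from collections import defaultdict

def train_chain(sequences, alphabet="ACGT"):
    """Order-1 Markov chain: log P(next | current) estimated from training
    sequences with add-one (Laplace) smoothing."""
    counts = defaultdict(int)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    logp = {}
    for a in alphabet:
        total = sum(counts[(a, b)] for b in alphabet) + len(alphabet)
        for b in alphabet:
            logp[(a, b)] = math.log((counts[(a, b)] + 1) / total)
    return logp

def score(read, logp):
    """Log-likelihood of a read under a trained chain."""
    return sum(logp[(a, b)] for a, b in zip(read, read[1:]))

def bin_read(read, models):
    """Assign the read to the taxon whose chain gives the highest likelihood."""
    return max(models, key=lambda name: score(read, models[name]))
```

Comparing reads against trained chains rather than against reference genomes is what lets this style of binning handle reads from unknown species.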

  10. Testing and Evaluation for Web Usability Based on Extended Markov Chain Model

    Institute of Scientific and Technical Information of China (English)

    MAO Cheng-ying; LU Yan-sheng

    2004-01-01

    With the increasing popularity and complexity of Web applications and the emergence of their new characteristics, the testing and maintenance of large, complex Web applications are becoming more complex and difficult. Web applications generally contain many pages and are used by an enormous number of users. Statistical testing is an effective way of ensuring their quality. Web usage can be accurately described by a Markov chain, which has been proved to be an ideal model for software statistical testing. The results of unit testing can be utilized in the later stages, which is an important strategy for bottom-up integration testing; the other improvement of the extended Markov chain model (EMM) is the error type vector, which is treated as part of the page node. This paper also proposes an algorithm for generating test cases for usage paths. Finally, optional usage reliability evaluation methods and an incremental usability regression testing model for testing and evaluation are presented.

  11. Detection and Projection of Forest Changes by Using the Markov Chain Model and Cellular Automata

    Directory of Open Access Journals (Sweden)

    Griselda Vázquez-Quintero

    2016-03-01

    Full Text Available The spatio-temporal analysis of land use changes can provide basic information for managing the protection, conservation and production of forestlands, which promotes sustainable resource use of temperate ecosystems. In this study we modeled and analyzed the spatial and temporal dynamics of land use of a temperate forest in the region of Pueblo Nuevo, Durango, Mexico. Data from the Landsat Multispectral Scanner (MSS, 1973), Thematic Mapper (TM, 1990), and Operational Land Imager (OLI, 2014) images were used. Supervised classification methods were then applied to generate the land use maps for these years. To validate the land use classifications, the Kappa coefficient was used; the resulting Kappa coefficients were 91%, 92% and 90% for 1973, 1990 and 2014, respectively. The change dynamics were assessed with Markov chains and cellular automata (CA), which are based on probabilistic modeling techniques. The Markov chains and CA show constant changes in land use, and the class most affected by these changes is the pine forest. Changes in the extent of the temperate forest of the study area were further projected to 2028, indicating that the area of pine forest could be continuously reduced. The results of this study provide quantitative information that represents a base for assessing the sustainability of the management of these temperate forest ecosystems and for taking actions to mitigate their degradation.
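Markov-chain projection of land-use change propagates class area fractions through repeated applications of a transition matrix. A sketch with a hypothetical two-class matrix follows; the study derives its classes from Landsat imagery and couples the chain with cellular automata for the spatial allocation step:

```python
def project(P, areas, steps):
    """Propagate land-use area fractions `areas` through `steps`
    applications of a row-stochastic transition matrix P, where
    P[i][j] is the probability that class i converts to class j."""
    v = list(areas)
    k = len(v)
    for _ in range(steps):
        v = [sum(v[i] * P[i][j] for i in range(k)) for j in range(k)]
    return v
```

Because each row of P sums to one, the projected fractions always remain a valid partition of the total area.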

  12. A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo

    Science.gov (United States)

    Chen, Yajun; Liu, Ding; Liang, Junli

    2013-10-01

    During printing quality inspection, the inspection of color error is an important task. However, the RGB color space is device-dependent; usually RGB colors captured by a CCD camera must be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. Firstly, modeling color targets and testing color targets are established, used respectively in the modeling and performance testing processes. Secondly, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Thirdly, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective; its performance is similar to the least squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.

  13. Sampling graphs with a prescribed joint degree distribution using Markov Chains.

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Stanton, Isabelle (UC Berkeley)

    2010-10-01

    One of the most influential results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that the joint degree distribution of graphs is an interesting avenue of study for further research into network structure. We provide a simple greedy algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge.
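The endpoint switch that connects the state space of simple graphs with a fixed degree sequence can be sketched as a single MCMC move type; moves that would create a self-loop or a duplicate edge are rejected so the chain stays inside the space of simple graphs:

```python
import random

def degree_sequence(edges):
    """Node -> degree map of an undirected edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def endpoint_switch(edges, rng, attempts):
    """Pick two edges (a,b) and (c,d), rewire them to (a,d) and (c,b).
    Every accepted move preserves the degree of all four endpoints."""
    E = {frozenset(e) for e in edges}
    for _ in range(attempts):
        e1, e2 = rng.sample(sorted(E, key=sorted), 2)
        a, b = tuple(e1)
        c, d = tuple(e2)
        if rng.random() < 0.5:
            c, d = d, c                       # randomize the re-pairing
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if len(new1) < 2 or len(new2) < 2 or new1 in E or new2 in E:
            continue                          # reject: self-loop or multi-edge
        E -= {e1, e2}
        E |= {new1, new2}
    return [tuple(e) for e in E]
```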

  14. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    Science.gov (United States)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two pole caustic and pair starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).

  15. Relocation hypocenter of microearthquake using Markov Chain simulation: Case study on geothermal field

    Science.gov (United States)

    Adu, Nurlia; Indriati Retno, P.; Suharsono

    2016-02-01

    Monitoring of microseismic activity in a geothermal field is useful for identifying the fracture controllers in the geothermal reservoir area. However, the determined microearthquake hypocenters still contain inherent uncertainties due to several factors, such as mismatches between the velocity model used and the actual subsurface conditions. For that reason, hypocenter relocation by the Markov Chain method is used to simulate the hypocenter locations spatially based on transition probabilities, following the principle of conditional probability. The purpose of this relocation is to improve the hypocenter models so that the interpretation of the subsurface structure is better. The relocation using the Markov Chain identified subsurface fault structures trending northeast-southwest (NE-SW), at approximately N38°E. This structure is suspected to be the continuation of the structure at the surface. The depths of the hypocenters range from 758 m above mean sea level to more than 800 m below mean sea level.

  16. Markov Chain Modelling of Reliability Analysis and Prediction under Mixed Mode Loading

    Institute of Scientific and Technical Information of China (English)

    SINGH Salvinder; ABDULLAH Shahrum; NIK MOHAMED Nik Abdullah; MOHD NOORANI Mohd Salmi

    2015-01-01

    The reliability assessment for an automobile crankshaft provides an important understanding in dealing with the design life of the component in order to eliminate or reduce the likelihood of failure and safety risks. The failures of crankshafts are considered catastrophic failures that lead to severe failure of the engine block and its other connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading using the Markov Chain Model is studied. The Markov Chain is modelled using a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimation of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve and the mean time to failure. The various properties of the shape parameter are used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns posed by the hazard rate onto the component can be used to improve the design and increase the life cycle based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost effective reliability analysis in contrast to costly and lengthy experimental techniques.

  17. Fisher information and asymptotic normality in system identification for quantum Markov chains

    International Nuclear Information System (INIS)

    This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ=0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.

  18. Entropy and long-range memory in random symbolic additive Markov chains

    Science.gov (United States)

    Melnik, S. S.; Usatenko, O. V.

    2016-06-01

    The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
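The conditional entropy of a symbolic sequence given its immediate predecessor can be estimated directly from bigram frequencies. This order-1 estimator is a simplification of the paper's high-order additive chains, offered only to make the quantity concrete:

```python
import math
from collections import Counter

def conditional_entropy(seq):
    """Estimate H(X_{t+1} | X_t) in bits from bigram frequencies of a
    symbolic sequence (a string or any list of hashable symbols)."""
    pairs = Counter(zip(seq, seq[1:]))
    singles = Counter(seq[:-1])
    n = len(seq) - 1
    return -sum(c / n * math.log2(c / singles[a]) for (a, _), c in pairs.items())
```

A deterministic alternating sequence has zero conditional entropy, while weak correlations raise it toward the entropy of the marginal distribution.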

  19. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

    CERN Document Server

    Ferraioli, Luigi; Plagnol, Eric

    2012-01-01

    We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...
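The single-stage Metropolis core of such samplers, without the delayed-rejection second stage or the annealing treatment, can be sketched as:

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n, rng):
    """Random-walk Metropolis sampler: propose x' ~ N(x, step**2) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp                   # accept the proposal
        samples.append(x)
    return samples
```

Delayed rejection adds a second, usually smaller, proposal after each rejection, which helps the chain escape the isolated local maxima described in this record.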

  20. Population Synthesis of Normal Radio and Gamma-ray Pulsars Using Markov Chain Monte Carlo Techniques

    CERN Document Server

    Gonthier, Peter L; Harding, Alice K

    2012-01-01

    We present preliminary results of a pulsar population synthesis of normal pulsars from the Galactic disk using a Markov Chain Monte Carlo method to better understand the parameter space of the assumed model. We use the Kuiper test, similar to the Kolmogorov-Smirnov test, to compare the cumulative distributions of chosen observables of detected radio pulsars with those simulated for various parameters. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present, given radio and gamma-ray emission characteristics, filtered through ten selected radio surveys, and a {\\it Fermi} all-sky threshold map. Each chain begins with a different random seed and searches a ten-dimensional parameter space for regions of high probability for a total of one thousand different simulations before ending. The code investigates both the "large world" as well as the "small world...

  1. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    Science.gov (United States)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.

  2. A multiple-state discrete-time Markov chain for estimating suspended sediment concentrations in open channel flow

    Science.gov (United States)

    Tsai, Christina; Wu, Nai-Kuang

    2015-04-01

    In this study, transport processes of uniform-size sediment particles under steady and uniform flow are described by a multi-state discrete-time Markov chain, which is employed to estimate the suspended sediment concentration distribution versus water depth for various steady and uniform flow conditions. Model results are validated against available measurement data and the Rouse profile. Moreover, the multi-state discrete-time Markov chain can be used to quantify the average time spent for the flow to reach the dynamic equilibrium of particle deposition and entrainment processes. In the first part of this study, suspended sediment concentrations under three different flow conditions are discussed. As the Rouse number decreases, the difference between the suspended sediment concentration estimated by the Markov chain model and the Rouse profile becomes more significant, and such discrepancy can be observed at a larger relative height from the bed. This can be attributed to the fact that the use of the terminal settling velocity in the transport process can lead to underestimation of the model residence probability and overestimation of the deposition probability. In the second part, laboratory experiments are used to validate the proposed multi-state discrete-time Markov chain model. It is observed that it takes more time for the sediment concentration to reach dynamic equilibrium as the Rouse number decreases. In addition, the flow depth is found to be a contributing factor in the time spent to reach the concentration dynamic equilibrium. It is recognized that the performance of the proposed model relies significantly on knowledge of the vertical distribution of the turbulence intensity.
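The time to reach dynamic equilibrium of a discrete-time chain can be quantified by iterating the state distribution until it stops changing. A minimal sketch follows; the tolerance and any example matrix are illustrative, not a sediment model:

```python
def steps_to_equilibrium(P, v0, tol=1e-8, max_steps=100000):
    """Iterate v <- v P until successive distributions differ by less
    than `tol` in L1 norm; returns (steps taken, limiting distribution)."""
    v = list(v0)
    k = len(v)
    for step in range(1, max_steps + 1):
        w = [sum(v[i] * P[i][j] for i in range(k)) for j in range(k)]
        if sum(abs(a - b) for a, b in zip(v, w)) < tol:
            return step, w
        v = w
    return max_steps, v
```

For an ergodic chain the returned distribution approximates the stationary distribution, and the step count is one simple proxy for the relaxation time discussed in this record.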

  3. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    International Nuclear Information System (INIS)

    This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for the Continuous Time, Discrete Space Markov chains (CTMC), as an alternative to the other computational expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled Fault-Tree - Markov chain technique, i.e. the abstraction of the physical system is performed using as the high level interface the Fault-Tree and afterwards this one is automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses using ASAP applied to CTMC equations are performed to study the influence of uncertainties in input data to the reliability measures and to get the confidence in the final reliability results. The methods to generate the Markov chain and the ASAP for the Markov chain equations have been implemented into the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out by using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using the Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using Markov chain have been concentrated around the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have been

  4. Renormalization group for centrosymmetric gauge transformations of the dynamic motion for a Markov-ordered polymer chain

    International Nuclear Information System (INIS)

    A method is proposed for calculating the vibrational-state density averaged over all configurations for a polymer chain with Markov disorder. The method is based on using a group of centrally symmetric gauge transformations that reduce the dynamic matrix for a long polymer chain to renormalized dynamic matrices for short fragments. The short-range order is incorporated exactly in the averaging procedure, while the long-range order is incorporated in the self-consistent field approximation. Results are given for a simple skeletal model of a polymer containing tacticity deviations of Markov type

  5. Studying stellar binary systems with the Laser Interferometer Space Antenna using delayed rejection Markov chain Monte Carlo methods

    International Nuclear Information System (INIS)

    Bayesian analysis of Laser Interferometer Space Antenna (LISA) data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, which consists of several isolated local maxima that dramatically reduce the efficiency of the sampling techniques. Here we introduce a new, fully Markovian algorithm, a delayed rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.
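
The delayed-rejection idea above can be sketched in a few lines: after a first rejected proposal, a second, bolder proposal is tried, with a second-stage acceptance ratio (Tierney and Mira's rule) that preserves detailed balance. Everything below, including the two proposal scales, the toy bimodal target, and the step count, is an illustrative assumption, not the paper's LISA setup.

```python
import math
import random

def delayed_rejection_mh(log_target, x0, n_steps, scale1=0.5, scale2=2.0, seed=1):
    """Two-stage delayed-rejection Metropolis-Hastings with Gaussian proposals."""
    def log_q(a, b, s):
        # Log density (up to a constant) of proposing b from a with scale s.
        return -0.5 * ((b - a) / s) ** 2

    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        # Stage 1: cautious local move.
        y1 = x + rng.gauss(0.0, scale1)
        a1 = min(1.0, math.exp(log_target(y1) - log_target(x)))
        if rng.random() < a1:
            x = y1
        else:
            # Stage 2: bolder move; the acceptance ratio below keeps the
            # chain reversible despite the first rejection.
            y2 = x + rng.gauss(0.0, scale2)
            a1_rev = min(1.0, math.exp(log_target(y1) - log_target(y2)))
            log_ratio = (log_target(y2) + log_q(y2, y1, scale1)
                         - log_target(x) - log_q(x, y1, scale1))
            num = math.exp(log_ratio) * (1.0 - a1_rev)
            if (1.0 - a1) > 0.0 and rng.random() < min(1.0, num / (1.0 - a1)):
                x = y2
        chain.append(x)
    return chain

# Toy bimodal target (two Gaussians at +/-2), standing in for the isolated
# likelihood maxima described in the abstract.
log_target = lambda x: math.log(math.exp(-0.5 * (x - 2.0) ** 2)
                                + math.exp(-0.5 * (x + 2.0) ** 2))
chain = delayed_rejection_mh(log_target, 0.0, 20000)
```

The bold second-stage proposal is what lets the sampler hop between isolated maxima that the cautious first-stage proposal alone would rarely cross.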

  6. A Markov Chain-based quantitative study of angular distribution of photons through turbid slabs via isotropic light scattering

    Science.gov (United States)

    Li, Xuesong; Northrop, William F.

    2016-04-01

    This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve the multiple-scattering angular distribution (AD) that can accurately calculate the AD while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and deterministic reconstruction algorithm development with AD measurements.
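
The core idea, propagating an angular distribution through repeated scattering events as a Markov chain rather than by Monte Carlo sampling, can be illustrated on a discretized angle axis with a row-stochastic single-scatter kernel. The kernel used here (a mix of "stay in the same bin" and uniform isotropic redistribution), the bin count, and the pencil-beam initial condition are illustrative assumptions, not the paper's actual slab model.

```python
def scatter_step(p, g=0.5):
    """One scattering event on a discretized angular distribution: with
    weight g the photon keeps its angular bin, with weight (1 - g) it is
    redistributed isotropically (uniformly over all bins)."""
    n = len(p)
    total = sum(p)
    return [g * p[j] + (1.0 - g) * total / n for j in range(n)]

def angular_distribution(p0, n_scatters, g=0.5):
    """AD after n_scatters events: repeated application of the kernel,
    i.e. multiplication by the transition matrix, no random sampling."""
    p = p0[:]
    for _ in range(n_scatters):
        p = scatter_step(p, g)
    return p

# Pencil beam entering the slab: all weight in one angular bin of 18.
p0 = [1.0] + [0.0] * 17
ad = angular_distribution(p0, 8)
```

Total probability is conserved at every step, and after a handful of scattering orders the distribution is close to uniform, the expected isotropization of a deep turbid slab.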

  7. Enhanced modeling via network theory: Adaptive sampling of Markov state models

    OpenAIRE

    Bowman, Gregory R; Ensign, Daniel L.; Pande, Vijay S.

    2010-01-01

    Computer simulations can complement experiments by providing insight into molecular kinetics with atomic resolution. Unfortunately, even the most powerful supercomputers can only simulate small systems for short timescales, leaving modeling of most biologically relevant systems and timescales intractable. In this work, however, we show that molecular simulations driven by adaptive sampling of networks called Markov State Models (MSMs) can yield tremendous time and resource savings, allowing p...

  8. Unsupervised SAR images change detection with hidden Markov chains on a sliding window

    Science.gov (United States)

    Bouyahia, Zied; Benyoussef, Lamia; Derrode, Stéphane

    2007-10-01

    This work deals with unsupervised change detection in bi-date Synthetic Aperture Radar (SAR) images. Whatever the indicator of change used, e.g. log-ratio or Kullback-Leibler divergence, we have observed poor-quality change maps for some events when using the Hidden Markov Chain (HMC) model we focus on in this work. The main reason comes from the stationarity assumption involved in this model - and in most Markovian models, such as hidden Markov random fields - which cannot be justified in most observed scenes: changed areas are not necessarily stationary in the image. Beside the few non-stationary Markov models proposed in the literature, the aim of this paper is to describe a pragmatic solution to tackle stationarity by using a sliding-window strategy. In this algorithm, the criterion image is scanned pixel by pixel, and a classical HMC model is applied only to neighboring pixels. By moving the window through the image, the process is able to produce a change map which can better exhibit non-stationary changes than the classical HMC applied directly to the whole criterion image. Special care is devoted to the estimation of the number of classes in each window, which can vary from one (no change) to three (positive change, negative change and no change), by using the corrected Akaike Information Criterion (AICc) suited to small samples. The quality assessment of the proposed approach is achieved with speckle-simulated images in which simulated changes are introduced. The windowed strategy is also evaluated with a pair of RADARSAT images bracketing the Nyiragongo volcano eruption event in January 2002. The available ground truth confirms the effectiveness of the proposed approach compared to a classical HMC-based strategy.

  9. The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment

    Directory of Open Access Journals (Sweden)

    Lun-Hui Xu

    2013-01-01

    Full Text Available Urban traffic self-adaptive control is a dynamic and uncertain problem, so the states of the traffic environment are hard to observe. An efficient agent which controls a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of previous works on this approach, each agent needed perfectly observed information when interacting with the environment and learned individually, with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model for the TSCAs' interaction is built based on a nonzero-sum Markov game, which has been applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting.
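
The single-agent Q-learning rule that the multiagent method above builds on can be sketched on a toy corridor MDP. The environment (a five-state corridor with left/right actions and a reward at the rightmost state), the learning rate, and the episode count are illustrative assumptions, not the traffic setting of the paper.

```python
import random

def q_learning(n_states=5, n_episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: Q[s][a] += alpha * (r + gamma*max_a' Q[s'][a'] - Q[s][a]).
    Actions: 0 = step left (floor at state 0), 1 = step right; reward 1 on
    reaching the rightmost (terminal) state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda a_: Q[s][a_])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# Greedy policy over the non-terminal states.
greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]
```

The multiagent extension in the paper replaces the single-agent target with one defined over joint actions of neighboring agents; the update skeleton stays the same.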

  10. Retail Banking Loan Portfolio Equilibrium Mix : A Markov Chain Model Analysis

    Directory of Open Access Journals (Sweden)

    V. Thyagarajan

    2005-01-01

    Full Text Available The variance analysis of actual loan sanctions against the non-documented method of loan allocation of the selected retail bank, over a period of 24 months, revealed that there is scope to improve its income earnings. Realizing its importance, the Markov chain market share model was applied to inter-temporal data on loan disbursements of the selected bank. By estimating the transition matrix, the probability of loan switching among the loan types was calculated to suggest a probable mix of the loan portfolio. From the results, the suggested loan proportions among the various types were as follows: Housing (32.0 %), Others (28.1 %), Business (20.0 %) and Education (19.7 %). These proportions can be taken as guideline percentages within the government norms for the priority sector. Simulation studies were also done to calculate the expected interest income using the Markov proportions, which was compared with the actual interest earnings to prove the superiority of the model.

  11. Exit time tails from pairwise decorrelation in hidden Markov chains, with applications to dynamical percolation

    CERN Document Server

    Hammond, Alan; Pete, Gábor

    2011-01-01

    Consider a Markov process $\omega_t$ at equilibrium and some event C (a subset of the state-space of the process). A natural measure of correlations in the process is the pairwise correlation $\Pr[\omega_0, \omega_t \in C] - \Pr[\omega_0 \in C]^2$. A second natural measure is the probability of the continual occurrence event $\{\omega_s \in C, \forall s \in [0,t]\}$. We show that for reversible Markov chains, and any event C, pairwise decorrelation of the event C implies a decay of the probability of the continual occurrence event $\{\omega_s \in C, \forall s \in [0,t]\}$ as $t \to \infty$. We provide examples showing that our results are often sharp. Our main applications are to dynamical critical percolation. Let C be the left-right crossing event of a large box, and let us scale time so that the expected number of changes to C is order 1 in unit time. We show that the continual connection event has superpolynomial decay. Furthermore, on the infinite lattice without any time scaling, the first exceptional time with an in...

  12. Reliability Measures of Second-Order Semi-Markov Chain Applied to Wind Energy Production

    Directory of Open Access Journals (Sweden)

    Guglielmo D'Amico

    2013-01-01

    Full Text Available We consider the problem of wind energy production by using a second-order semi-Markov chain in state and duration as a model of wind speed. The model used in this paper is based on our previous work where we have shown the ability of second-order semi-Markov process in reproducing statistical features of wind speed. Here we briefly present the mathematical model and describe the data and technical characteristics of a commercial wind turbine (Aircon HAWT-10 kW. We show how, by using our model, it is possible to compute some of the main dependability measures such as reliability, availability, and maintainability functions. We compare, by means of Monte Carlo simulations, the results of the model with real energy production obtained from data available in the Lastem station (Italy and sampled every 10 minutes. The computation of the dependability measures is a crucial point in the planning and development of a wind farm. Through our model, we show how the values of this quantity can be obtained both analytically and computationally.
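
A much simpler discrete-time analogue of the dependability computation described above is the long-run availability of a two-state (up/down) Markov chain, which can be obtained both by Monte Carlo simulation and analytically, mirroring the paper's dual approach. The per-step failure and repair probabilities below are made-up numbers, not parameters of the Aircon HAWT-10 kW turbine.

```python
import random

def simulate_availability(p_fail, p_repair, n_steps=200000, seed=7):
    """Monte Carlo estimate of long-run availability of a two-state
    Markov chain: up -> down with prob p_fail, down -> up with p_repair."""
    rng = random.Random(seed)
    state, up_time = 1, 0  # start in the 'up' state
    for _ in range(n_steps):
        if state == 1:
            up_time += 1
            if rng.random() < p_fail:
                state = 0
        else:
            if rng.random() < p_repair:
                state = 1
    return up_time / n_steps

A = simulate_availability(p_fail=0.01, p_repair=0.1)
# Analytic long-run availability of this chain: p_repair / (p_fail + p_repair),
# here 0.1 / 0.11 = 0.9090..., which the simulation should approach.
```

The second-order semi-Markov model of the paper enriches this picture by letting transition probabilities depend on the two previous states and on the time already spent in the current state.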

  13. Markov chain modeling of precipitation time series: Modeling waiting times between tipping bucket rain gauge tips

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten

    2011-01-01

    A very fine temporal and volumetric resolution precipitation time series is modeled using Markov models. Both 1st and 2nd order Markov models as well as seasonal and diurnal models are investigated and evaluated using likelihood based techniques. The 2nd order Markov model is found to be insignificant.
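
A sketch of the likelihood-based comparison of 1st- versus 2nd-order Markov models, in the spirit of the record above: fit each order by maximum likelihood (transition probabilities from counts) and trade fit against parameter count with AIC. The simulated wet/dry sequence and its persistence probabilities are invented for illustration; this is not the authors' tipping-bucket data or their exact evaluation.

```python
import random
from collections import Counter
from math import log

def log_lik_markov(seq, order):
    """Maximum log-likelihood of a discrete sequence under a Markov model
    of the given order, with MLE transition probabilities from counts."""
    ctx, full = Counter(), Counter()
    for i in range(order, len(seq)):
        c = tuple(seq[i - order:i])
        ctx[c] += 1
        full[c + (seq[i],)] += 1
    return sum(n * log(n / ctx[k[:-1]]) for k, n in full.items())

def aic(seq, order, n_symbols=2):
    # Free parameters: one (n_symbols - 1)-simplex per context.
    k = (n_symbols ** order) * (n_symbols - 1)
    return 2 * k - 2 * log_lik_markov(seq, order)

# Simulate a genuinely first-order wet(1)/dry(0) sequence with
# state-dependent persistence probabilities.
rng = random.Random(42)
stay = {0: 0.9, 1: 0.6}
seq, s = [], 0
for _ in range(5000):
    s = s if rng.random() < stay[s] else 1 - s
    seq.append(s)

aic1, aic2 = aic(seq, 1), aic(seq, 2)
```

Because the models are nested, the 2nd-order fit can never have a lower likelihood; AIC's penalty is what lets the data declare the extra order insignificant.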

  14. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    Science.gov (United States)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
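
The core numerical point above, that first-order explicit fixed-step integration can corrupt results when the step is too large for the dynamics, is visible already on a linear reservoir dS/dt = -kS with exact solution S(t) = S0*exp(-kt). The rate constant and step sizes below are illustrative, not taken from the paper's case study.

```python
import math

def euler_fixed(k, s0, t_end, dt):
    """First-order explicit fixed-step (forward Euler) integration of
    the linear reservoir dS/dt = -k*S."""
    s, t = s0, 0.0
    while t < t_end - 1e-12:
        s += dt * (-k * s)   # one explicit Euler step
        t += dt
    return s

k, s0, t_end = 5.0, 1.0, 1.0
exact = s0 * math.exp(-k * t_end)
coarse = euler_fixed(k, s0, t_end, dt=0.25)    # k*dt = 1.25: badly resolved
fine = euler_fixed(k, s0, t_end, dt=0.001)     # k*dt = 0.005: well resolved
unstable = euler_fixed(k, s0, t_end, dt=0.5)   # k*dt = 2.5: growth, not decay
```

At dt = 0.5 the scheme does not merely lose accuracy but produces growth from a purely decaying system, the kind of artifact that, per the abstract, can masquerade as structure in posterior parameter distributions.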

  15. A methodology for stochastic analysis of share prices as Markov chains with finite states.

    Science.gov (United States)

    Mettle, Felix Okoe; Quaye, Enoch Nii Boi; Laryea, Ravenhill Adjetey

    2014-01-01

    Price volatilities make stock investments risky, leaving investors in a critical position when decisions must be made under uncertainty. To improve investor confidence in evaluating exchange markets, while not using time series methodology, we specify equity price changes as a stochastic process assumed to possess Markov dependency, with state transition probability matrices over the identified state space (i.e. decrease, stable or increase). We established that the identified states communicate, and that the chains are aperiodic and ergodic, thus possessing limiting distributions. We developed a methodology for determining the expected mean return time for stock price increases and also established criteria for improving investment decisions based on the highest transition probabilities, lowest mean return time and highest limiting distributions. We further developed an R algorithm for running the introduced methodology. The established methodology is applied to selected equities from Ghana Stock Exchange weekly trading data. PMID:25520904
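
The pipeline of the record above, estimating a transition matrix from a ternary state sequence, taking its limiting distribution, and reading off mean return times as reciprocals of the stationary probabilities, can be sketched as follows (in Python rather than the authors' R). The weekly state sequence is invented for illustration, not Ghana Stock Exchange data.

```python
def transition_matrix(seq, n_states=3):
    """MLE transition matrix from an observed state sequence
    (0 = decrease, 1 = stable, 2 = increase)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return [[c / max(1, sum(row)) for c in row] for row in counts]

def limiting_distribution(P, n_iter=5000):
    """Limiting distribution of an ergodic chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical weekly price-change states for one equity.
seq = [0, 2, 2, 1, 0, 2, 1, 1, 2, 0, 2, 2, 1, 2, 0, 1, 2, 2, 0, 2]
P = transition_matrix(seq)
pi = limiting_distribution(P)
# For an ergodic chain, the expected mean return time to state i is 1/pi[i].
mean_return_time = [1.0 / p for p in pi]
```

The investment criteria of the paper then rank states (and equities) by high transition probability into "increase", low mean return time, and high limiting probability.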

  16. Observational constraints on G-corrected holographic dark energy using a Markov chain Monte Carlo method

    CERN Document Server

    Alavirad, Hamzeh

    2013-01-01

    We constrain holographic dark energy (HDE) with time varying gravitational coupling constant in the framework of the modified Friedmann equations using cosmological data from type Ia supernovae, baryon acoustic oscillations, cosmic microwave background radiation and X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best fit values of the model and cosmological parameters within $1\\sigma$ confidence level (CL) in a flat universe as: $\\Omega_{\\rm b}h^2=0.0222^{+0.0018}_{-0.0013}$, $\\Omega_{\\rm c}h^2 =0.1121^{+0.0110}_{-0.0079}$, $\\alpha_{\\rm G}\\equiv \\dot{G}/(HG) =0.1647^{+0.3547}_{-0.2971}$ and the HDE constant $c=0.9322^{+0.4569}_{-0.5447}$. Using the best fit values, the equation of state of the dark component at the present time $w_{\\rm d0}$ at $1\\sigma$ CL can cross the phantom boundary $w=-1$.

  17. From complex spatial dynamics to simple Markov chain models: do predators and prey leave footprints?

    DEFF Research Database (Denmark)

    Nachman, Gøsta Støger; Borregaard, Michael Krabbe

    2010-01-01

    …patches with both prey and predators, with prey only, with predators only, and with neither species, along with the number of patches that change from one state to another in each time step. The average number of patches in the four states, as well as the average transition probabilities from one state to… species) are reflected by different footprints. The transition probabilities can be used to forecast the expected fate of a system given its current state. However, the transition probabilities in the modeled system depend on the number of patches in each state. We develop a model for the dependence of transition probabilities on state variables, and combine this information in a Markov chain transition matrix model. Finally, we use this extended model to predict the long-term dynamics of the system and to reveal its asymptotic steady-state properties.

  18. A Markov Chain Model for the Analysis of Round-Robin Scheduling Scheme

    Directory of Open Access Journals (Sweden)

    D. Shukla

    2009-07-01

    Full Text Available In the literature on the round-robin scheduling scheme, each job is processed one after another, each receiving a fixed quantum. Under first-come-first-served, each process is executed once the previously arrived process has completed. Both of these scheduling schemes are used in this paper as special cases. A Markov chain model is used to compare several scheduling schemes of the class. An index measure is defined to compare the model-based efficiency of the different scheduling schemes. One scheduling scheme, a mixture of FIFO and round robin, is found to be efficient in terms of the model-based study. A system simulation procedure is used to derive the conclusions.
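
The two special cases mentioned above are easy to make concrete: round-robin with a finite quantum, and first-come-first-served as the infinite-quantum limit of the same loop. The burst times below are arbitrary illustrative values, and this sketch covers only the scheduling mechanics, not the paper's Markov chain index measure.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Completion time of each job under round-robin with the given quantum.
    quantum=float('inf') degenerates to first-come-first-served."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))   # arrival order
    t, done = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run one slice
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # back to the end of the queue
        else:
            done[i] = t
    return done

rr = round_robin([3, 1, 2], quantum=1)            # completion times [6, 2, 5]
fcfs = round_robin([3, 1, 2], quantum=float('inf'))  # completion times [3, 4, 6]
```

Short jobs finish much earlier under round-robin (job 1: time 2 versus 4), at the cost of delaying the longest job, which is the kind of trade-off the paper's index measure quantifies.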

  19. Image Edge Detection Using Hidden Markov Chain Model Based on the Non-decimated Wavelet

    Directory of Open Access Journals (Sweden)

    Renqi Zhang

    2009-03-01

    Full Text Available Edge detection plays an important role in digital image processing. Based on the non-decimated wavelet, which is shift invariant, we develop in this paper a new edge detection technique using a Hidden Markov Chain (HMC) model. In the proposed model (NWHMC), each wavelet coefficient contains a hidden state; here, we adopt a Laplacian model and a Gaussian model to represent the information of the state “big” and the state “small”. The model can be trained by the EM algorithm, and we then employ the Viterbi algorithm to reveal the hidden state of each coefficient according to MAP estimation. Detection results for several images are provided to evaluate the algorithm. In addition, the algorithm can be applied to noisy images efficiently.

  20. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy

    Science.gov (United States)

    King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael

    2010-11-01

    Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.

  1. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy

    CERN Document Server

    King, Julian A; Webb, John K; Murphy, Michael T

    2009-01-01

    Recent attempts to constrain cosmological variation in the fine structure constant, alpha, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for (Delta alpha)/(alpha), the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.

  2. Markov Chain Monte Carlo Exploration of Minimal Supergravity with Implications for Dark Matter

    International Nuclear Information System (INIS)

    We explore the full parameter space of Minimal Supergravity (mSUGRA), allowing all four continuous parameters (the scalar mass m0, the gaugino mass m1/2, the trilinear coupling A0, and the ratio of Higgs vacuum expectation values tan β) to vary freely. We apply current accelerator constraints on sparticle and Higgs masses, and on the b → sγ branching ratio, and discuss the impact of the constraints on gμ-2. To study dark matter, we apply the WMAP constraint on the cold dark matter density. We develop Markov Chain Monte Carlo (MCMC) techniques to explore the parameter regions consistent with WMAP, finding them to be considerably superior to previously used methods for exploring supersymmetric parameter spaces. Finally, we study the reach of current and future direct detection experiments in light of the WMAP constraint

  3. A multiple shock model for common cause failures using discrete Markov chain

    International Nuclear Information System (INIS)

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and makes some irrational assumptions. A multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. The further work for application is the estimation of parameters, such as the common cause shock rate and the component failure probability given a shock, p, through data analysis

  4. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.

  5. Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research

    Science.gov (United States)

    Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.

    2002-01-01

    Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.

  6. Rao-Blackwellised Interacting Markov Chain Monte Carlo for Electromagnetic Scattering Inversion

    International Nuclear Information System (INIS)

    The following electromagnetism (EM) inverse problem is addressed. It consists of estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large-scale, ill-posed inverse problem is explored through intensive exploitation of an efficient 2D Maxwell solver, distributed on High Performance Computing (HPC) machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, on which Bayesian inference can be performed. Considering the radioelectric properties as a dynamic stochastic process evolving as a function of frequency, it is shown how advanced Markov Chain Monte Carlo methods, called Sequential Monte Carlo (SMC) or interacting particles, can provide estimates of the EM properties of each material, along with their associated uncertainties.

  7. Data Model Approach And Markov Chain Based Analysis Of Multi-Level Queue Scheduling

    Directory of Open Access Journals (Sweden)

    Diwakar Shukla

    2010-01-01

    Full Text Available There are many CPU scheduling algorithms in the literature, like FIFO, Round Robin, Shortest-Job-First and so on. Multilevel-queue scheduling is superior to these due to its better management of a variety of processes. In this paper, a Markov chain model is used for a general setup of multilevel-queue scheduling, and the scheduler is assumed to perform random movement over the queues in each quantum of time. The performance of scheduling is examined through a row-dependent data model. It is found that with increasing values of α and d, the chance of the system entering the waiting state reduces. At some interesting combinations of α and d, it diminishes to zero, thereby providing some clue regarding a better choice of queues for high-priority jobs. It is found that if queue priorities are added to the scheduling intelligently, then better performance can be obtained. The data model helps in choosing appropriate preferences.

  8. Bayesian Lorentzian profile fitting using Markov-Chain Monte Carlo: An observer's approach

    CERN Document Server

    Gruberbauer, M; Weiss, W W

    2008-01-01

    Aims. Investigating stochastically driven pulsation puts strong requirements on the quality of (observed) pulsation frequency spectra, such as the accuracy of frequencies, amplitudes, and mode lifetimes and -- important when fitting these parameters with models -- a realistic error estimate, which can be quite different from the formal error. As has been shown by other authors, the method of fitting Lorentzian profiles to the power spectrum of time-resolved photometric or spectroscopic data via the Maximum Likelihood Estimation (MLE) procedure delivers good approximations for these quantities. We, however, intend to demonstrate that a conservative Bayesian approach allows one to treat this problem in a more consistent way. Methods. We derive a conservative Bayesian treatment for the probability of Lorentzian profiles being present in a power spectrum and describe its implementation via evaluating the probability density distribution of parameters using the Markov-Chain Monte Carlo (MCMC) technique. In addition, ...

  9. Assessing confidence in phylogenetic trees : bootstrap versus Markov chain Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom; Doak, J. E. (Justin E.); Gattiker, J. R. (James R.); Stanbro, W. D. (William D.)

    2002-01-01

    Recent implementations of Bayesian approaches are one of the largest advances in phylogenetic tree estimation in the last 10 years. Markov chain Monte Carlo (MCMC) is used in these new approaches to estimate the Bayesian posterior probability for each tree topology of interest. Our goal is to assess the confidence in the estimated tree (particularly in whether prespecified groups are monophyletic) using MCMC and to compare the Bayesian estimate of confidence to a bootstrap-based estimate of confidence. We compare the Bayesian posterior probability to the bootstrap probability for specified groups in two real sets of influenza sequences and two sets of simulated sequences. We conclude that the bootstrap estimate is adequate compared to the MCMC estimate, except perhaps when the number of DNA sites is small.

  10. Towards Analyzing Crossover Operators in Evolutionary Search via General Markov Chain Switching Theorem

    CERN Document Server

    Yu, Yang; Zhou, Zhi-Hua

    2011-01-01

    Evolutionary algorithms (EAs), simulating the evolution process of natural species, are used to solve optimization problems. Crossover (also called recombination), which originated from simulating the chromosome exchange phenomena in zoogamy reproduction, is widely employed in EAs to generate offspring solutions, and its effectiveness has been examined empirically in applications. However, due to the irregularity of crossover operators and their complicated interactions with mutation, crossover operators are hard to analyze and thus have few theoretical results. Therefore, analyzing crossover not only helps in understanding EAs, but also helps in developing novel techniques for analyzing sophisticated metaheuristic algorithms. In this paper, we derive the General Markov Chain Switching Theorem (GMCST) to facilitate theoretical studies of crossover-enabled EAs. The theorem allows us to analyze the running time of a sophisticated EA from an easy-to-analyze EA. Using this tool, we analyze EAs with several crossover o...

  11. Study of behavior and determination of customer lifetime value(CLV) using Markov chain model

    International Nuclear Information System (INIS)

    Customer Lifetime Value (CLV) is a metric in interactive marketing that helps a company arrange financing for the marketing of new customer acquisition and customer retention. Additionally, CLV can be used to segment customers for financial arrangements. Stochastic models for the fairly new CLV use a Markov chain. In this model, the customer retention probability and the new customer acquisition probability play an important role. The model was originally introduced by Pfeifer and Carraway in 2000 [1]. They introduced several CLV models, one of which involves only customers and former customers. In this paper we expand that model by adding the assumption of a transition from former customer back to customer. In the proposed model, the CLV value is higher than the CLV value obtained by the Pfeifer and Carraway model, but our model still requires a longer convergence time

  12. Study of behavior and determination of customer lifetime value(CLV) using Markov chain model

    Energy Technology Data Exchange (ETDEWEB)

    Permana, Dony, E-mail: donypermana@students.itb.ac.id [Statistics Research Division, Faculty of Mathematics and Natural Science, Bandung Institute of Technology, Indonesia and Statistics Study Program, Faculty of Mathematics and Natural Sciences, Padang State University (Indonesia); Indratno, Sapto Wahyu; Pasaribu, Udjianna S. [Statistics Research Division, Faculty of Mathematics and Natural Science, Bandung Institute of Technology (Indonesia)

    2014-03-24

    Customer Lifetime Value (CLV) is a metric used in interactive marketing to help a company allocate its marketing budget between new customer acquisition and customer retention. CLV can also be used to segment customers for financial planning. A fairly recent stochastic approach models CLV with a Markov chain, in which the customer retention probability and the new customer acquisition probability play an important role. The model was originally introduced by Pfeifer and Carraway in 2000 [1], who proposed several CLV models, one of which involves only customers and former customers. In this paper we extend that model by adding a transition from former customer back to customer. The CLV obtained from the proposed model is higher than that of the Pfeifer and Carraway model, but our model requires a longer convergence time.
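As a sketch of the kind of two-state customer/former-customer chain described in this record (all figures below are hypothetical, not from the paper): with a retention probability for current customers and a re-acquisition probability for former customers, the per-state CLV solves the discounted recursion CLV = m + d·P·CLV, and setting the re-acquisition probability to zero recovers the original customer/former-customer variant.

```python
import numpy as np

# States: 0 = customer, 1 = former customer.
# Hypothetical transition probabilities (illustrative only):
p_retain = 0.7   # customer stays a customer
p_return = 0.2   # former customer is re-acquired (the added transition)
P = np.array([[p_retain, 1 - p_retain],
              [p_return, 1 - p_return]])

m = np.array([100.0, 0.0])  # expected per-period margin in each state
d = 1 / 1.1                 # one-period discount factor

# Infinite-horizon CLV per state: CLV = m + d P CLV  =>  (I - d P) CLV = m
clv = np.linalg.solve(np.eye(2) - d * P, m)

# With p_return = 0 the chain reduces to the customer/former-customer
# variant without re-acquisition; the extra transition can only raise CLV.
P0 = np.array([[p_retain, 1 - p_retain], [0.0, 1.0]])
clv0 = np.linalg.solve(np.eye(2) - d * P0, m)
```

Solving the linear system is equivalent to summing the discounted series Σ dᵗ Pᵗ m, which is how the matrix form of the model is usually written.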

  13. Analysis of aerial survey data on Florida manatee using Markov chain Monte Carlo.

    Science.gov (United States)

    Craig, B A; Newton, M A; Garrott, R A; Reynolds, J E; Wilcox, J R

    1997-06-01

    We assess population trends of the Atlantic coast population of Florida manatee, Trichechus manatus latirostris, by reanalyzing aerial survey data collected between 1982 and 1992. To do so, we develop an explicit biological model that accounts for the method by which the manatees are counted, the mammals' movement between surveys, and the behavior of the population total over time. Bayesian inference, enabled by Markov chain Monte Carlo, is used to combine the survey data with the biological model. We compute marginal posterior distributions for all model parameters and predictive distributions for future counts. Several conclusions, such as a decreasing population growth rate and low sighting probabilities, are consistent across different prior specifications. PMID:9192449
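The Bayesian machinery used in this record can be illustrated with a minimal random-walk Metropolis sampler. The model below is a deliberately simplified stand-in (Poisson counts with a flat prior on the log rate; the data are invented), not the authors' biological model for the manatee surveys:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for aerial survey counts (NOT the manatee data):
# counts y_t ~ Poisson(lam), flat prior on log(lam).
y = np.array([52, 61, 49, 58, 66, 71, 63])

def log_post(log_lam):
    lam = np.exp(log_lam)
    return np.sum(y * log_lam - lam)  # Poisson log-likelihood up to a constant

# Random-walk Metropolis on log(lam)
samples = []
cur = np.log(y.mean())
cur_lp = log_post(cur)
for _ in range(20000):
    prop = cur + rng.normal(scale=0.05)
    lp = log_post(prop)
    if np.log(rng.random()) < lp - cur_lp:   # Metropolis accept/reject
        cur, cur_lp = prop, lp
    samples.append(cur)

post_lam = np.exp(np.array(samples[5000:]))  # discard burn-in
```

The retained draws approximate the marginal posterior of the rate; posterior means and predictive intervals are then read off the sample, as in the paper's analysis of population totals.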

  14. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

    International Nuclear Information System (INIS)

    This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), six of which are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model involving 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.

  15. Soft Uncoupling of Markov Chains for Permeable Language Distinction: A New Algorithm

    CERN Document Server

    Nock, Richard; Nielsen, Frank; Henry, Claudia

    2008-01-01

    Without prior knowledge, distinguishing different languages may be a hard task, especially when their borders are permeable. We develop an extension of spectral clustering -- a powerful unsupervised classification toolbox -- that is shown to resolve accurately the task of soft language distinction. At the heart of our approach, we replace the usual hard membership assignment of spectral clustering by a soft, probabilistic assignment, which also presents the advantage to bypass a well-known complexity bottleneck of the method. Furthermore, our approach relies on a novel, convenient construction of a Markov chain out of a corpus. Extensive experiments with a readily available system clearly display the potential of the method, which brings a visually appealing soft distinction of languages that may define altogether a whole corpus.
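The "construction of a Markov chain out of a corpus" mentioned in this record can be illustrated in miniature with character-bigram chains: one chain per language, and a sentence scored by its log-likelihood under each. This toy sketch (invented corpora, add-one smoothing) deliberately ignores the paper's spectral-clustering machinery:

```python
from collections import defaultdict
import math

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def bigram_model(text):
    """Character-level Markov chain: counts of (current char -> next char)."""
    counts = defaultdict(lambda: defaultdict(float))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(text, counts, alpha=1.0):
    """Score text under a bigram chain with add-alpha smoothing."""
    ll = 0.0
    for a, b in zip(text, text[1:]):
        row = counts[a]
        total = sum(row.values()) + alpha * len(ALPHABET)
        ll += math.log((row[b] + alpha) / total)
    return ll

# Tiny invented corpora (Spanish accents stripped), illustrative only.
english = "the quick brown fox jumps over the lazy dog and the cat sat on the mat " * 20
spanish = "el rapido zorro marron salta sobre el perro perezoso y el gato " * 20

en, es = bigram_model(english), bigram_model(spanish)
sample = "the dog and the fox"
```

Comparing `log_likelihood(sample, en)` with `log_likelihood(sample, es)` gives a hard language decision; the paper's contribution is precisely to soften such assignments into probabilistic memberships.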

  16. Very short-term wind speed prediction: A new artificial neural network-Markov chain model

    Energy Technology Data Exchange (ETDEWEB)

    Pourmousavi Kani, S.A. [Electrical and Computer Engineering Department, 627 Cobleigh Hall, Montana State University, Bozeman, MT 59717 (United States); Ardehali, M.M. [Energy Research Center, Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Ave., Tehran 15914 (Iran, Islamic Republic of)

    2011-01-15

    In this study, an artificial neural network (ANN) and a Markov chain (MC) are used to develop a new ANN-MC model for forecasting wind speed on a very short-term time scale. To predict wind speed a few seconds into the future, data patterns for the short term (about an hour) and the very short term (minutes or seconds) recorded prior to the current time are considered. The short-term patterns in the wind speed data are captured by the ANN, and the long-term patterns are handled by the MC approach together with four neighborhood indices. The results are validated and the effectiveness of the new ANN-MC model is demonstrated. It is found that the prediction errors can be decreased, while the uncertainty of the predictions and the calculation time are reduced. (author)
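The MC half of such a hybrid model can be sketched as follows: discretize the speed record into bins, estimate a first-order transition matrix from the state sequence, and forecast the most probable successor state. The synthetic series, bin edges and state count below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic wind-speed series (m/s); stands in for recorded data.
speeds = 5 + np.cumsum(rng.normal(0, 0.3, size=2000))
speeds = np.clip(speeds, 0, 12)

# Discretize into wind-speed states (the "MC" half of an ANN-MC model).
edges = np.linspace(0, 12, 7)                 # 6 states of width 2 m/s
states = np.clip(np.digitize(speeds, edges) - 1, 0, 5)

# First-order transition matrix estimated from the state sequence.
T = np.zeros((6, 6))
for a, b in zip(states, states[1:]):
    T[a, b] += 1
T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)  # normalize visited rows

def predict_next(state):
    # Point forecast: most probable successor state.
    return int(np.argmax(T[state]))
```

A real implementation would combine this state forecast with the ANN's continuous prediction; here the chain alone already captures the persistence of wind regimes.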

  17. Study of behavior and determination of customer lifetime value(CLV) using Markov chain model

    Science.gov (United States)

    Permana, Dony; Indratno, Sapto Wahyu; Pasaribu, Udjianna S.

    2014-03-01

    Customer Lifetime Value or CLV is a restriction on interactive marketing to help a company in arranging financial for the marketing of new customer acquisition and customer retention. Additionally CLV can be able to segment customers for financial arrangements. Stochastic models for the fairly new CLV used a Markov chain. In this model customer retention probability and new customer acquisition probability play an important role. This model is originally introduced by Pfeifer and Carraway in 2000 [1]. They introduced several CLV models, one of them only involves customer and former customer. In this paper we expand the model by adding the assumption of the transition from former customer to customer. In the proposed model, the CLV value is higher than the CLV value obtained by Pfeifer and Caraway model. But our model still requires a longer convergence time.

  18. Consensus protocol for heterogeneous multi-agent systems: A Markov chain approach

    International Nuclear Information System (INIS)

    This paper deals with the consensus problem for heterogeneous multi-agent systems. Different from most existing consensus protocols, we consider the consensus seeking of two types of agents, namely, active agents and passive agents. The objective is to directly control the active agents such that the states of all the agents would achieve consensus. In order to obtain a computational approach, we subtly introduce an appropriate Markov chain to cast the heterogeneous systems into a unified framework. Such a framework is helpful for tackling the constraints from passive agents. Furthermore, a sufficient and necessary condition is established to guarantee the consensus in heterogeneous multi-agent systems. Finally, simulation results are provided to verify the theoretical analysis and the effectiveness of the proposed protocol. (interdisciplinary physics and related areas of science and technology)

  19. Very short-term wind speed prediction: A new artificial neural network-Markov chain model

    International Nuclear Information System (INIS)

    In this study, an artificial neural network (ANN) and a Markov chain (MC) are used to develop a new ANN-MC model for forecasting wind speed on a very short-term time scale. To predict wind speed a few seconds into the future, data patterns for the short term (about an hour) and the very short term (minutes or seconds) recorded prior to the current time are considered. The short-term patterns in the wind speed data are captured by the ANN, and the long-term patterns are handled by the MC approach together with four neighborhood indices. The results are validated and the effectiveness of the new ANN-MC model is demonstrated. It is found that the prediction errors can be decreased, while the uncertainty of the predictions and the calculation time are reduced.

  20. Unifying Markov Chain Approach for Disease and Rumor Spreading in Complex Networks

    CERN Document Server

    de Arruda, Guilherme Ferraz; Rodriguez, Pablo Martin; Cozzo, Emanuele; Moreno, Yamir

    2016-01-01

    Spreading processes are ubiquitous in natural and artificial systems. They can be studied via a plethora of models, depending on the specific details of the phenomena under study. Disease contagion and rumor spreading are among the most important of these processes due to their practical relevance. However, despite the similarities between them, current models address both spreading dynamics separately. In this paper, we propose a general information spreading model that is based on discrete time Markov chains. The model includes all the transitions that are plausible for both a disease contagion process and rumor propagation. We show that our model not only covers the traditional spreading schemes, but that it also contains some features relevant in social dynamics, such as apathy, forgetting, and loss/recovery of interest. The model is evaluated analytically to obtain the spreading thresholds and the early-time dynamical behavior for the contact and reactive processes in several scenarios. Comparison with...
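A standard discrete-time Markov-chain (mean-field) SIS iteration conveys the flavor of such models; the small graph, the rates, and the threshold comparison below are illustrative choices, not the paper's unified model:

```python
import numpy as np

# Small undirected contact graph (adjacency matrix); illustrative only.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

def sis_fixed_point(A, beta, mu, steps=2000):
    """Discrete-time Markov-chain (mean-field) SIS iteration:
    p_i(t+1) = (1 - p_i)(1 - prod_j (1 - beta A_ij p_j)) + (1 - mu) p_i
    """
    p = np.full(len(A), 0.5)
    for _ in range(steps):
        q = np.prod(1 - beta * A * p, axis=1)  # P(i not infected by any neighbor)
        p = (1 - p) * (1 - q) + (1 - mu) * p
    return p

# Epidemic threshold for this scheme is beta/mu = 1/lambda_max(A).
lam_max = np.max(np.linalg.eigvalsh(A))
above = sis_fixed_point(A, beta=0.4, mu=0.3)   # beta/mu above threshold: endemic
below = sis_fixed_point(A, beta=0.02, mu=0.9)  # far below threshold: dies out
```

Above the threshold the iteration settles at a nonzero endemic fixed point; below it the infection probabilities decay to zero, which is the behavior the paper's thresholds characterize analytically.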

  1. Markov-Chain Monte Carlo reconstruction for cascades in IceCube

    International Nuclear Information System (INIS)

    In particle detector experiments, it is often necessary to reconstruct information about the incoming particle based on the detector response. One technique is to describe the likelihood of a certain detector response given an event hypothesis and then vary the hypothesis to maximize the likelihood. Markov-Chain Monte Carlo (MCMC) techniques offer the ability to efficiently sample such a likelihood function in the most significant regions of a large parameter space. The MCMC generates a set of points in parameter space whose distribution is proportional to the likelihood function. The characteristics of this distribution can be used to judge the quality of a reconstruction and filter out poorly-reconstructed events. I discuss the application of MCMC techniques to the reconstruction of neutrino-induced cascade events in the IceCube neutrino detector

  2. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    Directory of Open Access Journals (Sweden)

    Paul K Newton

    Full Text Available A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble-averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites, and we highlight several key findings based on the model.
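The long-time limit invoked in this record can be sketched on a toy row-stochastic matrix: repeatedly propagating any initial distribution converges to the steady state. The 4-site matrix below is invented for illustration; the paper's matrix has 50 sites and is fitted to autopsy data:

```python
import numpy as np

# Toy 4-site transition matrix (row-stochastic); a stand-in for the
# 50-site matrix inferred from the autopsy data set.
P = np.array([[0.10, 0.50, 0.30, 0.10],
              [0.20, 0.20, 0.40, 0.20],
              [0.30, 0.30, 0.20, 0.20],
              [0.25, 0.25, 0.25, 0.25]])

# Long-time limit of the Markov chain: iterate the distribution until it
# stops changing; for an ergodic chain this is the unique steady state,
# i.e. the left eigenvector of P for eigenvalue 1.
pi = np.array([1.0, 0.0, 0.0, 0.0])   # all walkers start at the primary site
for _ in range(200):
    pi = pi @ P
```

The inverse problem solved in the paper runs this logic backwards: adjust the entries of P until the computed `pi` matches the metastatic distribution observed in the data.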

  3. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    Science.gov (United States)

    Hatt, M.; Lamare, F.; Boussion, N.; Turzo, A.; Collet, C.; Salzenstein, F.; Roux, C.; Jarritt, P.; Carson, K.; Cheze-LeRest, C.; Visvikis, D.

    2007-07-01

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), against the threshold-based techniques that represent the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model, however, lies in the inclusion of an estimation of imprecision, which should lead to a better modelling of the 'fuzzy' nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both

  4. Bayesian inference along Markov Chain Monte Carlo approach for PWR core loading pattern optimization

    International Nuclear Information System (INIS)

    Highlights: ► The BIMCMC method performs very well and is comparable to GA and PSO techniques. ► The technique shows strong potential for optimization. ► The performance of the method is quite adequate. ► The BIMCMC is very easy to implement. -- Abstract: Despite remarkable progress in optimization procedures, inherent complexities in nuclear reactor structure and strong interdependence among the fundamental indices, namely economic, neutronic, thermo-hydraulic and environmental effects, make it necessary to evaluate the most efficient arrangement of a reactor core. In this paper a reactor core reloading technique based on Bayesian inference along Markov Chain Monte Carlo, BIMCMC, is addressed in the context of obtaining an optimal configuration of fuel assemblies in reactor cores. The Markov Chain Monte Carlo with Metropolis–Hastings algorithm has been applied for sampling variables and their acceptance. The proposed algorithm can be used for in-core fuel management optimization problems in pressurized water reactors. Considerable work has been expended on loading pattern optimization, but no preferred approach has yet emerged. To evaluate the proposed technique, increasing the effective multiplication factor Keff of a WWER-1000 core while flattening the power and keeping the power peaking factor below a specific limit is considered as the first test case, and power flattening alone as the second; other variables such as burn-up and cycle length can also be taken into account. The results, convergence rate and reliability of the new method are compared to published data resulting from particle swarm optimization and genetic algorithm; the outcome is quite promising and demonstrates the potential of the technique for optimization applications in the nuclear engineering field.

  5. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    International Nuclear Information System (INIS)

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), against the threshold-based techniques that represent the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model, however, lies in the inclusion of an estimation of imprecision, which should lead to a better modelling of the 'fuzzy' nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both

  6. Markov chains with hybrid repeating rows - upper-Hessenberg, quasi-Toeplitz structure of the block transition probability matrix

    OpenAIRE

    Dudin, Alexander; Kim, Chesoong; Klimenok, Valentina

    2008-01-01

    In this paper we consider discrete-time multidimensional Markov chains having a block transition probability matrix which is the sum of a matrix with repeating block rows and a matrix of upper-Hessenberg, quasi-Toeplitz structure. We derive sufficient conditions for the existence of the stationary distribution, and outline two algorithms for calculating the stationary distribution.

  7. Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we study the strong law of large numbers and the Shannon-McMillan (S-M) theorem for Markov chains indexed by an infinite tree with uniformly bounded degree. The results generalize the analogous results on a homogeneous tree.

  8. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  9. Assessing the Progress and the Underlying Nature of the Flows of Doctoral and Master Degree Candidates Using Absorbing Markov Chains

    Science.gov (United States)

    Nicholls, Miles G.

    2007-01-01

    In this paper, absorbing Markov chains are used to analyse the flows of higher degree by research candidates (doctoral and master) within an Australian faculty of business. The candidates are analysed according to whether they are full time or part time. The need for such analysis stemmed from what appeared to be a rather poor completion rate (as…
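The absorbing-chain calculation behind such a flow analysis can be sketched with a hypothetical three-year candidature (the probabilities below are invented, not the faculty's data): the fundamental matrix N = (I − Q)⁻¹ gives expected visits to each transient state, B = NR the completion/withdrawal probabilities, and N·1 the expected years to an outcome.

```python
import numpy as np

# Transient states: year 1, year 2, year 3 of candidature.
# Absorbing states: completed, withdrawn. All probabilities illustrative.
Q = np.array([[0.10, 0.70, 0.00],    # transitions among transient states
              [0.00, 0.15, 0.65],
              [0.00, 0.00, 0.30]])
R = np.array([[0.00, 0.20],          # transient -> {completed, withdrawn}
              [0.05, 0.15],
              [0.45, 0.25]])

N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits
B = N @ R                            # absorption probabilities
t = N @ np.ones(3)                   # expected years until absorption

completion_prob = B[0, 0]            # P(completion | start in year 1)
```

Each row of B sums to one (every candidate eventually completes or withdraws), and `t` shrinks as the candidate progresses, which is the kind of diagnostic the paper extracts from enrolment data.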

  10. SEMI-BLIND CHANNEL ESTIMATION OF MULTIPLE-INPUT/MULTIPLE-OUTPUT SYSTEMS BASED ON MARKOV CHAIN MONTE CARLO METHODS

    Institute of Scientific and Technical Information of China (English)

    Jiang Wei; Xiang Haige

    2004-01-01

    This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.

  11. Efficient variants of the minimal diffusion formulation of Markov chain ensembles.

    Science.gov (United States)

    Güler, Marifi

    2016-02-01

    This study is concerned with ensembles of continuous-time Markov chains evolving independently under a common transition rate matrix in some finite state space. In this context, our prior work [Phys. Rev. E 91, 062116 (2015)] has formulated an approximation scheme, called the minimal diffusion formulation, to deduce how the number of chains in a prescribed relevant state evolves in time. The formulation consists of two specifically coupled Ornstein-Uhlenbeck processes in a stochastic differential equation representation; it is minimal in the sense that its structure does not change with the state space size or the transition matrix density, and it requires no matrix square-root operations. In the present study, we first calculate the autocorrelation function of the relevant state density in the minimal diffusion formulation, which is fundamental to the identification of the ensemble dynamics. The obtained autocorrelation function is then employed to develop two diffusion formulations that reduce the structural complexity of the minimal diffusion formulation without significant loss of accuracy in the dynamics. One of these variant formulations includes one less noise term than the minimal diffusion formulation and still satisfies the above-mentioned autocorrelation function in its dynamics. The second variant is in the form of a one-dimensional Langevin equation, therefore it is the simplest possible diffusion formulation one can obtain for the problem, yet its autocorrelation function is first-order accurate in time gap. Numerical simulations supporting the theoretical analysis are delivered. PMID:26986304

  12. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background: Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results: We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions: We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
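The EXPM-style approach can be sketched for the expected time spent in a state conditioned on the end-points, using E[T_c | X_0 = a, X_t = b] = (1/P_ab(t)) ∫₀ᵗ P_ac(s) P_cb(t−s) ds. The Jukes-Cantor rate matrix and the numerical trapezoid integration below are illustrative choices, not the paper's R implementation:

```python
import numpy as np

# Jukes-Cantor rate matrix on the 4 nucleotides (symmetric, so eigh applies).
alpha = 0.3
Q = alpha * (np.ones((4, 4)) - 4 * np.eye(4))

w, V = np.linalg.eigh(Q)                  # Q = V diag(w) V^T

def P(t):
    """Transition probabilities exp(Q t) via the eigendecomposition."""
    return (V * np.exp(w * t)) @ V.T

def expected_time(a, b, c, t, n=2000):
    """E[time in state c on [0, t] | X_0 = a, X_t = b], by trapezoid rule."""
    s = np.linspace(0, t, n)
    vals = np.array([P(si)[a, c] * P(t - si)[c, b] for si in s])
    h = t / (n - 1)
    integral = h * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)
    return integral / P(t)[a, b]

times = [expected_time(0, 1, c, t=1.0) for c in range(4)]
```

A useful sanity check is that the conditional expected times over all states sum to the total branch length t, which follows from the Chapman-Kolmogorov identity.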

  13. Using Markov chains to predict the natural progression of diabetic retinopathy

    Institute of Scientific and Technical Information of China (English)

    Priyanka; Srikanth

    2015-01-01

    AIM: To study the natural progression of diabetic retinopathy in patients with type 2 diabetes. METHODS: This was an observational study of 153 cases with type 2 diabetes from 2010 to 2013. The state of each patient was noted at the end of each year and transition matrices were developed to model movement between years. Patients who progressed to severe non-proliferative diabetic retinopathy (NPDR) were treated. Markov chains and the Chi-square test were used for statistical analysis. RESULTS: We modelled the transition of 153 patients from NPDR to blindness on an annual basis. At the end of year 3, we compared results from the Markov model versus actual data. The results from the Chi-square test confirmed that there was statistically no significant difference (P = 0.70), which provided assurance that the model was robust to estimate mean sojourn times. The key finding was that a patient entering the system in the mild NPDR state is expected to stay in that state for 5 y, followed by 1.07 y in moderate NPDR and 1.33 y in severe NPDR, before moving into PDR for roughly 8 y. It is therefore expected that such a patient entering the model in a state of mild NPDR will enter blindness after 15.29 y. CONCLUSION: Patients stay for long time periods in mild NPDR before transitioning into moderate NPDR. However, they move rapidly from moderate NPDR to proliferative diabetic retinopathy (PDR) and stay in that state for long periods before transitioning into blindness.
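The sojourn-time arithmetic reported in this record can be reproduced in outline: for a transient state with self-loop probability p, the mean sojourn is 1/(1 − p), and the fundamental matrix gives the expected years to blindness. The transition matrix below is hypothetical, with self-loops chosen merely to land near the reported sojourn times, not the paper's fitted matrix:

```python
import numpy as np

# Annual transition matrix over [mild NPDR, moderate NPDR, severe NPDR,
# PDR, blind]; illustrative self-loop probabilities, progression only.
P = np.array([[0.800, 0.200, 0.000, 0.000, 0.000],
              [0.000, 0.065, 0.935, 0.000, 0.000],
              [0.000, 0.000, 0.250, 0.750, 0.000],
              [0.000, 0.000, 0.000, 0.875, 0.125],
              [0.000, 0.000, 0.000, 0.000, 1.000]])  # blindness absorbing

# Mean sojourn time in a transient state with self-loop p is 1/(1 - p):
# here 5, ~1.07, ~1.33 and 8 years, close to the reported values.
sojourn = 1 / (1 - np.diag(P)[:4])

# Expected years from each state to blindness: fundamental matrix row sums.
Q = P[:4, :4]
years_to_blind = np.linalg.inv(np.eye(4) - Q) @ np.ones(4)
```

Because the hypothetical chain progresses strictly forward, the expected time from mild NPDR to blindness is simply the sum of the four sojourn times.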

  14. Phase Transitions for Quantum Markov Chains Associated with Ising Type Models on a Cayley Tree

    Science.gov (United States)

    Mukhamedov, Farrukh; Barhoumi, Abdessatar; Souissi, Abdessatar

    2016-05-01

    The main aim of the present paper is to prove the existence of a phase transition in quantum Markov chain (QMC) scheme for the Ising type models on a Cayley tree. Note that this kind of models do not have one-dimensional analogous, i.e. the considered model persists only on trees. In this paper, we provide a more general construction of forward QMC. In that construction, a QMC is defined as a weak limit of finite volume states with boundary conditions, i.e. QMC depends on the boundary conditions. Our main result states the existence of a phase transition for the Ising model with competing interactions on a Cayley tree of order two. By the phase transition we mean the existence of two distinct QMC which are not quasi-equivalent and their supports do not overlap. We also study some algebraic property of the disordered phase of the model, which is a new phenomena even in a classical setting.

  15. Mapping absorption processes onto a Markov chain, conserving the mean first passage time

    Science.gov (United States)

    Biswas, Katja

    2013-04-01

    The dynamics of a multidimensional system is projected onto a discrete-state master equation using the transition rates W(k → k′; t, t + dt) between a set of states {k} represented by the regions {ζ_k} in phase or discrete state space. Depending on the dynamics Γ_i(t) of the original process and the choice of ζ_k, the discretized process can be Markovian or non-Markovian. For absorption processes, it is shown that irrespective of these properties of the projection, a master equation with time-independent transition rates W̄(k → k′) can be obtained, which conserves the total occupation time of the partitions of the phase or discrete state space of the original process. An expression for the transition probabilities p̄(k′|k) is derived based on either time-discrete measurements {t_i} with variable time stepping Δ_{i+1,i} = t_{i+1} − t_i, or on theoretical knowledge at continuous times t. This allows computational methods of absorbing Markov chains to be used to obtain the mean first passage time (MFPT) of the system. To illustrate this approach, the procedure is applied to obtain the MFPT for the overdamped Brownian motion of particles subject to dichotomous noise and for the escape from an entropic barrier. The high accuracy of the simulation results confirms the theory.
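The final step of this record, computational methods of absorbing Markov chains yielding the MFPT, can be sketched on the simplest possible projected chain: a symmetric nearest-neighbor walk with absorbing boundaries (an invented stand-in, not the paper's dichotomous-noise or entropic-barrier systems), where t = (I − Q)⁻¹·1 reproduces the known closed form k(L − k).

```python
import numpy as np

# Symmetric nearest-neighbor walk on {0, 1, ..., L} with absorbing
# boundaries at 0 and L; a minimal stand-in for the projected dynamics.
L = 10
Q = np.zeros((L - 1, L - 1))         # transient states 1..L-1
for i in range(L - 1):
    if i > 0:
        Q[i, i - 1] = 0.5            # step left
    if i < L - 2:
        Q[i, i + 1] = 0.5            # step right

# MFPT (in steps) to absorption from each transient state: t = (I - Q)^{-1} 1
t = np.linalg.solve(np.eye(L - 1) - Q, np.ones(L - 1))

# Known closed form for this walk: starting at k, the MFPT is k * (L - k).
```

For the physical systems in the paper the same linear solve is applied, with Q built from the occupation-time-conserving rates W̄ rather than from ±1 steps.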

  16. Study on the Calculation Models of Bus Delay at Bays Using Queueing Theory and Markov Chain

    Directory of Open Access Journals (Sweden)

    Feng Sun

    2015-01-01

    Full Text Available Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, existing studies lack a theoretical model for computing the delay. Therefore, calculation models of bus delay at bays are studied here. Firstly, the process by which buses are delayed at bays is analyzed, and it is found that the delay can be divided into entering delay and exiting delay. Secondly, the queueing models of bus bays are formulated, and the equilibrium distribution functions are derived by applying the embedded Markov chain to the traditional queueing-theory model in the steady state; the calculation models of entering delay at bays are then derived. Thirdly, the exiting delay is studied by using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are discussed. With these models the delay is easily assessed given the characteristics of the dwell-time distribution and the traffic volume in the curb lane at different locations and periods, providing a basis for the efficiency evaluation of bus bays.
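As a sketch of the queueing side (an M/M/1 simplification with invented arrival and dwell rates, not the paper's embedded-chain model): the stationary queue-length distribution at a single-berth bay is geometric, and the mean entering delay follows from it.

```python
# M/M/1 sketch of the entering delay at a single-berth bus bay:
# buses arrive Poisson(lam) and occupy the berth at service rate mu.
lam, mu = 60.0, 90.0            # buses per hour; illustrative values only
rho = lam / mu                  # utilization, must be < 1 for stability

# Stationary queue-length distribution of the chain: P(n) = (1 - rho) rho^n
p = [(1 - rho) * rho**n for n in range(50)]

mean_queue = sum(n * pn for n, pn in enumerate(p))   # approx rho / (1 - rho)
mean_wait_h = rho / (mu - lam)                       # mean entering delay (h)
```

With these illustrative rates the mean entering delay is 1/45 h (80 s per bus); the paper's embedded-chain treatment generalizes this to general dwell-time distributions.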

  17. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    International Nuclear Information System (INIS)

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility to constrain the unknown WIMP parameters, both from particle physics (mass and cross section) and from the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.

  18. Sanov and central limit theorems for output statistics of quantum Markov chains

    International Nuclear Information System (INIS)

    In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov’s theorem for the multi-site empirical measure associated to finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.

  19. Radiation trapping in 1D using the Markov chain formalism: a computational physics project

    International Nuclear Information System (INIS)

    A computational model study of atomic radiation trapping is presented with an audience non-specialized in radiation transport in mind. The level of presentation is adequate for a final undergraduate or beginning graduate project in a computational physics course. The dynamics of resonance radiation transport is discussed using a theoretical model known as the multiple scattering representation. This model is compared with the alternative Holstein ansatz, reinterpreting the fundamental mode as the one associated with a relaxed stationary spatial distribution of excitation. Its computational implementation makes use of the stochastic Markov chain formalism. A comprehensive discussion of its rationale as well as fine implementation details is presented. The simplest case of complete frequency redistribution in a two-level system is considered for a one-dimensional geometry. Nevertheless, the model study discusses at length the influence of the spectral distributions, overall opacity and emission quantum yield on trapping-distorted ensemble quantities, stressing physical insight and using only straightforward algorithmic concepts. Overall relaxation parameters (ensemble emission yield and lifetime) as well as steady-state quantities (spectra and spatial distributions) are calculated as functions of intrinsic emission yield, opacity and external excitation mode for Doppler, Lorentz and Voigt lineshapes, with the fundamental mode contribution singled out.

  20. Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach

    International Nuclear Information System (INIS)

    We use the Markov Chain Monte Carlo method to obtain global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are Ωbh2 = 0.0235+0.0021−0.0018 (1σ) +0.0028−0.0022 (2σ), Ωk = 0.0035+0.0172−0.0182 (1σ) +0.0226−0.0204 (2σ), As = 0.753+0.037−0.035 (1σ) +0.045−0.044 (2σ), α = 0.043+0.102−0.106 (1σ) +0.134−0.117 (2σ), and H0 = 70.00+3.25−2.92 (1σ) +3.77−3.67 (2σ), which is more stringent than previous constraints on the GCG model parameters. Furthermore, according to the information criterion, the current observations favor the ΛCDM model over the GCG model.

  1. Exact asymptotics of probabilities of large deviations for Markov chains: the Laplace method

    International Nuclear Information System (INIS)

    We prove results on exact asymptotics as n→∞ for the expectations E_a exp{−θ Σ_{k=0}^{n−1} g(X_k)} and probabilities P_a{(1/n) Σ_{k=0}^{n−1} g(X_k) < d}, where {ξ_k}_{k=1}^∞ is a sequence of independent identically Laplace-distributed random variables, X_n = X_0 + Σ_{k=1}^n ξ_k, n ≥ 1, is the corresponding random walk on R, g(x) is a positive continuous function satisfying certain conditions, and d > 0, θ > 0, a ∈ R are fixed numbers. Our results are obtained using a new method which is developed in this paper: the Laplace method for the occupation time of discrete-time Markov chains. For g(x) one can take |x|^p, log(|x|^p + 1), p > 0, |x| log(|x| + 1), or e^{α|x|} − 1, 0 < α < 1/2, x ∈ R, for example. We give a detailed treatment of the case g(x) = |x|, using Bessel functions to make explicit calculations.
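    The functional whose asymptotics the paper studies can be estimated directly by Monte Carlo simulation of the Laplace random walk. The sketch below (Python, with illustrative values of θ and n, and g(x) = |x|) is not the paper's Laplace method; it merely simulates the expectation in question.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expectation(a=0.0, theta=0.5, n=10, trials=20000):
    """Monte Carlo estimate of E_a exp{-theta * sum_{k=0}^{n-1} g(X_k)}
    for the random walk X_k = a + xi_1 + ... + xi_k with standard
    Laplace-distributed increments and g(x) = |x|."""
    # increments xi_1, ..., xi_{n-1}
    xi = rng.laplace(loc=0.0, scale=1.0, size=(trials, n - 1))
    # X_0 = a, X_k = a + cumulative sum of the increments
    walks = a + np.concatenate(
        [np.zeros((trials, 1)), np.cumsum(xi, axis=1)], axis=1
    )
    return np.exp(-theta * np.abs(walks).sum(axis=1)).mean()

est = mc_expectation()
print(est)
```

    As the exact asymptotics predict, the estimate decays rapidly as n grows, since the walk spends ever less of its occupation time near the origin.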

  2. Mapping absorption processes onto a Markov chain, conserving the mean first passage time

    International Nuclear Information System (INIS)

    The dynamics of a multidimensional system is projected onto a discrete state master equation using the transition rates W(k → k′; t, t + dt) between a set of states {k} represented by the regions {ζk} in phase or discrete state space. Depending on the dynamics Γi(t) of the original process and the choice of ζk, the discretized process can be Markovian or non-Markovian. For absorption processes, it is shown that irrespective of these properties of the projection, a master equation with time-independent transition rates W-bar (k→k') can be obtained, which conserves the total occupation time of the partitions of the phase or discrete state space of the original process. An expression for the transition probabilities p-bar (k'|k) is derived based on either time-discrete measurements {ti} with variable time stepping Δ(i+1)i = t(i+1) − ti or the theoretical knowledge at continuous times t. This allows computational methods of absorbing Markov chains to be used to obtain the mean first passage time (MFPT) of the system. To illustrate this approach, the procedure is applied to obtain the MFPT for the overdamped Brownian motion of particles subject to a system with dichotomous noise and the escape from an entropic barrier. The simulation results agree with the theory to high accuracy. (paper)
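    Once a process has been mapped onto an absorbing Markov chain, the MFPT follows from the fundamental matrix. A minimal sketch (Python, using an invented four-state chain rather than the dichotomous-noise or entropic-barrier systems of the paper):

```python
import numpy as np

# Transition matrix of a small birth-death-like walk on states {0,1,2,3},
# where state 3 is absorbing; the probabilities are illustrative only.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.0, 1.0],
])

Q = P[:3, :3]                      # transitions among the transient states
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix N = (I - Q)^{-1}
mfpt = N @ np.ones(3)              # expected steps to absorption per start state
print(mfpt)
```

    The vector N·1 gives the expected number of steps before absorption from each transient state, which is the discrete-chain analogue of the MFPT discussed above.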

  3. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    Science.gov (United States)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. Following Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The major purpose of obtaining the transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
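    The forward problem that the paper inverts can be sketched in a few lines. Under the Pfeifer and Carraway (2000) formulation, the CLV vector is a discounted sum of powers of the transition matrix, which has the closed form (I − P/(1+d))⁻¹R; the states, probabilities and rewards below are illustrative, not taken from the paper.

```python
import numpy as np

# Two customer states: 'active' and 'lapsed' (illustrative values).
P = np.array([[0.7, 0.3],    # active -> active / lapsed
              [0.2, 0.8]])   # lapsed -> active / lapsed
R = np.array([100.0, 0.0])   # expected profit per period in each state
d = 0.1                      # per-period discount rate

# CLV = sum_t (1+d)^{-t} P^t R = (I - P/(1+d))^{-1} R
clv = np.linalg.solve(np.eye(2) - P / (1 + d), R)
print(clv)  # expected lifetime value starting from each state
```

    The inverse problem is then: given observed CLV values, recover the entries of P, which is what the Flower Pollination Algorithm is used for in the paper.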

  4. Two-state Markov-chain Poisson nature of individual cellphone call statistics

    Science.gov (United States)

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier

    2016-07-01

    Unfolding the burst patterns in human activities and social interactions is a very important issue, especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of the cellphone conversation activities of 73 339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that the individual call events exhibit a pattern of bursts, in which high-activity periods alternate with low-activity periods. In both kinds of period, the number of calls is exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we account for the power-law distributions via the ‘superposition of distributions’ mechanism. Our findings shed light on the origins of bursty patterns in other human activities.
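    The minimal model described above is a two-state Markov-modulated Poisson process: exponential sojourns in a high- and a low-activity state, with Poisson call arrivals at a state-dependent rate. A simulation sketch (Python; the rates below are invented for illustration, not fitted to the dataset):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_calls(T=10_000.0, rates=(2.0, 0.1), switch=(0.05, 0.02)):
    """Simulate call times from a two-state Markov-modulated Poisson
    process: a 'high activity' state (rates[0]) and a 'low activity'
    state (rates[1]), with exponential sojourn times governed by the
    switching rates. All parameter values are illustrative."""
    t, state, calls = 0.0, 0, []
    while t < T:
        sojourn = rng.exponential(1.0 / switch[state])  # time in this state
        t_end = min(t + sojourn, T)
        # Poisson arrivals within the sojourn (memoryless, so restarting
        # the exponential clock at t is exact)
        t_call = t + rng.exponential(1.0 / rates[state])
        while t_call < t_end:
            calls.append(t_call)
            t_call += rng.exponential(1.0 / rates[state])
        t, state = t_end, 1 - state
    return np.array(calls)

calls = simulate_calls()
print(len(calls), "calls; mean inter-call time", np.diff(calls).mean())
```

    Histogramming the inter-call durations of such a simulation reproduces the qualitative burst pattern: short exponential gaps within bursts and longer exponential gaps between them.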

  5. DIM SUM: demography and individual migration simulated using a Markov chain.

    Science.gov (United States)

    Brown, Jeremy M; Savidge, Kevin; McTavish, Emily Jane B

    2011-03-01

    An increasing number of studies seek to infer demographic history, often jointly with genetic relationships. Despite numerous analytical methods for such data, few simulations have investigated the methods' power and robustness, especially when underlying assumptions have been violated. DIM SUM (Demography and Individual Migration Simulated Using a Markov chain) is a stand-alone Java program for the simulation of population demography and individual migration while recording ancestor-descendant relationships. It does not employ coalescent assumptions or discrete population boundaries. It is extremely flexible, allowing the user to specify border positions, reactions of organisms to borders, local and global carrying capacities, individual dispersal kernels, rates of reproduction and strategies for sampling individuals. Spatial variables may be specified using image files (e.g., as exported from GIS software) and may vary through time. In combination with software for genetic marker simulation, DIM SUM will be useful for testing phylogeographic (e.g., nested clade phylogeographic analysis, coalescent-based tests and continuous-landscape frameworks) and landscape-genetic methods, specifically regarding violations of coalescent assumptions. It can also be used to explore the qualitative features of proposed demographic scenarios (e.g. regarding biological invasions) and as a pedagogical tool. DIM SUM (with user's manual) can be downloaded from http://code.google.com/p/bio-dimsum. PMID:21429144

  6. Performance evaluation of corrosion-affected reinforced concrete bridge girders using Markov chains with fuzzy states

    Indian Academy of Sciences (India)

    M B ANOOP; K BALAJI RAO

    2016-08-01

    A methodology for performance evaluation of reinforced concrete bridge girders in corrosive environments is proposed. The methodology uses the concept of performability and considers both serviceability- and ultimate-limit states. The serviceability limit states are defined based on the degree of cracking (characterized by crack width) in the girder due to chloride induced corrosion of reinforcement, and the ultimate limit states are defined based on the flexural load carrying capacity of the girder (characterized in terms of rating factor using the load and resistance factor rating method). The condition of the bridge girder is specified by the assignment of a condition state from a set of predefined condition states. Generally, the classification of condition states is linguistic, while the condition states are considered to be mutually exclusive and collectively exhaustive. In the present study, the condition states of the bridge girder are also represented by fuzzy sets to consider the ambiguities arising due to the linguistic classification of condition states. A non-homogeneous Markov chain (MC) model is used for modeling the condition state evolution of the bridge girder with time. The usefulness of the proposed methodology is demonstrated through a case study of a severely distressed beam of the Rocky Point Viaduct. The results obtained using the proposed approach are compared with those obtained using conventional MC model. It is noted that the use of MC with fuzzy states leads to conservative decision making for the problem considered in the case study.
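    The non-homogeneous MC idea — a transition matrix that changes with the girder's age — can be sketched without the fuzzy-set layer. Everything below (number of states, deterioration probabilities, service life) is invented for illustration and is not the calibrated model of the paper.

```python
import numpy as np

# Five condition states, from 'good' (0) to 'severely distressed' (4).
def transition_matrix(t):
    """One-year transition matrix at age t; deterioration accelerates
    with age, which is what makes the chain non-homogeneous."""
    p = min(0.05 + 0.01 * t, 0.5)        # yearly deterioration probability
    P = np.zeros((5, 5))
    for i in range(4):
        P[i, i], P[i, i + 1] = 1 - p, p  # stay, or move to next-worse state
    P[4, 4] = 1.0                        # worst state is absorbing
    return P

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # girder starts in 'good'
for t in range(30):                          # 30 years of service
    state = state @ transition_matrix(t)
print(state)  # probability of each condition state after 30 years
```

    A homogeneous chain would reuse one matrix for every year; the age-dependent matrix shifts probability mass toward the distressed states faster in later years.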

  7. Markov chain models of coupled calcium channels: Kronecker representations and iterative solution methods

    International Nuclear Information System (INIS)

    Mathematical models of calcium release sites derived from Markov chain models of intracellular calcium channels exhibit collective gating reminiscent of the experimentally observed phenomenon of stochastic calcium excitability (i.e., calcium puffs and sparks). Calcium release site models are stochastic automata networks that involve many functional transitions, that is, the transition probabilities of each channel depend on the local calcium concentration and thus the state of the other channels. We present a Kronecker-structured representation for calcium release site models and perform benchmark stationary distribution calculations using both exact and approximate iterative numerical solution techniques that leverage this structure. When it is possible to obtain an exact solution, response measures such as the number of channels in a particular state converge more quickly using the iterative numerical methods than occupation measures calculated via Monte Carlo simulation. In particular, multi-level methods provide excellent convergence with modest additional memory requirements for the Kronecker representation of calcium release site models. When an exact solution is not feasible, iterative approximate methods based on the power method may be used, with performance similar to Monte Carlo estimates. This suggests approximate methods with multi-level iterative engines as a promising avenue of future research for large-scale calcium release site models
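    The Kronecker structure can be made concrete in the simplest possible setting. The sketch below (Python) takes two *independent* two-state channels — a deliberate simplification, since real release-site models couple channels through functional, calcium-dependent transitions — so that the joint transition matrix is a Kronecker product, and computes the stationary distribution by the power method; the single-channel probabilities are invented.

```python
import numpy as np

# Single-channel transition matrix (closed <-> open), illustrative values.
A = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# For two independent channels the joint chain is the Kronecker product;
# a coupled model would modify these entries state-dependently.
P = np.kron(A, A)

# Power method for the stationary distribution pi = pi P.
pi = np.full(4, 0.25)
for _ in range(500):
    pi = pi @ P
    pi /= pi.sum()
print(pi)  # probabilities of (closed,closed), (closed,open), ..., (open,open)
```

    With coupling, P can no longer be written as a single Kronecker product, but it remains a short sum of Kronecker terms, which is exactly the structure the iterative solvers above exploit.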

  8. A Markov chain model for image ranking system in social networks

    Science.gov (United States)

    Zin, Thi Thi; Tin, Pyke; Toriu, Takashi; Hama, Hiromitsu

    2014-03-01

    In today's world there exist many kinds of networks: social, technological, business, and so on. These networks are similar in their distributions, growing continuously and expanding on a large scale. Among them, social networks such as Facebook, Twitter, Flickr and many others provide a powerful abstraction of the structure and dynamics of diverse kinds of interpersonal connection and interaction. Generally, social network contents are created and consumed under the influence of all the different social navigation paths that lead to those contents. Therefore, identifying important and user-relevant refined structures, such as visual information or communities, becomes a major factor in modern decision making. Moreover, traditional information ranking systems cannot succeed because they fail to take into account the properties of navigation paths driven by social connections. In this paper, we propose a novel image ranking system for social networks that uses social data relational graphs from a social media platform jointly with visual data to improve the relevance between returned images and user intentions (i.e., social relevance). Specifically, we propose a Markov chain based Social-Visual Ranking algorithm that takes social relevance into account. Through extensive experiments, we demonstrate the significance and effectiveness of the proposed social-visual ranking method.
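    The abstract does not spell out the Social-Visual Ranking algorithm, but Markov-chain rankers of this kind build on damped power iteration over a combined affinity graph. A generic sketch (Python), with an invented social/visual affinity matrix standing in for the real combined graph:

```python
import numpy as np

def rank_images(W, damping=0.85, iters=100):
    """PageRank-style ranking over an image graph. W[i, j] is an assumed
    combined social/visual affinity from image i to image j; rows are
    normalized into a Markov transition matrix and the stationary
    distribution is found by damped power iteration."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)           # row-stochastic transitions
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)  # damped power iteration
    return r

# Toy affinity graph over four images (illustrative values only).
W = np.array([[0.1, 0.8, 0.05, 0.05],
              [0.3, 0.1, 0.3, 0.3],
              [0.2, 0.5, 0.1, 0.2],
              [0.1, 0.6, 0.2, 0.1]])
scores = rank_images(W)
print(scores)
```

    In this toy graph most affinity flows into image 1, so it receives the top ranking score; a social-visual variant changes how W is built, not the iteration itself.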

  9. Use of Markov Chains to Design an Agent Bidding Strategy for Continuous Double Auctions

    CERN Document Server

    Birmingham, W P; Park, S; 10.1613/jair.1466

    2011-01-01

    As computational agents are developed for increasingly complicated e-commerce applications, the complexity of the decisions they face demands advances in artificial intelligence techniques. For example, an agent representing a seller in an auction should try to maximize the seller's profit by reasoning about a variety of possibly uncertain pieces of information, such as the maximum prices various buyers might be willing to pay, the possible prices being offered by competing sellers, the rules by which the auction operates, the dynamic arrival and matching of offers to buy and sell, and so on. A naive application of multiagent reasoning techniques would require the seller's agent to explicitly model all of the other agents through an extended time horizon, rendering the problem intractable for many realistically-sized problems. We have instead devised a new strategy that an agent can use to determine its bid price based on a more tractable Markov chain model of the auction process. We have experimentally ident...

  10. Study on the Calculation Models of Bus Delay at Bays Using Queueing Theory and Markov Chain

    Science.gov (United States)

    Sun, Li; Sun, Shao-wei; Wang, Dian-hai

    2015-01-01

    Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, the existing studies lack a theoretical model for computing efficiency. Therefore, calculation models of bus delay at bays are studied. Firstly, the process by which buses are delayed at bays is analyzed, and it is found that the delay can be divided into entering delay and exiting delay. Secondly, queueing models of bus bays are formulated, and the equilibrium distribution functions are proposed by applying an embedded Markov chain to the traditional queueing-theory model in the steady state; the calculation models of entering delay at bays are then derived. Thirdly, the exiting delay is studied using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are discussed. With these models the delay is easily assessed given the characteristics of the dwell-time distribution and the traffic volume in the curb lane at different locations and in different periods. This provides a basis for the efficiency evaluation of bus bays. PMID:25759720

  11. Dynamical Models for NGC 6503 using a Markov Chain Monte Carlo Technique

    CERN Document Server

    Puglielli, David; Courteau, Stéphane

    2010-01-01

    We use Bayesian statistics and Markov chain Monte Carlo (MCMC) techniques to construct dynamical models for the spiral galaxy NGC 6503. The constraints include surface brightness profiles which display a Freeman Type II structure; HI and ionized gas rotation curves; the stellar rotation, which is nearly coincident with the ionized gas curve; and the line of sight stellar dispersion, with a sigma-drop at the centre. The galaxy models consist of a Sersic bulge, an exponential disc with an optional inner truncation and a cosmologically motivated dark halo. The Bayesian/MCMC technique yields the joint posterior probability distribution function for the input parameters. We examine several interpretations of the data: the Type II surface brightness profile may be due to dust extinction, to an inner truncated disc or to a ring of bright stars; and we test separate fits to the gas and stellar rotation curves to determine if the gas traces the gravitational potential. We test each of these scenarios for bar stability...

  12. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
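    The variance-versus-interval trade-off can be demonstrated on a toy surrogate for an MCMC chain. The sketch below (Python) uses an AR(1) process as a stand-in for correlated samples — not any particular physical simulation — and keeps the number of *kept* samples fixed while increasing the sampling interval; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def var_of_mean(interval, n_kept=200, rho=0.9, n_chains=100):
    """Empirical variance of the ensemble average when keeping every
    `interval`-th state of an AR(1) chain x_{t+1} = rho*x_t + noise,
    a stand-in for correlated MCMC samples."""
    means = []
    for _ in range(n_chains):
        x, kept = 0.0, []
        for t in range(n_kept * interval):
            x = rho * x + rng.normal()
            if t % interval == 0:
                kept.append(x)
        means.append(np.mean(kept))
    return np.var(means)

# Same number of kept samples, increasing sampling interval: the kept
# samples decorrelate, so the variance of the ensemble average drops.
results = {k: var_of_mean(k) for k in (1, 4, 16)}
print(results)
```

    Once the interval exceeds the chain's correlation time (here roughly 1/(1−ρ) steps), further thinning buys almost nothing, which is the regime the variance rules above describe.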

  13. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    International Nuclear Information System (INIS)

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several ''high quality'' systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra
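    The role of a conservatively applied prior in such an analysis can be illustrated with a minimal Metropolis sampler. The sketch below (Python) uses a one-dimensional toy posterior — a Gaussian likelihood plus a broad Gaussian prior standing in for the temperature prior — not the actual nebular emission model; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_posterior(theta):
    """Toy stand-in for a nebular-model posterior: a tight Gaussian
    likelihood around 0.25 plus a broad Gaussian prior (the analogue of
    a conservatively applied temperature prior)."""
    log_like = -0.5 * ((theta - 0.25) / 0.01) ** 2
    log_prior = -0.5 * ((theta - 0.24) / 0.05) ** 2
    return log_like + log_prior

def metropolis(n=20000, step=0.01, theta0=0.3):
    """Random-walk Metropolis: propose, then accept with the usual
    log-ratio test."""
    chain, theta, lp = [], theta0, log_posterior(theta0)
    for _ in range(n):
        prop = theta + step * rng.normal()
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

chain = metropolis()
mean_post = chain[5000:].mean()   # discard burn-in
std_post = chain[5000:].std()
print(mean_post, std_post)
```

    Because the prior is much broader than the likelihood, it barely shifts the posterior mean here — the "negligible bias" behaviour the abstract describes — while it would still suppress spurious modes far from the prior's support.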

  14. Development of compartment models with Markov-chain processes for radionuclide transport in repository region

    International Nuclear Information System (INIS)

    This paper presents a new radionuclide transport model for performance assessment and design of a geologic repository for high-level radioactive waste. The model uses compartmentalization of a model space and a Markov-chain process to describe the transport. The model space is divided into an array of compartments, among which a transition probability matrix describes radionuclide transport. While similar to the finite-difference method, it has several advantages such as flexibility to include various types of transport processes and reactions due to probabilistic interpretation, and higher-order accuracy resulting from direct formulation in a discrete-time frame. We demonstrated application of this model with a hypothetical repository in porous rock formation. First we calculated a three-dimensional steady-state heterogeneous groundwater flow field numerically by the finite-element method. The transition probability matrix was constructed based on the flow field and hydraulic dispersion coefficient. The present approach has been found to be effective in modeling radionuclide transport at a repository scale while taking into account the effects of change in hydraulic properties on the repository performance. Numerical exploration results indicate that engineered barrier configuration and material degradation have substantial effects on radionuclide release from the repository.

  15. Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis

    International Nuclear Information System (INIS)

    The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on-board LPF will consist of an exhaustive suite of experiments and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to have an understanding of every instrument on-board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is Markov chain Monte Carlo. A series of experiments is going to take place during flight operations, and each experiment is going to provide us with essential information for the next in the sequence. Therefore, it is a priority to optimize and improve the tools available for data analysis during the mission. Using a Bayesian framework allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal, which is none other than creating a robust and reliable tool for parameter estimation during the LPF mission.

  16. Clinical neonatal brain MRI segmentation using adaptive nonparametric data models and intensity-based Markov priors.

    Science.gov (United States)

    Song, Zhuang; Awate, Suyash P; Licht, Daniel J; Gee, James C

    2007-01-01

    This paper presents a Bayesian framework for neonatal brain-tissue segmentation in clinical magnetic resonance (MR) images. This is a challenging task because of the low contrast-to-noise ratio and large variance in both tissue intensities and brain structures, as well as imaging artifacts and partial-volume effects in clinical neonatal scanning. We propose to incorporate a spatially adaptive likelihood model using a data-driven nonparametric statistical technique. The method initially learns an intensity-based prior, relying on the empirical Markov statistics from training data, using fuzzy nonlinear support vector machines (SVM). In an iterative scheme, the models adapt to spatial variations of image intensities via nonparametric density estimation. The method is effective even in the absence of anatomical atlas priors. The implementation, however, can naturally incorporate probabilistic atlas priors and Markov-smoothness priors to impose additional regularity on segmentation. The maximum-a-posteriori (MAP) segmentation is obtained within a graph-cut framework. Cross validation on clinical neonatal brain-MR images demonstrates the efficacy of the proposed method, both qualitatively and quantitatively. PMID:18051142

  17. MODEL FOR OPTIMAL BLOCK REPLACEMENT DECISION OF AIR CONDITIONERS USING FIRST ORDER MARKOV CHAINS WITH & WITHOUT CONSIDERING INFLATION

    Directory of Open Access Journals (Sweden)

    Y HARI PRASADA REDDY

    2012-05-01

    In this paper, a mathematical model has been developed for group replacement of a block of Air Conditioners using discrete-time First Order Markov Chains. To make the model more realistic, three intermediate states, viz., Minor Repair State, Semi-Major Repair State and Major Repair State, have been introduced between the Functioning State and Complete Failure State of the system. The transition probabilities for future periods of the First Order Markov Chain (FOMC) are estimated by the Spectral Decomposition Method. Using these probabilities, the number of systems in each state, and accordingly the corresponding average maintenance cost, is computed. The forecasted inflation for Air Conditioners and the real value of money using Fisher's relation are employed to study and develop a real-time mathematical model for block replacement decision making.
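    The spectral decomposition step can be sketched numerically: diagonalizing the one-period matrix P gives P^n = V diag(wⁿ) V⁻¹, and hence the expected number of units in each state in future periods. The five-state transition matrix below is invented for illustration; it is not the paper's estimated FOMC.

```python
import numpy as np

# Illustrative one-period transition matrix over the five states:
# Functioning, Minor Repair, Semi-Major Repair, Major Repair, Failed.
P = np.array([
    [0.80, 0.10, 0.05, 0.03, 0.02],
    [0.00, 0.70, 0.15, 0.10, 0.05],
    [0.00, 0.00, 0.60, 0.25, 0.15],
    [0.00, 0.00, 0.00, 0.50, 0.50],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

# Spectral decomposition: P = V diag(w) V^{-1}, so P^n = V diag(w^n) V^{-1}.
w, V = np.linalg.eig(P)

def P_power(n):
    return (V @ np.diag(w ** n) @ np.linalg.inv(V)).real

# Expected state distribution of a block of 100 units after 5 periods.
dist = np.array([100, 0, 0, 0, 0]) @ P_power(5)
print(np.round(dist, 2))
```

    Multiplying each state count by a per-state maintenance cost then gives the period's expected maintenance bill, which is the quantity the replacement decision compares against the cost of replacing the whole block.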

  18. Identifying the role of typhoons as drought busters in South Korea based on hidden Markov chain models

    Science.gov (United States)

    Yoo, Jiyoung; Kwon, Hyun-Han; So, Byung-Jin; Rajagopalan, Balaji; Kim, Tae-Woong

    2015-04-01

    This study proposed a hidden Markov chain model-based drought analysis (HMM-DA) tool to understand the beginning and ending of meteorological drought and to further characterize typhoon-induced drought busters (TDB) by exploring spatiotemporal drought patterns in South Korea. It was found that typhoons have played a dominant role in ending drought events (EDE) during the typhoon season (July-September) over the last four decades (1974-2013). The percentage of EDEs terminated by TDBs was about 43-90%, mainly along coastal regions in South Korea. Furthermore, the TDBs, mainly during summer, have a positive role in managing extreme droughts during the subsequent autumn and spring seasons. The HMM-DA models the temporal dependencies between drought states using a Markov chain, consequently capturing the dependencies between droughts and typhoons well and thus enabling better performance in modeling spatiotemporal drought attributes compared to traditional methods.

  19. Nuclide transport of decay chain in the fractured rock medium: a model using continuous time Markov process

    International Nuclear Information System (INIS)

    A model using a continuous-time Markov process for nuclide transport of a decay chain of arbitrary length in a fractured rock medium has been developed. Treating the fracture in the rock matrix as a finite number of compartments, the transition probability for a nuclide is obtained from the transition intensities between and out of the compartments using the Chapman-Kolmogorov equation, with which the expectation and the variance of the nuclide distribution in the fractured rock medium can be obtained. A comparison between the continuous-time Markov process model and available analytical solutions for the nuclide transport of three decay chains without rock matrix diffusion has been made, showing comparatively good agreement. Fits to experimental breakthrough curves obtained with nonsorbing materials such as NaLS and uranine in artificially fractured rock are also made. (author)
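    For time-independent intensities, the Chapman-Kolmogorov machinery reduces to computing the matrix exponential P(t) = exp(Qt) of the transition-intensity (generator) matrix. A sketch (Python) for an invented three-compartment fracture with purely advective transfer and a loss term — not the paper's calibrated system — using the uniformization series rather than a library matrix exponential:

```python
import math
import numpy as np

lam = 1.0   # illustrative transfer rate between compartments
Q = np.array([
    [-lam,  lam,  0.0],
    [ 0.0, -lam,  lam],
    [ 0.0,  0.0, -lam],   # loss out of the last compartment (outlet)
])

def transition_probabilities(t, n_terms=60):
    """P(t) = exp(Qt) via uniformization:
    P(t) = sum_k e^{-mt} (mt)^k / k! * A^k with A = I + Q/m."""
    m = max(-Q.diagonal()) * 1.05     # uniformization rate
    A = np.eye(3) + Q / m             # sub-stochastic jump matrix
    P, Ak = np.zeros((3, 3)), np.eye(3)
    coef = math.exp(-m * t)           # Poisson weight e^{-mt}(mt)^k/k!, k=0
    for k in range(n_terms):
        P += coef * Ak
        Ak = Ak @ A
        coef *= m * t / (k + 1)
    return P

P1 = transition_probabilities(1.0)
print(P1[0])  # distribution at t=1 of mass that started in compartment 0
```

    The row sums of P(t) fall below one here because mass leaves through the outlet; adding decay-chain daughters would enlarge Q with inter-nuclide intensity blocks but leave the computation unchanged.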

  20. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    Science.gov (United States)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method describes the behavior of the population more accurately when compared to the observed values in a direct simulation of the Markov chain.

  1. A Birthday Paradox for Markov chains with an optimal bound for collision in the Pollard Rho algorithm for discrete logarithm

    OpenAIRE

    Kim, Jeong Han; Montenegro, Ravi; Peres, Yuval; Tetali, Prasad

    2010-01-01

    We show a Birthday Paradox for self-intersections of Markov chains with uniform stationary distribution. As an application, we analyze Pollard's Rho algorithm for finding the discrete logarithm in a cyclic group $G$ and find that if the partition in the algorithm is given by a random oracle, then with high probability a collision occurs in $\Theta(\sqrt{|G|})$ steps. Moreover, for the parallelized distinguished points algorithm on $J$ processors we find that $\Theta(\sqrt{|G|}/J)$ steps suffi...
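
    The memoryless baseline behind this result is easy to check numerically: for i.i.d. uniform sampling from a set of size n (the limit of a perfectly mixing chain), the first repeat occurs after about sqrt(pi*n/2) steps. A small Python experiment, with all parameters chosen purely for illustration:

```python
import math
import random

def collision_time(n, rng):
    """Steps until a uniformly sampled element of a set of size n repeats."""
    seen = set()
    t = 0
    while True:
        t += 1
        x = rng.randrange(n)
        if x in seen:
            return t
        seen.add(x)

rng = random.Random(1)
n = 10_000
trials = [collision_time(n, rng) for _ in range(500)]
mean = sum(trials) / len(trials)
# Empirical mean should be close to the birthday-paradox prediction.
print(mean, math.sqrt(math.pi * n / 2))
```

    The cited paper's contribution is showing that the same square-root scaling survives when the steps come from a Markov chain rather than from independent draws.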

  2. Markov chains with exponentially small transition probabilities: First exit problem from a general domain. II. The general case

    International Nuclear Information System (INIS)

    In this paper we consider aperiodic ergodic Markov chains with transition probabilities exponentially small in a large parameter β. We extend to the general, not necessarily reversible case the analysis, started in part I of this work, of the first exit problem from a general domain Q containing many stable equilibria (attracting equilibrium points for the β = ∞ dynamics). In particular we describe the tube of typical trajectories during the first excursion outside Q

  3. Evaluation of Jefferies' level population ratios, and generalization of Seaton's cascade matrix, by a Markov-chain method

    International Nuclear Information System (INIS)

    Closed expressions are obtained for the conditional probabilities q_{ij,k} required in evaluating particular ratios of atomic level populations, using a Markov-chain representation of the system of levels. The total transition probability between two arbitrary levels is also evaluated and its relation to population ratios is clarified. It is shown that Seaton's cascade matrix is a subset of the total transition probability matrix. (orig.)

  4. Bayesian Parameter Inference by Markov Chain Monte Carlo with Hybrid Fitness Measures: Theory and Test in Apoptosis Signal Transduction Network

    OpenAIRE

    Murakami, Yohei; Takada, Shoji

    2013-01-01

    When exact values of model parameters in systems biology are not available from experiments, they need to be inferred so that the resulting simulation reproduces the experimentally known phenomena. For the purpose, Bayesian statistics with Markov chain Monte Carlo (MCMC) is a useful method. Biological experiments are often performed with cell population, and the results are represented by histograms. On another front, experiments sometimes indicate the existence of a specific bifurcation patt...

  5. Test of The Weak Form Efficient Market Hypothesis for The Istanbul Stock Exchange By Markov Chains Methodology

    OpenAIRE

    KILIÇ, Öğr. Gör. Dr. Süleyman Bilgin

    2013-01-01

    In this study, Markov chain methodology is used to test whether or not the daily returns of the Istanbul Stock Exchange (ISE) 100 index follow a martingale (random walk) process. If the Weak Form Efficient Market Hypothesis (EMH) holds in any stock market, stock prices or returns follow a random walk process. The random walk theory asserts that price movements will not follow any patterns or trends and that past price movements cannot be used to predict future price movements; hence technic...
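
    A sketch of the basic Markov chain test, assuming a two-state up/down coding of daily returns (synthetic Gaussian returns stand in for ISE-100 data; under the martingale hypothesis, successive states are independent):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily returns as a stand-in for real index data.
returns = rng.normal(0, 0.01, size=2000)
states = (returns > 0).astype(int)  # 0 = down day, 1 = up day

# Count one-step transitions between states.
C = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    C[a, b] += 1

P = C / C.sum(axis=1, keepdims=True)  # estimated transition matrix
print(P)

# Pearson chi-square test of independence: under H0 (random walk),
# tomorrow's state does not depend on today's.
n = C.sum()
p_col = C.sum(axis=0) / n
expected = np.outer(C.sum(axis=1), p_col)
chi2 = ((C - expected) ** 2 / expected).sum()
print(chi2)  # compare with the chi-square critical value, 1 d.o.f. (~3.84 at 5%)
```

    A significantly large statistic would indicate dependence between consecutive daily movements, contradicting the weak-form EMH.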

  6. A uniform estimate for the rate of convergence in the multidimensional central limit theorem for homogeneous Markov chains

    Directory of Open Access Journals (Sweden)

    M. Gharib

    1996-09-01

    Full Text Available In this paper a uniform estimate is obtained for the remainder term in the central limit theorem (CLT) for a sequence of random vectors forming a homogeneous Markov chain with an arbitrary set of states. The result makes it possible to estimate the rate of convergence in the CLT without assuming finiteness of the absolute third moment of the transition probabilities. Some consequences are also proved.

  7. On The Transition Probabilities for the Fuzzy States of a Fuzzy Markov Chain

    Directory of Open Access Journals (Sweden)

    J.Earnest Lazarus Piriyakumar

    2015-12-01

    Full Text Available In this paper the theory of fuzzy logic is combined with the theory of Markov systems, and the abstraction of a Markov system with fuzzy states is introduced. Notions such as fuzzy transient and fuzzy recurrent states are defined, and results based on these notions are presented.

  8. Combination of Markov chain and optimal control solved by Pontryagin’s Minimum Principle for a fuel cell/supercapacitor vehicle

    International Nuclear Information System (INIS)

    Highlights: • A combination of a Markov chain and optimal control solved by Pontryagin's Minimum Principle is presented. • The strategy is applied to a hybrid electric vehicle dynamic model. • Hydrogen consumption is analyzed for two different vehicle masses and drive cycles. • Supercapacitor and fuel cell behavior is analyzed at high or sudden required power. - Abstract: In this article, a real-time optimal control strategy based on Pontryagin's Minimum Principle (PMP) combined with the Markov chain approach is used for a fuel cell/supercapacitor electric vehicle. In real time, at high power and at high speed, two phenomena are observed: the first at high required power, and the second at sudden power demand. To avoid these situations, a Markov chain model is proposed to predict the future power demand during a driving cycle. The optimal control problem is formulated as an equivalent consumption minimization strategy (ECMS), which is solved using Pontryagin's Minimum Principle; the Markov chain model is added as a separate block for prediction of the required power. This approach and the whole system are modeled and implemented in MATLAB/Simulink, and the models with and without the Markov chain block are compared. The results presented demonstrate the importance of the Markov chain block added to the model

  9. Additive N-step Markov chains as prototype model of symbolic stochastic dynamical systems with long-range correlations

    International Nuclear Information System (INIS)

    A theory of symbolic dynamic systems with long-range correlations based on the consideration of the binary N-step Markov chains developed earlier in Phys Rev Lett 2003;90:110601 is generalized to the biased case (non-equal numbers of zeros and unities in the chain). In the model, the conditional probability that the ith symbol in the chain equals zero (or unity) is a linear function of the number of unities (zeros) among the preceding N symbols. The correlation and distribution functions as well as the variance of number of symbols in the words of arbitrary length L are obtained analytically and verified by numerical simulations. A self-similarity of the studied stochastic process is revealed and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be the Gaussian with the variance being nonlinearly dependent on L. An equation connecting the memory and correlation function of the additive Markov chain is presented. This equation allows reconstructing a memory function using a correlation function of the system. Effectiveness and robustness of the proposed method is demonstrated by simple model examples. Memory functions of concrete coarse-grained literary texts are found and their universal power-law behavior at long distances is revealed

  10. DYNAMICAL MODELS FOR NGC 6503 USING A MARKOV CHAIN MONTE CARLO TECHNIQUE

    International Nuclear Information System (INIS)

    We use Bayesian statistics and Markov chain Monte Carlo (MCMC) techniques to construct dynamical models for the spiral galaxy NGC 6503. The constraints include surface brightness (SB) profiles which display a Freeman Type II structure; H I and ionized gas rotation curves; the stellar rotation, which is nearly coincident with the ionized gas curve; and the line-of-sight stellar dispersion, which displays a σ-drop at the center. The galaxy models consist of a Sersic bulge, an exponential disk with an optional inner truncation and a cosmologically motivated dark halo. The Bayesian/MCMC technique yields the joint posterior probability distribution function for the input parameters, allowing constraints on model parameters such as the halo cusp strength, structural parameters for the disk and bulge, and mass-to-light ratios. We examine several interpretations of the data: the Type II SB profile may be due to dust extinction, to an inner truncated disk, or to a ring of bright stars, and we test separate fits to the gas and stellar rotation curves to determine if the gas traces the gravitational potential. We test each of these scenarios for bar stability, ruling out dust extinction. We also find that the gas likely does not trace the gravitational potential, since the predicted stellar rotation curve, which includes asymmetric drift, is then inconsistent with the observed stellar rotation curve. The disk is well fit by an inner-truncated profile, but the possibility of ring formation by a bar to reproduce the Type II profile is also a realistic model. We further find that the halo must have a cuspy profile with γ ≳ 1; the bulge has a lower M/L than the disk, suggesting a star-forming component in the center of the galaxy; and the bulge, as expected for this late-type galaxy, has a low Sersic index with n_b ∼ 1-2, suggesting a formation history dominated by secular evolution.
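
    The core of such a Bayesian/MCMC fit is a Metropolis-type sampler. The following minimal random-walk Metropolis sketch uses a 2D Gaussian toy posterior in place of the actual galaxy-model likelihood; all names and tuning values are illustrative:

```python
import numpy as np

def metropolis(logpost, x0, n_steps, step, rng):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy posterior: standard 2D normal, standing in for the joint posterior
# over galaxy-model parameters.
rng = np.random.default_rng(3)
chain = metropolis(lambda x: -0.5 * np.dot(x, x), [5.0, -5.0], 20000, 0.8, rng)
print(chain[5000:].mean(axis=0))  # posterior mean estimate after burn-in
```

    Discarding the first part of the chain as burn-in and summarizing the remainder is how the joint posterior distribution of the model parameters is characterized in practice.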

  11. BENCHMARK TESTS FOR MARKOV CHAIN MONTE CARLO FITTING OF EXOPLANET ECLIPSE OBSERVATIONS

    International Nuclear Information System (INIS)

    Ground-based observations of exoplanet eclipses provide important clues to the planets' atmospheric physics, yet systematics in light curve analyses are not fully understood. It is unknown if measurements suggesting near-infrared flux densities brighter than models predict are real, or artifacts of the analysis processes. We created a large suite of model light curves, using both synthetic and real noise, and tested the common process of light curve modeling and parameter optimization with a Markov Chain Monte Carlo algorithm. With synthetic white noise models, we find that input eclipse signals are generally recovered within 10% accuracy for eclipse depths greater than the noise amplitude, and to smaller depths for higher sampling rates and longer baselines. Red noise models see greater discrepancies between input and measured eclipse signals, often biased in one direction. Finally, we find that in real data, systematic biases result even with a complex model to account for trends, and significant false eclipse signals may appear in a non-Gaussian distribution. To quantify the bias and validate an eclipse measurement, we compare both the planet-hosting star and several of its neighbors to a separately chosen control sample of field stars. Re-examining the Rogers et al. Ks-band measurement of CoRoT-1b finds an eclipse 3190 (+370/−440) ppm deep centered at φ_me = 0.50418 (+0.00197/−0.00203). Finally, we provide and recommend the use of selected data sets we generated as a benchmark test for eclipse modeling and analysis routines, and propose criteria to verify eclipse detections.

  12. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.

  13. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    Science.gov (United States)

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery. PMID:26932466

  14. An informational transition in conditioned Markov chains: Applied to genetics and evolution.

    Science.gov (United States)

    Zhao, Lei; Lascoux, Martin; Waxman, David

    2016-08-01

    In this work we assume that we have some knowledge about the state of a population at two known times, when the dynamics is governed by a Markov chain such as a Wright-Fisher model. Such knowledge could be obtained, for example, from observations made on ancient and contemporary DNA, or during laboratory experiments involving long term evolution. A natural assumption is that the behaviour of the population, between observations, is related to (or constrained by) what was actually observed. The present work shows that this assumption has limited validity. When the time interval between observations is larger than a characteristic value, which is a property of the population under consideration, there is a range of intermediate times where the behaviour of the population has reduced or no dependence on what was observed and an equilibrium-like distribution applies. Thus, for example, if the frequency of an allele is observed at two different times, then for a large enough time interval between observations, the population has reduced or no dependence on the two observed frequencies for a range of intermediate times. Given observations of a population at two times, we provide a general theoretical analysis of the behaviour of the population at all intermediate times, and determine an expression for the characteristic time interval, beyond which the observations do not constrain the population's behaviour over a range of intermediate times. The findings of this work relate to what can be meaningfully inferred about a population at intermediate times, given knowledge of terminal states. PMID:27105672

  15. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    Science.gov (United States)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

    In the present paper, we consider a family of continuous-time symmetric random walks indexed by a discretization parameter. Each random walk takes values in a finite set of states that is a subset of the unitary circle, and its stationary probability converges to the uniform distribution on the circle as the discretization is refined. Here we want to study other natural measures, obtained via a limit on the parameter, that are concentrated on some points of the circle. We disturb this process by a fixed potential and study, for each level of discretization, the perturbed stationary measures of the new process. Restricting the potential to the finite state space, we define a non-stochastic semigroup generated by the matrix obtained by adding the restricted potential to the infinitesimal generator of the chain. By the continuous-time Perron theorem one can normalize this semigroup, obtaining a stochastic semigroup which generates a continuous-time Markov chain on the finite state space. This new chain is called the continuous-time Gibbs state associated to the potential, see (Lopes et al. in J Stat Phys 152:894-933, 2013), and it has a stationary probability vector. We assume that the maximum of the potential is attained at a unique point of the circle, from which it follows that the stationary probabilities concentrate at that point. Thus our main goal is to analyze the large deviation principle for this family of stationary probabilities as the discretization is refined. The deviation function, defined on the circle, is obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory; in order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process. For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for this family of Markov chains. Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov chains we analyze.

  16. Modelagem da dinâmica de rebanho de bovino de corte por meio de cadeia de Markov Modeling dynamics of the beef cattle herd through Markov chain

    Directory of Open Access Journals (Sweden)

    Urbano Gomes Pinto de Abreu

    2008-12-01

    Full Text Available The age structure of a herd and the culling strategy for dams are important points in evaluating the efficiency of a beef cattle production system. In this work a breeding herd was monitored in which cows were kept or culled according to a technical criterion based on reproductive performance. A Markov chain was used to simulate the probability of culling at each age throughout the lifetime of a cow and to calculate the life expectancy of a dam in the herd. After the introduction of the breeding season, culling pressure was higher: the culling probability increased and the probability of older dams remaining in the herd decreased. With the introduction of technologies, and especially of controlled mating, the process becomes more dynamic, with unproductive heifers and dams identified and culled, lowering the mean age of cows in the herd and at culling.
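
    Life expectancy in a herd of this kind is a standard absorbing Markov chain computation: with Q the transitions among in-herd (transient) states, the fundamental matrix N = (I - Q)^{-1} gives expected residence times. A NumPy sketch with invented age classes and culling rates, not figures from the study:

```python
import numpy as np

# Illustrative annual transition matrix over cow states; "culled" is absorbing.
# States: age 3-5, age 6-8, age 9+, culled. The rates are made up for the sketch.
P = np.array([
    [0.00, 0.85, 0.00, 0.15],   # age 3-5: ages into the next class or is culled
    [0.00, 0.00, 0.70, 0.30],   # age 6-8
    [0.00, 0.00, 0.60, 0.40],   # age 9+: may persist a further period
    [0.00, 0.00, 0.00, 1.00],   # culled (absorbing)
])

Q = P[:3, :3]                        # transitions among in-herd states
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix
expected_time = N.sum(axis=1)        # expected periods in the herd from each state
print(expected_time)
```

    Raising the culling probabilities in Q (e.g. after introducing a breeding season) shortens these expected residence times, which is exactly the effect reported in the study.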

  17. Stochastic Monte-Carlo Markov Chain Inversions on Models Regionalized Using Receiver Functions

    Science.gov (United States)

    Larmat, C. S.; Maceira, M.; Kato, Y.; Bodin, T.; Calo, M.; Romanowicz, B. A.; Chai, C.; Ammon, C. J.

    2014-12-01

    There is currently strong interest in stochastic approaches to seismic modeling, versus deterministic methods such as gradient methods, due to the ability of stochastic methods to better deal with highly non-linear problems. Another advantage of stochastic methods is that they allow estimation of the a posteriori probability distribution of the derived parameters, i.e. the Bayesian inversion envisioned by Tarantola, allowing quantification of the solution error. The price to pay is that stochastic methods require testing thousands of variations of each unknown parameter and their associated weights to ensure reliable probabilistic inferences. Even with the best high-performance computing resources available, 3D stochastic full-waveform modeling at the regional scale still remains out of reach. We are exploring regionalization as one way to reduce the dimension of the parameter space, allowing the identification of areas in the models that can be treated as one block in a subsequent stochastic inversion. Regionalization is classically performed through the identification of tectonic or structural elements; Lekic & Romanowicz (2011) instead proposed a new approach based on a cluster analysis of tomographic velocity models. Here we present the results of a clustering analysis of the P-wave receiver functions used in the subsequent inversion. Different clustering algorithms and qualities of clustering are tested for different datasets of North America and China. Preliminary results with the k-means clustering algorithm show that an interpolated receiver-function wavefield (Chai et al., GRL, in review) improves the agreement with the geological and tectonic regions of North America compared to the traditional approach of stacked receiver functions. After regionalization, a 1D profile for each region is stochastically inferred using a parallelized code based on Markov chain Monte Carlo (MCMC), modeling surface-wave dispersion and receiver

  18. Supply Chain as Complex Adaptive System and Its Modeling

    Institute of Scientific and Technical Information of China (English)

    Mingming Wang

    2004-01-01

    A supply chain is a complex, hierarchical, integrated, open and dynamic network. Every node in the network is an independent business unit that unites with other organizations to develop its value; the competition and cooperation between these units are the basic impetus of the development and evolution of the supply chain system. The characteristics of the supply chain as a complex adaptive system (CAS) and its modeling are discussed in this paper, and an example demonstrates the feasibility of CAS modeling in supply chain management study.

  19. Markov chains and entropy tests in genetic-based lithofacies analysis of deep-water clastic depositional systems

    Science.gov (United States)

    Borka, Szabolcs

    2016-01-01

    The aim of this study was to examine the relationship between structural elements and so-called genetic lithofacies in a clastic deep-water depositional system. Process sedimentology has recently been gaining importance in the characterization of these systems: recognized facies attributes can be associated with depositional processes, establishing the genetic lithofacies. In this paper this approach is presented through a case study of a Tertiary deep-water sequence of the Pannonian Basin. It was of course necessary to interpret the stratigraphy of the sequences in terms of "general" sedimentology, focusing on the structural elements; for this purpose, well logs and standard deep-water models were applied. The cyclicity of sedimentary sequences can easily be revealed using Markov chains; however, although Markov chain analysis has broad application in mainly fluvial depositional environments, its utilization is uncommon in deep-water systems. In this context the genetic lithofacies were determined and analysed by embedded Markov chains. The randomness of the presence of a lithofacies within a cycle was estimated by entropy tests (post-depositional entropy, pre-depositional entropy, and entropy for the whole system). Subsequently the relationships between lithofacies were revealed and a depositional model (i.e. modal cycle) was produced with a 90% confidence level of stationarity; the non-randomness of the latter was tested by a chi-square test. The consequences of comparing the "general" sequences (composed of architectural elements), the genetic-based sequences (showing the distributions of the genetic lithofacies) and the lithofacies relationships are discussed in detail. On this basis the main depositional channel has the best, and the channelized lobes good, potential hydrocarbon-reservoir attributes, with symmetric alternation of persistent fine-grained sandstone (Facies D) and muddy fine-grained sandstone with traction structures (Facies F)
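
    An embedded Markov chain and its entropies can be computed directly from a facies sequence. The sketch below uses an invented lithofacies string and letter codes, not data from the study:

```python
import numpy as np

# Hypothetical genetic-lithofacies log (letters D, F, G are stand-in codes).
sequence = list("DFDFGDFDFDGFDFDFGDFD")

facies = sorted(set(sequence))
idx = {f: i for i, f in enumerate(facies)}
k = len(facies)

# Embedded chain: count transitions between *different* facies only,
# so the diagonal of the transition matrix is zero by construction.
C = np.zeros((k, k))
for a, b in zip(sequence[:-1], sequence[1:]):
    if a != b:
        C[idx[a], idx[b]] += 1

P = C / C.sum(axis=1, keepdims=True)

# Post-depositional entropy of each facies: Shannon entropy of its row,
# i.e. how unpredictable the facies following it is.
H = -np.sum(np.where(P > 0, P * np.log2(np.where(P > 0, P, 1.0)), 0.0), axis=1)
print(facies)
print(P)
print(H)
```

    Low row entropy flags a facies with a preferred successor, which is how modal cycles of the kind described above are extracted from the transition matrix.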

  20. Prediction on Human Resource Supply/Demand in Nuclear Industry Using Markov Chains Model and Job Coefficient

    International Nuclear Information System (INIS)

    According to a recent report by the OECD/NEA, there is a large imbalance between supply and demand of human resources in the nuclear field. In the U.S., according to a survey of the Nuclear Engineering Department Heads Organization (NEDHO), 174 graduates with B.S. or M.S. degrees entered the nuclear industry in 2004, while total demand in the nuclear industry was about 642 engineers, approximately three times the supply. For other developed western nations, the OECD/NEA report states that the level of imbalance is similar to that of the U.S. However, nations with nuclear power development programs such as Korea, Japan and France seem to face a different supply-and-demand environment from that of the U.S. In this study, the difference in manpower status between the U.S. and Korea is investigated and the nuclear manpower required in Korea in the future is predicted. To investigate the factors that make the difference between the U.S. and NPP-developing countries including Korea, a quantitative manpower planning model, the Markov chains model, is applied. Since the Markov chains model has the strength of analyzing an inflow or push structure, it fits a system governed by the inflow of manpower. A macroscopic status of manpower demand in the nuclear industry is calculated up to 2015 using the job coefficient (JC) and GDP, which are derived from the Survey for Roadmap of Electric Power Industry Manpower Planning. Furthermore, the total numbers of required and supplied manpower up to 2030 were predicted by the JC and the Markov chains model, respectively. Whereas the employee status of nuclear industries has been surveyed annually by KAIF since 1995, the following data from the 10th survey and nuclear energy yearbooks from 1998 to 2005 are applied: (a) the status of the manpower demand of industry, (b) the number of students entering, graduating and getting jobs in nuclear engineering
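
    A push (inflow-driven) Markov manpower model of this kind can be sketched in a few lines. The career states, transition fractions and yearly intake below are invented for illustration only:

```python
import numpy as np

# Illustrative push-flow model: yearly fractions moving between career states.
# States: student, junior engineer, senior engineer, left the field (absorbing).
P = np.array([
    [0.60, 0.30, 0.00, 0.10],
    [0.00, 0.75, 0.20, 0.05],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
])

stock = np.array([300.0, 200.0, 150.0, 0.0])   # initial headcounts per state
intake = np.array([100.0, 0.0, 0.0, 0.0])      # new students entering each year

for year in range(10):
    stock = stock @ P + intake   # push existing stock forward, then add inflow

print(stock.round(1))
```

    Projecting the stock vector forward like this and comparing it with a demand forecast (e.g. one derived from a job coefficient) is the basic supply/demand comparison described above.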

  1. Determination of the stopping power of 4He using Bayesian inference with the Markov chain Monte Carlo algorithm

    International Nuclear Information System (INIS)

    A new method for the experimental determination of stopping powers based on Bayesian Inference with the Markov chain Monte Carlo (MCMC) algorithm has been devised. This method avoids the difficulties related to thin target preparation. By measuring the RBS spectra for a known material, and using the known underlying physics, the stopping powers are determined by best matching the simulated spectra with the experimental spectra. Using silicon, SiO2 and Al2O3 as test cases, good agreement is obtained between calculated and experimental data. (author)

  2. The Markov chain method for solving dead time problems in the space dependent model of reactor noise

    International Nuclear Information System (INIS)

    The discrete time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of space energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point model equations with suitably defined parameters. (author)

  3. Simulation from endpoint-conditioned, continuous-time Markov chains on a finite state space, with applications to molecular evolution

    DEFF Research Database (Denmark)

    Hobolth, Asger; Stone, Eric

    2009-01-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and...
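
    The simplest (if inefficient) way to sample endpoint-conditioned CTMC paths is rejection sampling: simulate unconditional paths with the Gillespie algorithm and keep those matching the observed endpoint. A two-state toy with an invented rate matrix; the paper itself develops more efficient alternatives:

```python
import numpy as np

rng = np.random.default_rng(7)

# Rate matrix of a small CTMC (a two-state toy, e.g. a crude substitution model).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

def sample_path(start, T, rng):
    """Gillespie simulation of one CTMC path on [0, T]: exponential holding
    times, then a jump. Returns a list of (jump_time, new_state) pairs."""
    t, state, path = 0.0, start, [(0.0, start)]
    while True:
        t += rng.exponential(1.0 / -Q[state, state])
        if t >= T:
            return path
        # With two states the jump target is determined; in general, sample it
        # proportionally to the off-diagonal rates of the current row.
        state = 1 - state
        path.append((t, state))

def endpoint_conditioned(start, end, T, rng, max_tries=100_000):
    """Naive rejection sampler: keep only paths ending in the observed state."""
    for _ in range(max_tries):
        path = sample_path(start, T, rng)
        if path[-1][1] == end:
            return path
    raise RuntimeError("no accepted path")

path = endpoint_conditioned(start=0, end=1, T=1.0, rng=rng)
print(path)
```

    Rejection becomes hopeless when the conditioned endpoint is unlikely, which is precisely the regime where the direct and uniformization-based samplers surveyed in work like this are needed.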

  4. A Markov Chain Monte Carlo Algorithm for Infrasound Atmospheric Sounding: Application to the Humming Roadrunner experiment in New Mexico

    Science.gov (United States)

    Lalande, Jean-Marie; Waxler, Roger; Velea, Doru

    2016-04-01

    As infrasonic waves propagate at long range through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update atmospheric properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter searches performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.

  5. Derivation of a Markov state model of the dynamics of a protein-like chain immersed in an implicit solvent

    Energy Technology Data Exchange (ETDEWEB)

    Schofield, Jeremy, E-mail: jmschofi@chem.utoronto.ca; Bayat, Hanif, E-mail: hbayat@chem.utoronto.ca [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada)

    2014-09-07

    A Markov state model of the dynamics of a protein-like chain immersed in an implicit hard sphere solvent is derived from first principles for a system of monomers that interact via discontinuous potentials designed to account for local structure and bonding in a coarse-grained sense. The model is based on the assumption that the implicit solvent interacts on a fast time scale with the monomers of the chain compared to the time scale for structural rearrangements of the chain and provides sufficient friction so that the motion of monomers is governed by the Smoluchowski equation. A microscopic theory for the dynamics of the system is developed that reduces to a Markovian model of the kinetics under well-defined conditions. Microscopic expressions for the rate constants that appear in the Markov state model are analyzed and expressed in terms of a temperature-dependent linear combination of escape rates that themselves are independent of temperature. Excellent agreement is demonstrated between the theoretical predictions of the escape rates and those obtained through simulation of a stochastic model of the dynamics of bond formation. Finally, the Markov model is studied by analyzing the eigenvalues and eigenvectors of the matrix of transition rates, and the equilibration process for a simple helix-forming system from an ensemble of initially extended configurations to mainly folded configurations is investigated as a function of temperature for a number of different chain lengths. For short chains, the relaxation is primarily single-exponential and becomes independent of temperature in the low-temperature regime. The profile is more complicated for longer chains, where multi-exponential relaxation behavior is seen at intermediate temperatures followed by a low temperature regime in which the folding becomes rapid and single exponential. It is demonstrated that the behavior of the equilibration profile as the temperature is lowered can be understood in terms of the

  6. Derivation of a Markov state model of the dynamics of a protein-like chain immersed in an implicit solvent

    International Nuclear Information System (INIS)

    A Markov state model of the dynamics of a protein-like chain immersed in an implicit hard sphere solvent is derived from first principles for a system of monomers that interact via discontinuous potentials designed to account for local structure and bonding in a coarse-grained sense. The model is based on the assumption that the implicit solvent interacts on a fast time scale with the monomers of the chain compared to the time scale for structural rearrangements of the chain and provides sufficient friction so that the motion of monomers is governed by the Smoluchowski equation. A microscopic theory for the dynamics of the system is developed that reduces to a Markovian model of the kinetics under well-defined conditions. Microscopic expressions for the rate constants that appear in the Markov state model are analyzed and expressed in terms of a temperature-dependent linear combination of escape rates that themselves are independent of temperature. Excellent agreement is demonstrated between the theoretical predictions of the escape rates and those obtained through simulation of a stochastic model of the dynamics of bond formation. Finally, the Markov model is studied by analyzing the eigenvalues and eigenvectors of the matrix of transition rates, and the equilibration process for a simple helix-forming system from an ensemble of initially extended configurations to mainly folded configurations is investigated as a function of temperature for a number of different chain lengths. For short chains, the relaxation is primarily single-exponential and becomes independent of temperature in the low-temperature regime. The profile is more complicated for longer chains, where multi-exponential relaxation behavior is seen at intermediate temperatures followed by a low temperature regime in which the folding becomes rapid and single exponential. It is demonstrated that the behavior of the equilibration profile as the temperature is lowered can be understood in terms of the
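The eigenvalue analysis described above can be sketched for a toy three-state rate matrix (the states and rates below are illustrative assumptions, not the paper's model): the zero eigenvalue yields the equilibrium distribution and the negative eigenvalues yield the relaxation times.

```python
import numpy as np

# Illustrative 3-state rate matrix K (column convention: K[i, j] is the
# rate j -> i, so columns sum to zero); states: extended, intermediate,
# folded.
K = np.array([[-1.0,  0.2,  0.0],
              [ 1.0, -1.2,  0.1],
              [ 0.0,  1.0, -0.1]])

evals, evecs = np.linalg.eig(K)
order = np.argsort(evals.real)[::-1]     # zero (equilibrium) mode first
evals = evals.real[order]

relaxation_times = -1.0 / evals[1:]      # slowest modes of equilibration
equilibrium = evecs.real[:, order[0]]
equilibrium = equilibrium / equilibrium.sum()
```

A single dominant relaxation time corresponds to the single-exponential equilibration profile the paper reports for short chains; several comparable times produce the multi-exponential behaviour seen for longer chains.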

  7. Obesity status transitions across the elementary years: Use of Markov chain modeling

    Science.gov (United States)

    Overweight and obesity status transition probabilities were assessed using first-order Markov transition models applied to elementary school children. Complete longitudinal data across eleven assessments were available from 1,494 elementary school children (from 7,599 students in 41 out of 45 school...
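The first-order transition model in this record amounts to counting category-to-category moves between consecutive assessments and row-normalising. A minimal sketch with made-up data (the status codes and sequences below are illustrative, not the study's):

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Maximum-likelihood first-order Markov transition matrix from
    longitudinal category sequences (one row per child, one column per
    assessment)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    totals = counts.sum(axis=1, keepdims=True)
    # row-normalise; unobserved states keep an all-zero row
    return np.divide(counts, totals, out=np.zeros_like(counts),
                     where=totals > 0)

# toy data: 0 = healthy weight, 1 = overweight, 2 = obese
sequences = [[0, 0, 1, 1, 1], [0, 0, 0, 0, 1],
             [1, 2, 2, 2, 2], [0, 1, 1, 2, 2]]
P = transition_matrix(sequences, 3)
```

Entry `P[i, j]` estimates the probability of moving from status `i` at one assessment to status `j` at the next.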

  8. A new Markov Binomial distribution.

    OpenAIRE

    Omey, Edward; Minkova, Leda D.

    2011-01-01

    In this paper, we introduce a two-state homogeneous Markov chain and define a geometric distribution related to this Markov chain. We also define a negative binomial (NB) distribution, analogous to the classical case, related to the interrupted Markov chain; the new binomial distribution is likewise related to the interrupted Markov chain. Some characterization properties of the geometric distributions are given. Recursion formulas and probability mass functions for the NB distribution and the new...
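The geometric distribution attached to a two-state homogeneous chain can be seen directly by simulating the first-passage time from state 0 to state 1; a sketch (the retention probability `p00` is an arbitrary illustrative value):

```python
import random

def waiting_time(p00, rng):
    """Steps until a two-state homogeneous Markov chain started in state 0
    first enters state 1 (per-step transition probability 1 - p00)."""
    k = 1
    while rng.random() < p00:   # stay in state 0 with probability p00
        k += 1
    return k

rng = random.Random(42)
p00 = 0.7
n = 100_000
times = [waiting_time(p00, rng) for _ in range(n)]
mean_t = sum(times) / n         # geometric theory: 1 / (1 - p00)
```

The empirical mean should approach the geometric expectation 1/(1 − p00) ≈ 3.33 for this choice of p00.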

  9. Predicting the Trend of Land Use Changes Using Artificial Neural Network and Markov Chain Model (Case Study: Kermanshah City

    Directory of Open Access Journals (Sweden)

    Behzad Saeedi Razavi

    2014-04-01

    Full Text Available Nowadays, cities are expanding and developing rapidly, so that the urban development process is currently one of the most important issues facing researchers in urban studies. Beyond the growth of the cities themselves, how land use changes at the macro level is also of interest. Studying the changes and degradation of resources in past years, as well as assessing the feasibility of and predicting these changes in future years, can play a significant role in planning, optimal use of resources, and harnessing non-normative changes in the future. There are diverse approaches for modeling land use and cover changes, among which is the Markov chain model. In this study, the changes in land use and land cover in Kermanshah City, Iran over 19 years were studied using multi-temporal Landsat satellite images from 1987, 2000 and 2006, side information, and a Markov chain model. Results show a decreasing trend in range land, forest, garden and green space area and, on the other hand, an increase in residential land, agriculture and water, suggesting a general trend of degradation in the study area through the growth of residential land and agriculture at the expense of other land uses. Finally, the state of the land use classes 19 years ahead (2025) was predicted using the Markov model. According to the change prediction matrix based on the maps of 1987 and 2006, it is likely that 82% of residential land, 58.51% of agriculture, 34.47% of water, 8.94% of green space, 30.78% of gardens, 23.93% of waste land and 16.76% of range land will remain unchanged from 2006 to 2025, making residential land and green space the most and the least sustainable classes, respectively.
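A Markov-chain land-cover projection of this kind is a single matrix-vector step: current class shares multiplied by a transition matrix whose diagonal holds the "remain unchanged" probabilities. The matrix and area shares below are hypothetical (only the diagonal structure mirrors the kind of figures quoted above), not the study's values.

```python
import numpy as np

# Hypothetical 3-class transition matrix for one 19-year step
# (rows: from, columns: to; classes: residential, agriculture, range land).
# Diagonal entries play the role of the "remain unchanged" shares.
P = np.array([[0.82, 0.10, 0.08],
              [0.30, 0.59, 0.11],
              [0.45, 0.38, 0.17]])

shares_2006 = np.array([0.25, 0.40, 0.35])   # hypothetical area shares
shares_2025 = shares_2006 @ P                # one Markov step forward
```

With these illustrative numbers the residential share grows while range land shrinks, the qualitative pattern the record describes.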

  10. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  11. A Markov chain approach to modelling charge exchange processes of an ion beam in monotonically increasing or decreasing potentials

    International Nuclear Information System (INIS)

    A Markov chain method is presented as an alternative approach to Monte Carlo simulations of charge exchange collisions by an energetic hydrogen ion beam with a cold background hydrogen gas. This method was used to determine the average energy of the resulting energetic neutrals along the path of the beam. A comparison with Monte Carlo modelling showed a good agreement but with the advantage that it required much less computing time and produced no numerical noise. In particular, the Markov chain method works well for monotonically increasing or decreasing electrostatic potentials. Finally, a good agreement is obtained with experimental results from Doppler shift spectroscopy on energetic beams from a hollow cathode discharge. In particular, the average energy of ions that undergo charge exchange reaches a plateau that can be well below the full energy that might be expected from the applied voltage bias, depending on the background gas pressure. For example, pressures of ∼20 mTorr limit the ion energy to ∼20% of the applied voltage
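The advantage of a chain-based calculation over Monte Carlo noted above can be illustrated with a crude one-dimensional toy model: an ion falls through a linearly increasing potential divided into equal slabs, with a fixed charge-exchange probability per slab (the slab count and probability are illustrative assumptions, not the paper's physics). The geometric exchange-position distribution gives the mean neutral energy in closed form, which the noisy Monte Carlo estimate merely approximates.

```python
import random

def chain_mean_energy(n_steps, p_cx):
    """Exact (Markov-chain) mean fractional energy of fast neutrals born by
    charge exchange: the exchange slab is geometrically distributed,
    truncated at the end of the acceleration gap."""
    probs = [(1 - p_cx) ** (k - 1) * p_cx for k in range(1, n_steps + 1)]
    norm = sum(probs)
    return sum(k / n_steps * q
               for k, q in zip(range(1, n_steps + 1), probs)) / norm

def mc_mean_energy(n_steps, p_cx, n_ions, rng):
    """Monte Carlo estimate of the same quantity."""
    total, count = 0.0, 0
    for _ in range(n_ions):
        for k in range(1, n_steps + 1):
            if rng.random() < p_cx:
                total += k / n_steps   # neutral keeps energy gained so far
                count += 1
                break
    return total / count

exact = chain_mean_energy(50, 0.15)
mc = mc_mean_energy(50, 0.15, 50_000, random.Random(11))
```

Even this toy reproduces the qualitative plateau: with frequent charge exchange the mean neutral energy sits well below the full potential drop.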

  12. Estimating Lower Bound and Upper Bound of a Markov chain over a noisy communication channel with Poisson distribution

    Directory of Open Access Journals (Sweden)

    Vinay Mahajan

    2012-06-01

    Full Text Available Under the assumption that the encoders' observations are conditionally independent Markov chains given an unobserved time-invariant random variable, results on the structure of optimal real-time encoding and decoding functions are obtained. The problem with noiseless channels and perfect memory at the receiver is then considered. A new methodology to find the structure of optimal real-time encoders is employed. A sufficient statistic with a time-invariant domain is found for this problem. This methodology exploits the presence of common information between the encoders and the receiver when communication is over noiseless channels. In this paper we estimate the lower bound and upper bound and define the encoder. The previous design approach followed a Markov chain approach to estimate the upper bound and define the encoder; here we use the Poisson distribution to find the lower and upper bounds. The Poisson distribution can be viewed as an approximation to the binomial distribution. The approximation is good enough to be useful even when the sample size (N) is only moderately large (say N > 50) and the probability (p) is relatively small (p < .2). The advantage of the Poisson distribution, of course, is that if N is large you need only know p to determine the approximate distribution of events. With the binomial distribution you also need to know N.
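The quality of the Poisson approximation invoked above is easy to check numerically; the `n` and `p` below are illustrative values in the regime the abstract describes:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

n, p = 100, 0.05              # N large, p small, per the abstract's regime
lam = n * p                   # the Poisson needs only this one number
max_gap = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
              for k in range(n + 1))
```

The largest pointwise gap between the two mass functions is small, which is exactly the advantage claimed: the Poisson side depends only on the single parameter lam = N * p.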

  13. Averaging over fast variables in the fluid limit for Markov chains: application to the supermarket model with memory

    CERN Document Server

    Luczak, M J

    2010-01-01

    We set out a general procedure which allows the approximation of certain Markov chains by the solutions of differential equations. The chains considered have some components which oscillate rapidly and randomly, while others are close to deterministic. The limiting dynamics are obtained by averaging the drift of the latter with respect to a local equilibrium distribution of the former. Some general estimates are proved under a uniform mixing condition on the fast variable which give explicit error probabilities for the fluid approximation. Mitzenmacher, Prabhakar and Shah [MPS] introduced a variant with memory of the `join the shortest queue' or `supermarket' model, and obtained a limit picture for the case of a stable system in which the number of queues and the total arrival rate are large. In this limit, the empirical distribution of queue sizes satisfies a differential equation, while the memory of the system oscillates rapidly and randomly. We illustrate our general fluid limit estimate in giving a ...
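The memoryless supermarket model itself is easy to simulate directly; the sketch below (queue count, load, and event count are arbitrary choices) exhibits the doubly exponential queue-length tail that the fluid limit predicts for "join the shortest of d sampled queues" routing.

```python
import random

def supermarket_sim(n_queues, lam, d, n_events, rng):
    """Event-driven simulation of the supermarket model: arrivals at rate
    lam per queue join the shortest of d uniformly sampled queues; each
    nonempty queue serves at rate 1."""
    q = [0] * n_queues
    busy = 0
    arrival_rate = lam * n_queues
    for _ in range(n_events):
        if busy == 0 or rng.random() < arrival_rate / (arrival_rate + busy):
            # arrival: join the shortest of d sampled queues
            i = min(rng.sample(range(n_queues), d), key=q.__getitem__)
            if q[i] == 0:
                busy += 1
            q[i] += 1
        else:
            while True:                 # rejection-sample a busy queue
                i = rng.randrange(n_queues)
                if q[i] > 0:
                    break
            q[i] -= 1
            if q[i] == 0:
                busy -= 1
    return q

rng = random.Random(7)
q = supermarket_sim(500, 0.7, 2, 200_000, rng)
# fraction of queues with at least k jobs; fluid limit: lam**(2**k - 1)
tail = [sum(1 for x in q if x >= k) / len(q) for k in range(1, 5)]
```

For d = 2 and load 0.7, the fluid-limit tail 0.7, 0.7³, 0.7⁷, ... drops off far faster than the geometric tail of random routing.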

  14. Sequence-based Parameter Estimation for an Epidemiological Temporal Aftershock Forecasting Model using Markov Chain Monte Carlo Simulation

    Science.gov (United States)

    Jalayer, Fatemeh; Ebrahimian, Hossein

    2014-05-01

    Introduction: The first few days after the occurrence of a strong earthquake, in the presence of an ongoing aftershock sequence, are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
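The key idea — propagating posterior parameter uncertainty into the forecast of event counts — can be sketched with a drastically simplified stand-in for ETAS: a constant-rate Poisson process whose rate has a Gamma posterior. All numbers below are illustrative assumptions, and the stand-in deliberately omits the epidemic triggering structure.

```python
import math
import random

def predictive_counts(n_obs, t_obs, t_fore, n_draws, rng):
    """Simulation-based forecast with parameter uncertainty, using a
    constant-rate Poisson stand-in for ETAS: draw the rate from its Gamma
    posterior (under an uninformative prior), then draw an event count
    for the forecasting interval."""
    counts = []
    for _ in range(n_draws):
        lam = rng.gammavariate(n_obs + 1, 1.0 / t_obs)  # posterior rate draw
        mean = lam * t_fore
        u, k = rng.random(), 0
        p = math.exp(-mean)        # Poisson draw by pmf inversion
        c = p
        while u > c and p > 0.0:
            k += 1
            p *= mean / k
            c += p
        counts.append(k)
    return counts

rng = random.Random(5)
# 120 events observed over 3 days; forecast the next day, 20k draws
counts = predictive_counts(120, 3.0, 1.0, 20_000, rng)
mean_count = sum(counts) / len(counts)
```

The resulting predictive distribution is overdispersed relative to a pure Poisson forecast, because rate uncertainty is folded in, which is the robustness property the record emphasises.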

  15. A complete solution to Blackwell's unique ergodicity problem for hidden Markov chains

    CERN Document Server

    Chigansky, Pavel

    2009-01-01

    We develop necessary and sufficient conditions for uniqueness of the invariant measure of the filtering process associated to an ergodic hidden Markov model in a finite or countable state space. These results provide a complete solution to a problem posed by Blackwell (1957), and subsume earlier partial results due to Kaijser, Kochman and Reeds. The proofs of our main results are based on the stability theory of nonlinear filters.
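The filtering process whose invariant measure is at issue here is the normalised forward recursion; a minimal sketch for a two-state hidden Markov model (the transition and observation matrices are illustrative assumptions):

```python
def hmm_filter(P, B, observations, pi0):
    """Normalised forward (filtering) recursion for a discrete HMM:
    P[i][j] = transition probability i -> j, B[i][y] = likelihood of
    observing symbol y in state i, pi0 = initial distribution.
    Returns the filtered distribution after each observation."""
    n = len(pi0)
    pi = list(pi0)
    out = []
    for y in observations:
        # predict one step, correct with the likelihood of y, normalise
        pred = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        post = [pred[j] * B[j][y] for j in range(n)]
        z = sum(post)
        pi = [p / z for p in post]
        out.append(pi)
    return out

P = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.8, 0.2], [0.3, 0.7]]     # noisy readings of the hidden state
filtered = hmm_filter(P, B, [0, 0, 1, 1, 1], [0.5, 0.5])
```

The filtering process studied in the record is exactly the sequence of these normalised distributions, viewed as a Markov process in its own right.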

  16. On Dobrushin Ergodicity Coefficient and weak ergodicity of Markov Chains on Jordan Algebras

    International Nuclear Information System (INIS)

    In this paper we study certain properties of Dobrushin's ergodicity coefficient for stochastic operators defined on non-associative L1-spaces associated with semi-finite JBW-algebras. Such results extend the well-known classical ones to a non-associative setting. This allows us to investigate the weak ergodicity of nonhomogeneous discrete Markov processes (NDMP) by means of the ergodicity coefficient. We provide necessary and sufficient conditions for such processes to satisfy the weak ergodicity.
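For ordinary stochastic matrices on a finite state space, the classical Dobrushin coefficient underlying these generalisations is directly computable; a sketch (`P` is an arbitrary illustrative stochastic matrix), with the submultiplicativity that drives weak-ergodicity arguments checked numerically:

```python
def dobrushin_coefficient(P):
    """Classical Dobrushin ergodicity coefficient: the maximum over row
    pairs of half the L1 distance between the rows. delta(P) < 1 makes
    P a contraction in total variation."""
    n = len(P)
    return max(
        0.5 * sum(abs(P[i][k] - P[j][k]) for k in range(n))
        for i in range(n) for j in range(i + 1, n)
    )

def matmul(A, B):
    """Plain matrix product for small row-stochastic matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

P = [[0.5, 0.3, 0.2],
     [0.4, 0.4, 0.2],
     [0.1, 0.3, 0.6]]
delta = dobrushin_coefficient(P)             # 0.4 for this P
delta2 = dobrushin_coefficient(matmul(P, P)) # <= delta**2
```

The inequality delta(PQ) <= delta(P)·delta(Q) is what forces products of transition matrices toward rank one, i.e. weak ergodicity.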

  17. State Dependence and Wage Dynamics: A Heterogeneous Markov Chain Model for Wage Mobility in Austria

    OpenAIRE

    Weber, Andrea

    2002-01-01

    Abstract: The behaviour of individual movements in the wage distribution over time can be described by a Markov process. To investigate wage mobility in terms of transitions between quintiles in the wage distribution we apply a fixed effects panel estimation method suggested by Honoré and Kyriazidou (2000). This method of mobility measurement is robust to data contamination like all methods that treat fractiles. Moreover it allows for the inclusion of exogenous variables that change over time...

  18. Modelling human control behaviour with a Markov-chain switched bank of control laws

    OpenAIRE

    Murray-Smith, R.

    1998-01-01

    A probabilistic model of human control behaviour is described. It assumes that human behaviour can be represented by switching among a number of relatively simple behaviours. The model structure is closely related to the Hidden Markov Models (HMMs) commonly used for speech recognition. An HMM with context-dependent transition functions switching between linear control laws is identified from experimental data. The applicability of the approach is demonstrated in a pitch control task for a sim...

  19. The distribution of genome shared identical by descent for a pair of full sibs by means of the continuous time Markov chain

    Science.gov (United States)

    Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi

    2015-12-01

    This paper applies Markov chains to the genome shared identical by descent (IBD) by two individuals in a full-sib model. The full-sib model is a continuous-time Markov chain with three states. In the full-sib model, we seek the cumulative distribution function of the number of sub-segments having 2 IBD haplotypes within a chromosome segment of length t Morgans, and the cumulative distribution function of the number of sub-segments having at least 1 IBD haplotype within a chromosome segment of length t Morgans. These cumulative distribution functions are developed by means of the moment generating function.
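Transition probabilities for a three-state continuous-time chain over a segment of length t Morgans come from the matrix exponential of the generator. The sketch below uses a hypothetical IBD generator (the rates are illustrative, not derived from the paper) whose stationary law happens to be the familiar (1/4, 1/2, 1/4) IBD distribution for full sibs, together with a simple scaling-and-squaring matrix exponential.

```python
import numpy as np

def expm(A, n_squarings=20, n_terms=12):
    """Matrix exponential via scaling-and-squaring with a truncated Taylor
    series -- adequate for small, well-scaled rate matrices."""
    B = A / (2.0 ** n_squarings)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ B / k
        E = E + term
    for _ in range(n_squarings):
        E = E @ E
    return E

# Hypothetical generator for the number of IBD haplotypes (states 0, 1, 2)
# shared by full sibs along a chromosome, in events per Morgan:
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  2.0, -2.0]])

P_t = expm(Q * 0.5)   # transition probabilities over t = 0.5 Morgan
```

`P_t[i, j]` is then the probability of moving from i to j IBD haplotypes across half a Morgan under these assumed rates.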

  20. Additive N-Step Markov Chains as Prototype Model of Symbolic Stochastic Dynamical Systems with Long-Range Correlations

    CERN Document Server

    Mayzelis, Z A; Usatenko, O V; Yampolskii, V A

    2006-01-01

    A theory of symbolic dynamic systems with long-range correlations based on the consideration of the binary N-step Markov chains developed earlier in Phys. Rev. Lett. 90, 110601 (2003) is generalized to the biased case (unequal numbers of zeros and unities in the chain). In the model, the conditional probability that the i-th symbol in the chain equals zero (or unity) is a linear function of the number of unities (zeros) among the preceding N symbols. The correlation and distribution functions as well as the variance of the number of symbols in the words of arbitrary length L are obtained analytically and verified by numerical simulations. A self-similarity of the studied stochastic process is revealed and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be the Gaussian with the variance being n...
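The linear conditional probability described above is straightforward to simulate. The sketch below uses the unbiased form P(1 | k unities among the last N) = 1/2 + mu(2k/N − 1) with illustrative N and mu, and checks that persistent correlations (positive lag-1 autocorrelation) emerge.

```python
import random

def additive_markov_chain(n, N, mu, rng):
    """Binary additive N-step Markov chain (unbiased case): the
    probability of emitting 1 is linear in the number k of unities
    among the preceding N symbols."""
    a = [rng.randrange(2) for _ in range(N)]
    k = sum(a)                       # running count of ones in the window
    for i in range(N, n):
        p1 = 0.5 + mu * (2.0 * k / N - 1.0)   # linear memory function
        s = 1 if rng.random() < p1 else 0
        a.append(s)
        k += s - a[i - N]            # slide the N-symbol window
    return a

rng = random.Random(3)
chain = additive_markov_chain(200_000, 10, 0.3, rng)
m = sum(chain) / len(chain)
num = sum((x - m) * (y - m) for x, y in zip(chain, chain[1:]))
den = sum((x - m) ** 2 for x in chain)
rho1 = num / den                     # positive for persistent mu > 0
```

With mu = 0 the chain degenerates to fair coin flips and rho1 vanishes; positive mu produces the persistent correlations the paper analyses.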