WorldWideScience

Sample records for symmetric markov processes

  1. Dirichlet forms and symmetric Markov processes

    CERN Document Server

    Oshima, Yoichi; Fukushima, Masatoshi

    2010-01-01

    Since the publication of the first edition in 1994, this book has attracted constant interest from readers and is by now regarded as a standard reference for the theory of Dirichlet forms. For the present second edition, the authors not only revised the existing text but also added some new sections as well as several exercises with solutions. The book is addressed to researchers and graduate students who wish to comprehend the area of Dirichlet forms and symmetric Markov processes.

  2. Symmetric Markov Processes, Time Change, and Boundary Theory (LMS-35)

    CERN Document Server

    Chen, Zhen-Qing

    2011-01-01

    This book gives a comprehensive and self-contained introduction to the theory of symmetric Markov processes and symmetric quasi-regular Dirichlet forms. In a detailed and accessible manner, Zhen-Qing Chen and Masatoshi Fukushima cover the essential elements and applications of the theory of symmetric Markov processes, including recurrence/transience criteria, probabilistic potential theory, additive functional theory, and time change theory. The authors develop the theory in a general framework of symmetric quasi-regular Dirichlet forms, in a manner unified with that of regular Dirichlet forms.

  3. Markov Jump Processes Approximating a Non-Symmetric Generalized Diffusion

    International Nuclear Information System (INIS)

    Limić, Nedžad

    2011-01-01

    Consider a non-symmetric generalized diffusion X(⋅) in ℝ^d determined by the differential operator A(x) = -Σ_{ij} ∂_i a_{ij}(x) ∂_j + Σ_i b_i(x) ∂_i. In this paper the diffusion process is approximated by Markov jump processes X_n(⋅), on homogeneous and isotropic grids G_n ⊂ ℝ^d, which converge in distribution in the Skorokhod space D([0,∞), ℝ^d) to the diffusion X(⋅). The generators of X_n(⋅) are constructed explicitly. Due to the homogeneity and isotropy of the grids, the proposed method for d ≥ 3 can be applied to processes for which the diffusion tensor {a_{ij}(x)}_{i,j=1}^d fulfills an additional condition. The proposed construction offers a simple method for simulating sample paths of non-symmetric generalized diffusions. Simulations are carried out in terms of the jump processes X_n(⋅). For piecewise constant functions a_{ij} on ℝ^d and piecewise continuous functions a_{ij} on ℝ^2, the construction and principal algorithm are described, enabling an easy implementation into a computer code.
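
    The grid construction above can be illustrated with a minimal one-dimensional sketch (a toy special case, not the paper's d ≥ 3 construction): a birth-death jump process on the grid hℤ whose jump rates match the drift and diffusion coefficients of a non-divergence-form generator as h → 0. The coefficient functions and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_jump_approx(a, b, x0=0.0, h=0.05, t_max=1.0):
    """Simulate a birth-death jump process on the grid h*Z approximating
    the 1D diffusion with (non-divergence-form) generator
        A f = a(x) f'' + b(x) f'.
    The jump rates to the right/left neighbour,
        lam_plus  = a(x)/h**2 + b(x)/(2*h),
        lam_minus = a(x)/h**2 - b(x)/(2*h),
    reproduce drift b and diffusion coefficient a in the h -> 0 limit
    (h must be small enough that both rates stay nonnegative)."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max:
        lam_p = a(x) / h**2 + b(x) / (2 * h)
        lam_m = a(x) / h**2 - b(x) / (2 * h)
        total = lam_p + lam_m
        t += rng.exponential(1.0 / total)               # exponential holding time
        x += h if rng.random() < lam_p / total else -h  # jump right or left
        path.append((t, x))
    return path

# Ornstein-Uhlenbeck-like example: a(x) = 1/2, b(x) = -x
path = simulate_jump_approx(lambda x: 0.5, lambda x: -x)
```

    Expanding the jump-process generator to second order in h shows the first-order term is (lam_plus − lam_minus)·h = b(x) and the second-order term is (lam_plus + lam_minus)·h²/2 = a(x), which is why this simple rate choice works.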

  4. Markov processes

    CERN Document Server

    Kirkwood, James R

    2015-01-01

    Review of Probability: Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises. Discrete-Time, Finite-State Markov Chains: Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler's Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises. Discrete-Time, Infinite-State Markov Chains: Renewal Processes; Delayed Renewal Processes; Equilibrium State f...

  5. Recursive Markov Process

    OpenAIRE

    Hidaka, Shohei

    2015-01-01

    A Markov process which is constructed recursively arises in stochastic games with Markov strategies. In this study, we define a special class of random processes, called the recursive Markov process, which has infinitely many states but can be expressed in closed form. We derive the characteristic equation which the marginal stationary distribution of an arbitrary recursive Markov process needs to satisfy.

  6. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  7. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and

  8. Quantum Markov Processes

    Science.gov (United States)

    Kümmerer, Burkhard

    These notes give an introduction to some aspects of quantum Markov processes. Quantum Markov processes come into play whenever one aims at a mathematical description of the irreversible time behaviour of quantum systems. Indeed, there is hardly a book on quantum optics without at least a chapter on quantum Markov processes. However, it is not always easy to recognize the basic concepts of probability theory in families of creation and annihilation operators on Fock space. Therefore, in these lecture notes much emphasis is put on explaining the intuition behind the mathematical machinery of classical and quantum probability. The lectures start by describing how probabilistic intuition is cast into the mathematical language of classical probability (Sects. 4.1-4.3). Later on, we show how this formulation can be extended so as to incorporate the Hilbert space formulation of quantum mechanics (Sects. 4.4, 4.5). Quantum Markov processes are constructed and discussed in Sects. 4.6, 4.7, and we add some further discussions and examples in Sects. 4.8-4.11.

  9. Markov reward processes

    Science.gov (United States)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
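
    The expected steady-state reward rate described here is simply the stationary distribution of the chain weighted by the per-state reward rates. A minimal sketch, using a hypothetical two-state availability model (the failure and repair rates are invented for illustration):

```python
import numpy as np

# Hypothetical two-state availability model: state 0 = up (reward rate 1),
# state 1 = down (reward rate 0); failure rate lam, repair rate mu.
lam, mu = 0.01, 1.0
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])       # CTMC generator matrix
r = np.array([1.0, 0.0])           # reward rate attached to each state

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

steady_state_reward = pi @ r       # expected steady-state reward rate
# For this model it equals the closed-form availability mu / (lam + mu).
assert np.isclose(steady_state_reward, mu / (lam + mu))
```

    The same `pi @ r` computation applies unchanged to larger models; only the construction of Q and r depends on the system being modeled.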

  10. Process Algebra and Markov Chains

    NARCIS (Netherlands)

    Brinksma, Hendrik; Hermanns, H.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  11. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution

  12. Reviving Markov processes and applications

    International Nuclear Information System (INIS)

    Cai, H.

    1988-01-01

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). Applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process and the first entrance decomposition (taboo probability)

  13. A relation between non-Markov and Markov processes

    International Nuclear Information System (INIS)

    Hara, H.

    1980-01-01

    With the aid of a transformation technique, it is shown that some memory effects in non-Markov processes can be eliminated. In other words, some non-Markov processes can be rewritten in the form of a random walk, i.e., a Markov process. To this end, two model processes which have some memory or correlation in the random walk process are introduced. An explanation of the memory in the processes is given. (orig.)

  14. Tokunaga self-similarity for symmetric homogeneous Markov chains

    Science.gov (United States)

    Kovchegov, Y.; Zaliapin, I.

    2010-12-01

    Hierarchical branching organization is ubiquitous in nature. It is readily seen in river basins, drainage networks, bronchial passages, botanical trees, and snowflakes, to mention but a few. Empirical evidence suggests that one can describe many natural hierarchies by so-called Tokunaga self-similar trees (SSTs) [Shreve, 1969; Tokunaga, 1978; Ossadnik, 1992; Peckham, 1995; Newman et al., 1997; Pelletier and Turcotte, 2000]; Tokunaga SSTs have been proven to describe the Galton-Watson critical branching [Burd et al., 2000] and a general particle coagulation process [Gabrielov et al., 1999]. Tokunaga SSTs form a special two-parametric class of SSTs that preserves its statistical properties under the operation of pruning, i.e., cutting the leaves. It has been conjectured (Webb and Zaliapin, 2009; Zaliapin et al., 2009) that Tokunaga self-similarity is a characteristic property of the inverse aggregation (coagulation) process. This study provides further evidence in support of this hypothesis by focusing on trees that describe the topological structure of level sets of a time series, so-called level-set trees (LSTs). We prove that the LST for a symmetric homogeneous Markov chain (HMC) is a Tokunaga SST with the same parameters as the famous Shreve tree and critical Galton-Watson tree. We show, furthermore, that the Tokunaga property holds for any transformation F[X(G(t))] of a symmetric HMC X(t), where F and G are monotone increasing functions, and, as a result, for regular Brownian motion. At the same time, the Tokunaga property does not hold in general for asymmetric HMCs, Brownian motion with drift, ARMA models, and some other conventional models. We discuss the relation of our results to the Tokunaga self-similarity of the nearest-neighbor trees for random point sets. References: 1. Gabrielov, A., W.I. Newman, D.L. Turcotte (1999) An exactly soluble hierarchical clustering model: inverse cascades, self-similarity, and scaling. Phys. Rev. E, 1999, 60, 5293-5300. 2

  15. Markov Decision Processes in Practice

    NARCIS (Netherlands)

    Boucherie, Richardus J.; van Dijk, N.M.

    2017-01-01

    It is over 30 years ago since D.J. White started his series of surveys on practical applications of Markov decision processes (MDP), over 20 years after the phenomenal book by Martin Puterman on the theory of MDP, and over 10 years since Eugene A. Feinberg and Adam Shwartz published their Handbook

  16. Markov processes in Thermodynamics and Turbulence

    OpenAIRE

    Nickelsen, Daniel

    2014-01-01

    This thesis deals with Markov processes in stochastic thermodynamics and fully developed turbulence. In the first part of the thesis, a detailed account on the theory of Markov processes is given, forming the mathematical fundament. In the course of developing the theory of continuous Markov processes, stochastic differential equations, the Fokker-Planck equation and Wiener path integrals are introduced and embedded into the class of discontinuous Markov processes. Special attention is pai...

  17. A canonical representation for aggregated Markov processes

    OpenAIRE

    Larget, Bret

    1998-01-01

    A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization ...

  18. Open Markov Processes and Reaction Networks

    Science.gov (United States)

    Swistock Pollard, Blake Stephen

    2017-01-01

    We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…

  19. Markov processes characterization and convergence

    CERN Document Server

    Ethier, Stewart N

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists."[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference."-American Scientist"There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings."-Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts"Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook."-Journal of Statistical PhysicsMarkov Proce...

  20. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...

  1. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...

  2. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.

  3. Markov process of muscle motors

    International Nuclear Information System (INIS)

    Kondratiev, Yu; Pechersky, E; Pirogov, S

    2008-01-01

    We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors

  4. Inhomogeneous Markov point processes by transformation

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Nielsen, Linda Stougaard

    2000-01-01

    We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach......, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail....

  5. Generated dynamics of Markov and quantum processes

    CERN Document Server

    Janßen, Martin

    2016-01-01

    This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic features of quantum processes are oscillations, interference, many stationary states in bounded systems and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes is relaxation to a single stationary state. Quantum processes apply to systems where all variables that control reversibility are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant for the dynamic description. Their absence renders the dynamics irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put in...

  6. Timed Comparisons of Semi-Markov Processes

    DEFF Research Database (Denmark)

    Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio

    2018-01-01

    Semi-Markov processes are Markovian processes in which the firing time of transitions is modelled by probabilistic distributions over the positive reals, interpreted as the probability of firing a transition at a certain moment in time. In this paper we consider the trace-based semantics of semi-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being “faster than” between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...... hardness results showing in particular that this relation is undecidable. However, we present an additive approximation algorithm for a time-bounded variant of the faster-than problem over semi-Markov processes with slow residence-time functions, and a coNP algorithm for the exact faster-than problem over...

  7. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  8. On Continuous Time Markov Processes in Bargaining

    NARCIS (Netherlands)

    Houba, H.E.D.

    2008-01-01

    For bilateral stochastic bargaining procedures embedded in stable homogeneous continuous-time Markov processes, we show unusual limit results as the time between rounds vanishes. Standard convergence results require that some states are instantaneous.

  9. Finite Markov processes and their applications

    CERN Document Server

    Iosifescu, Marius

    2007-01-01

    A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models.The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic ch

  10. Financial Applications of Bivariate Markov Processes

    OpenAIRE

    Ortobelli Lozza, Sergio; Angelelli, Enrico; Bianchi, Annamaria

    2011-01-01

    This paper describes a methodology to approximate a bivariate Markov process by means of a proper Markov chain and presents possible financial applications in portfolio theory, option pricing and risk management. In particular, we first show how to model the joint distribution between market stochastic bounds and future wealth and propose an application to large-scale portfolio problems. Secondly, we examine an application to VaR estimation. Finally, we propose a methodology...

  11. On the entropy of a hidden Markov process.

    Science.gov (United States)

    Jacquet, Philippe; Seroussi, Gadiel; Szpankowski, Wojciech

    2008-05-01

    We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order.
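
    The entropy rate discussed here can be estimated numerically along the lines the abstract sketches. The following is a Monte Carlo illustration (the parameters p and eps are hypothetical) using the normalized forward recursion, whose accumulated log-normalizers estimate the top Lyapunov exponent of the product of random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: a symmetric binary Markov source that flips
# with probability p, observed through a binary symmetric channel with
# crossover probability eps.
p, eps = 0.3, 0.1
A = np.array([[1 - p, p], [p, 1 - p]])          # Markov transition matrix
B = np.array([[1 - eps, eps], [eps, 1 - eps]])  # channel emission matrix

def entropy_rate_mc(n=100_000):
    """Monte Carlo estimate of the HMP entropy rate in bits per symbol,
    -(1/n) log2 P(y_1..y_n), via the normalized forward recursion."""
    x = 0
    alpha = np.array([0.5, 0.5])                # stationary initial law
    loglik = 0.0
    for _ in range(n):
        if rng.random() < p:                    # hidden chain flips w.p. p
            x = 1 - x
        y = x if rng.random() > eps else 1 - x  # noisy observation
        alpha = (alpha @ A) * B[:, y]           # one random matrix factor
        s = alpha.sum()
        loglik += np.log2(s)
        alpha /= s                              # renormalize to avoid underflow
    return -loglik / n

H = entropy_rate_mc()
```

    For these parameters H lies between the conditional entropy h2(p(1−eps)+(1−p)eps) ≈ 0.925 and 1 bit; the Taylor approximation in eps derived in the paper refines such estimates analytically.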

  12. Conditioned real self-similar Markov processes

    OpenAIRE

    Kyprianou, Andreas E.; Rivero, Víctor M.; Satitkanitkul, Weerapat

    2015-01-01

    In recent work, Chaumont et al. [9] showed that it is possible to condition a stable process with index $\alpha \in (1,2)$ to avoid the origin. Specifically, they describe a new Markov process which is the Doob h-transform of a stable process and which arises from a limiting procedure in which the stable process is conditioned to have avoided the origin at later and later times. A stable process is a particular example of a real self-similar Markov process (rssMp) and we develop the idea of su...

  13. Markov processes an introduction for physical scientists

    CERN Document Server

    Gillespie, Daniel T

    1991-01-01

    Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level. Key Features: * A self-contained, pragmatic exposition of the needed elements of random variable theory * Logically integrated derivations of the Chapman-Kolmogorov e...

  14. Continuity Properties of Distances for Markov Processes

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand

    2014-01-01

    In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...

  15. A Metrized Duality Theorem for Markov Processes

    DEFF Research Database (Denmark)

    Kozen, Dexter; Mardare, Radu Iulian; Panangaden, Prakash

    2014-01-01

    We extend our previous duality theorem for Markov processes by equipping the processes with a pseudometric and the algebras with a notion of metric diameter. We are able to show that the isomorphisms of our previous duality theorem become isometries in this quantitative setting. This opens the way...

  16. Markov decision processes in artificial intelligence

    CERN Document Server

    Sigaud, Olivier

    2013-01-01

    Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr
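
    The "planning in MDPs" material mentioned above rests on the Bellman optimality backup. A minimal sketch of value iteration on a toy MDP (the model and all numbers are invented for illustration, not taken from the book):

```python
import numpy as np

# Toy MDP: 2 states, 2 actions; P[a][s, s'] = transition probability,
# R[a][s] = expected immediate reward for taking action a in state s.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.0, 1.0]]])  # action 1
R = np.array([[1.0, 0.0],
              [2.0, -1.0]])
gamma = 0.9                               # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: Q(a, s) = R(a, s) + gamma * sum_s' P[a][s, s'] V(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-12:  # converged to the fixed point
        break
    V = V_new

policy = Q.argmax(axis=0)                 # greedy (optimal) action per state
```

    Because the backup is a gamma-contraction, the iteration converges geometrically to the unique optimal value function; the greedy policy read off at the fixed point is optimal.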

  17. Nonlinearly perturbed semi-Markov processes

    CERN Document Server

    Silvestrov, Dmitrii

    2017-01-01

    The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...

  18. Generalizing Markov Decision Processes to Imprecise Probabilities

    Czech Academy of Sciences Publication Activity Database

    Harmanec, David

    2002-01-01

    Roč. 105, - (2002), s. 199-213 ISSN 0378-3758 Grant - others: Ministry of Education (SG) RP960351 Institutional research plan: AV0Z1030915 Keywords: generalized Markov decision process * sequential decision making * interval utilities Subject RIV: BA - General Mathematics Impact factor: 0.385, year: 2002

  19. Operational Markov Condition for Quantum Processes

    Science.gov (United States)

    Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan

    2018-01-01

    We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process or the experimental falsifiability of a Markovian hypothesis.

  20. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
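The operational-time idea can be made concrete with a small sketch (our toy example, not the authors' estimator; the two-state chain, the rates, and the clock h(t) = t^1.5 are all hypothetical):

```python
import numpy as np

# A two-state nonhomogeneous Markov process whose intensities scale by a
# common function r(t) becomes homogeneous on the operational time
# u = h(t) = integral of r(s) ds from 0 to t.

def two_state_tpm(q01, q10, u):
    """Transition probability matrix of the homogeneous two-state chain
    after operational time u (closed form, no matrix exponential needed)."""
    s = q01 + q10
    e = np.exp(-s * u)
    pi0, pi1 = q10 / s, q01 / s          # stationary distribution
    return np.array([[pi0 + pi1 * e, pi1 * (1 - e)],
                     [pi0 * (1 - e), pi1 + pi0 * e]])

def nonhomogeneous_tpm(q01, q10, h, t_start, t_end):
    """TPM over the calendar interval [t_start, t_end] for intensities
    q01*r(t), q10*r(t): evaluate the homogeneous TPM at the elapsed
    operational time h(t_end) - h(t_start)."""
    return two_state_tpm(q01, q10, h(t_end) - h(t_start))

# Hypothetical operational clock h(t) = t**1.5 (time-accelerating process).
h = lambda t: t ** 1.5
P = nonhomogeneous_tpm(0.4, 0.8, h, 1.0, 2.0)
```

Because the process is homogeneous on the operational scale, the transition matrix over any calendar interval depends only on the elapsed operational time, which is what makes uneven panel visits tractable.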

  1. Exact solution of the hidden Markov processes

    Science.gov (United States)

    Saakian, David B.

    2017-11-01

We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because the model can actually be considered as a generalized random walk on a one-dimensional strip. While we give the solution for two second-order matrices, it can easily be generalized to L values of the Markov process and M values of observables: we should be able to solve a system of L functional equations in the space of dimension M - 1.

  2. Markov State Model of Ion Assembling Process.

    Science.gov (United States)

    Shevchuk, Roman

    2016-05-12

We study the process of ion assembly in aqueous solution by means of molecular dynamics. In this article, we present a method to study many-particle assembly using the Markov state model formalism. We observed that at NaCl concentrations higher than 1.49 mol/kg, the system tends to form a big ionic cluster composed of roughly 70-90% of the total number of ions. Using Markov state models, we estimated the average time needed for the system to make a transition from a disordered state to a state with a big ionic cluster. Our results suggest that the characteristic time to form an ionic cluster is a negative exponential function of the salt concentration. Moreover, we defined and analyzed three different kinetic states of a single ion particle. These states correspond to different particle locations during the nucleation process.
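A minimal Markov-state-model computation of the kind the abstract describes can be sketched as follows (our illustration on a made-up three-state toy system, not the paper's molecular-dynamics pipeline):

```python
import numpy as np

# Estimate a transition matrix from a discrete state trajectory, then get
# the mean first passage time (MFPT) into a target state by solving
# (I - T_sub) m = 1 over the non-target states.

def estimate_tpm(traj, n_states, lag=1):
    """Row-normalised transition counts at the given lag time."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        C[a, b] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def mfpt(T, target):
    """Expected number of steps to first reach `target` from each state."""
    keep = [i for i in range(len(T)) if i != target]
    T_sub = T[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - T_sub, np.ones(len(keep)))
    return dict(zip(keep, m))

# Toy states: 0 (dispersed), 1 (partial assembly), 2 (big cluster).
rng = np.random.default_rng(0)
T_true = np.array([[0.8, 0.2, 0.0],
                   [0.3, 0.4, 0.3],
                   [0.0, 0.1, 0.9]])
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(3, p=T_true[traj[-1]]))

T_hat = estimate_tpm(traj, 3)
times = mfpt(T_hat, target=2)   # expected steps to reach the cluster state
```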

  3. Dynamical fluctuations for semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel; Wynants, B.

    2009-01-01

    Roč. 42, č. 36 (2009), 365002/1-365002/21 ISSN 1751-8113 R&D Projects: GA ČR GC202/07/J051 Institutional research plan: CEZ:AV0Z10100520 Keywords : nonequilibrium fluctuations * semi-Markov processes Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.577, year: 2009 http://www.iop.org/EJ/abstract/1751-8121/42/36/365002

  4. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Science.gov (United States)

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  5. Hybrid Discrete-Continuous Markov Decision Processes

    Science.gov (United States)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  6. Continuously monitored barrier options under Markov processes

    NARCIS (Netherlands)

    Mijatović, A.; Pistorius, M.

    2011-01-01

    In this paper, we present an algorithm for pricing barrier options in one-dimensional Markov models. The approach rests on the construction of an approximating continuous-time Markov chain that closely follows the dynamics of the given Markov model. We illustrate the method by implementing it for a

  7. A Markov Process Inspired Cellular Automata Model of Road Traffic

    OpenAIRE

    Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng

    2008-01-01

To provide a more accurate description of the driving behaviors in vehicle queues, a model named the Markov-Gap cellular automata model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process gives the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize i...

  8. Limit theorems for Markov-modulated and reflected diffusion processes

    NARCIS (Netherlands)

    Huang, G.

    2015-01-01

    In this thesis, asymptotic properties of two variants of one-dimensional diffusion processes, which are Markov-modulated and reflected Ornstein-Uhlenbeck processes, are studied. Besides the random term of the Brownian motion, the Markov-modulated diffusion process evolves in an extra random

  9. Traffic generated by a semi-Markov additive process

    NARCIS (Netherlands)

    J.G. Blom (Joke); M.R.H. Mandjes (Michel)

    2011-01-01

We consider a semi-Markov additive process $A(\cdot)$, i.e., a Markov additive process for which the sojourn times in the various states have general (rather than exponential) distributions. Letting the Lévy processes $X_i(\cdot)$, which describe the evolution of $A(\cdot)$ while

  10. Transition-Independent Decentralized Markov Decision Processes

    Science.gov (United States)

    Becker, Raphen; Silberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)

    2003-01-01

There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to optimally solve a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.

  11. Derivation of Markov processes that violate detailed balance

    Science.gov (United States)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
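The detailed-balance condition at the heart of this argument is easy to state and check numerically. The sketch below (our toy matrices, not the paper's models) contrasts a driven cyclic chain, which has a stationary law but a nonzero net probability flux, with a reversible birth-death chain:

```python
import numpy as np

# A stationary distribution pi of transition matrix T satisfies detailed
# balance iff pi_i * T[i, j] == pi_j * T[j, i] for all i, j.

def stationary(T):
    """Left eigenvector of T for eigenvalue 1, normalised to sum to 1."""
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def detailed_balance_violation(T):
    pi = stationary(T)
    F = pi[:, None] * T             # probability flux matrix pi_i * T_ij
    return np.max(np.abs(F - F.T))  # zero iff detailed balance holds

# Biased cycle 0 -> 1 -> 2 -> 0: uniform stationary law, nonzero net flux.
cycle = np.array([[0.0, 0.9, 0.1],
                  [0.1, 0.0, 0.9],
                  [0.9, 0.1, 0.0]])

# Reversible birth-death chain: detailed balance holds.
bd = np.array([[0.50, 0.50, 0.00],
               [0.25, 0.50, 0.25],
               [0.00, 0.50, 0.50]])
```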

  12. A comparison of time-homogeneous Markov chain and Markov process multi-state models.

    Science.gov (United States)

    Wan, Lijie; Lou, Wenjie; Abner, Erin; Kryscio, Richard J

    2016-01-01

    Time-homogeneous Markov models are widely used tools for analyzing longitudinal data about the progression of a chronic disease over time. There are advantages to modeling the true disease progression as a discrete time stationary Markov chain. However, one limitation of this method is its inability to handle uneven follow-up assessments or skipped visits. A continuous time version of a homogeneous Markov process multi-state model could be an alternative approach. In this article, we conduct comparisons of these two methods for unevenly spaced observations. Simulations compare the performance of the two methods and two applications illustrate the results.
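The practical difference can be sketched in a few lines (a hypothetical illness-death generator of ours, not the article's fitted models): a continuous-time homogeneous model yields a transition matrix over any observation gap dt via the matrix exponential, so uneven or skipped visits pose no difficulty.

```python
import numpy as np

def expm_via_eig(Q, t):
    """Matrix exponential exp(Q*t) through eigendecomposition
    (adequate for small, diagonalisable generators)."""
    w, V = np.linalg.eig(Q)
    return np.real(V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V))

# 3-state illness-death style generator (rows sum to zero).
Q = np.array([[-0.3, 0.25, 0.05],
              [ 0.1, -0.4, 0.30],
              [ 0.0,  0.0, 0.00]])   # state 2 absorbing (e.g. death)

P_short = expm_via_eig(Q, 0.5)   # a visit half a time unit later
P_long = expm_via_eig(Q, 3.7)    # a skipped visit is just a longer gap
```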

  13. Pathwise duals of monotone and additive Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    -, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf

  14. Markov processes from K. Ito's perspective (AM-155)

    CERN Document Server

    Stroock, Daniel W

    2003-01-01

Kiyosi Itô's greatest contribution to probability theory may be his introduction of stochastic differential equations to explain the Kolmogorov-Feller theory of Markov processes. Starting with the geometric ideas that guided him, this book gives an account of Itô's program. The modern theory of Markov processes was initiated by A. N. Kolmogorov. However, Kolmogorov's approach was too analytic to reveal the probabilistic foundations on which it rests. In particular, it hides the central role played by the simplest Markov processes: those with independent, identically distributed increments

  15. Performability analysis using semi-Markov reward processes

    Science.gov (United States)

    Ciardo, Gianfranco; Marie, Raymond A.; Sericola, Bruno; Trivedi, Kishor S.

    1990-01-01

    Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The method is generalized to a semi-Markov reward process by removing the restriction requiring the association of zero reward to absorbing states only. The algorithm proceeds by replacing zero-reward nonabsorbing states by a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.
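The zero-reward state elimination that such methods rely on can be sketched for a discrete chain (our toy numbers, not Beaudry's full construction, which also tracks accumulated reward distributions): the eliminated state becomes a probabilistic switch routing its incoming probability to its successors, and expected accumulated reward to absorption is unchanged because the state contributed none.

```python
import numpy as np

def eliminate_state(P, s):
    """Remove state s, redirecting its probability:
    P'[i, j] = P[i, j] + P[i, s] * P[s, j] / (1 - P[s, s])."""
    keep = [i for i in range(len(P)) if i != s]
    out = P[s, keep] / (1.0 - P[s, s])          # renormalised exits of s
    return P[np.ix_(keep, keep)] + np.outer(P[keep, s], out), keep

def expected_total_reward(P, r, absorbing):
    """Expected reward accumulated before absorption, one r_i per visit."""
    trans = [i for i in range(len(P)) if i not in absorbing]
    Q = P[np.ix_(trans, trans)]
    v = np.linalg.solve(np.eye(len(trans)) - Q, r[trans])
    return dict(zip(trans, v))

P = np.array([[0.2, 0.3, 0.3, 0.2],
              [0.1, 0.1, 0.5, 0.3],
              [0.4, 0.2, 0.1, 0.3],
              [0.0, 0.0, 0.0, 1.0]])   # state 3 absorbing
r = np.array([2.0, 0.0, 1.0, 0.0])     # state 1 earns nothing

before = expected_total_reward(P, r, absorbing={3})
P2, keep = eliminate_state(P, s=1)
after = expected_total_reward(P2, r[keep], absorbing={2})  # old state 3 is new index 2
```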

  16. Continuous-time Markov decision processes theory and applications

    CERN Document Server

    Guo, Xianping

    2009-01-01

    This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.

  17. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  18. Markov or non-Markov property of $cM-X$ processes

    OpenAIRE

    MATSUMOTO, Hiroyuki; OGURA, Yukio

    2004-01-01

For a Brownian motion with constant drift $X$ and its maximum process $M$, both $M-X$ and $2M-X$ are diffusion processes by the extensions of Lévy's and Pitman's theorems. We show that $cM-X$ is not a Markov process if $c \in \mathbb{R} \setminus \{0,1,2\}$. We also give other elementary proofs of Lévy's and Pitman's theorems.
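Both classical theorems invoked here are easy to probe numerically in the driftless case (a Monte Carlo sketch of ours): Lévy's theorem makes M - X a reflected Brownian motion, so E[(M-X)_t^2] = t, while Pitman's theorem makes 2M - X a three-dimensional Bessel process, so E[(2M-X)_t^2] = 3t.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, t_end = 10000, 500, 1.0
dt = t_end / n_steps

# Simulate Brownian paths on [0, 1] and their running maxima.
X = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
M = np.maximum(np.maximum.accumulate(X, axis=1), 0.0)  # max includes X_0 = 0

levy_moment = np.mean((M[:, -1] - X[:, -1]) ** 2)        # close to t = 1
pitman_moment = np.mean((2 * M[:, -1] - X[:, -1]) ** 2)  # close to 3t = 3
```

The small downward bias in both estimates comes from taking the maximum over a discrete grid rather than the continuous path.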

  19. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    Science.gov (United States)

    English, Thomas

    2005-01-01

A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with a fault tree potentially existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
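The first idea, non-exponential sojourn times taking the model beyond a plain Markov chain, can be sketched with a toy three-state reliability model (our numbers and Weibull shapes, not NASA-JSC's):

```python
import numpy as np

# Semi-Markov Monte Carlo: 3 states -- 0 operational, 1 degraded,
# 2 failed (absorbing). Sojourn times are Weibull rather than
# exponential, which is what makes the process semi-Markov.

rng = np.random.default_rng(42)
P = np.array([[0.0, 0.7, 0.3],     # embedded jump chain
              [0.4, 0.0, 0.6],
              [0.0, 0.0, 1.0]])
shape = {0: 1.5, 1: 0.8}           # Weibull shape per transient state

def time_to_failure():
    state, t = 0, 0.0
    while state != 2:
        t += rng.weibull(shape[state])   # non-exponential sojourn
        state = rng.choice(3, p=P[state])
    return t

samples = np.array([time_to_failure() for _ in range(5000)])
mean_ttf = samples.mean()   # estimated mean time to the failed end state
```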

  20. Quantum Markov processes and applications in many-body systems

    International Nuclear Information System (INIS)

    Temme, P. K.

    2010-01-01

    This thesis is concerned with the investigation of quantum as well as classical Markov processes and their application in the field of strongly correlated many-body systems. A Markov process is a special kind of stochastic process, which is determined by an evolution that is independent of its history and only depends on the current state of the system. The application of Markov processes has a long history in the field of statistical mechanics and classical many-body theory. Not only are Markov processes used to describe the dynamics of stochastic systems, but they predominantly also serve as a practical method that allows for the computation of fundamental properties of complex many-body systems by means of probabilistic algorithms. The aim of this thesis is to investigate the properties of quantum Markov processes, i.e. Markov processes taking place in a quantum mechanical state space, and to gain a better insight into complex many-body systems by means thereof. Moreover, we formulate a novel quantum algorithm which allows for the computation of the thermal and ground states of quantum many-body systems. After a brief introduction to quantum Markov processes we turn to an investigation of their convergence properties. We find bounds on the convergence rate of the quantum process by generalizing geometric bounds found for classical processes. We generalize a distance measure that serves as the basis for our investigations, the chi-square divergence, to non-commuting probability spaces. This divergence allows for a convenient generalization of the detailed balance condition to quantum processes. We then devise the quantum algorithm that can be seen as the natural generalization of the ubiquitous Metropolis algorithm to simulate quantum many-body Hamiltonians. 
By this we intend to provide further evidence, that a quantum computer can serve as a fully-fledged quantum simulator, which is not only capable of describing the dynamical evolution of quantum systems, but

  1. An approximation approach for the deviation matrix of continuous-time Markov processes with application to Markov decision theory

    NARCIS (Netherlands)

    Heidergott, B.F.; Hordijk, A.; Leder, N.

    2010-01-01

    We present an update formula that allows the expression of the deviation matrix of a continuous-time Markov process with denumerable state space having generator matrix Q* through a continuous-time Markov process with generator matrix Q. We show that under suitable stability conditions the algorithm

  2. Nonlinear Markov Control Processes and Games

    Science.gov (United States)

    2012-11-15

ordinary differential equation (ODE) with the specific feature of preserving positivity. This feature distinguishes it from a general Banach space valued...variable (but not vanishing) number of players is the union $\hat{X} = \cup_{j=1}^{\infty} X^j$. We denote by $C_{sym}(X^N)$ the Banach spaces of symmetric (with respect to...$N$, for the Banach space of $k$ times continuously differentiable functions in the interior of $\Omega \subset \mathbb{R}^d$ with $f$ and all its derivatives up to and including

  3. Convergence in distribution for filtering processes associated to Hidden Markov Models with densities

    OpenAIRE

    Kaijser, Thomas

    2013-01-01

A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical, theoretical problem is to find conditions which imply that the distributions of the filter...

  4. Critical age-dependent branching Markov processes and their ...

    Indian Academy of Sciences (India)

    This paper studies: (i) the long-time behaviour of the empirical distribution of age and normalized position of an age-dependent critical branching Markov process conditioned on non-extinction; and (ii) the super-process limit of a sequence of age-dependent critical branching Brownian motions.

  5. Reachability in continuous-time Markov reward decision processes

    NARCIS (Netherlands)

    Baier, Christel; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.; Flum, J.; Graedel, E.; Wilke, Th.

    Continuous-time Markov decision processes (CTMDPs) are widely used for the control of queueing systems, epidemic and manufacturing processes. Various results on optimal schedulers for discounted and average reward optimality criteria in CTMDPs are known, but the typical game-theoretic winning

  7. Markov and non-Markov processes in complex systems by the dynamical information entropy

    Science.gov (United States)

    Yulmetyev, R. M.; Gafarov, F. M.

    1999-12-01

We consider Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of two mutually dependent channels of entropy, alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium, psychology (short-term numeral and pattern human memory and the effect of stress on the dynamical tapping test), random dynamics of RR intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system), and chaotic dynamics of the parameters of financial markets and ecological systems.

  8. Lectures from Markov processes to Brownian motion

    CERN Document Server

    Chung, Kai Lai

    1982-01-01

This book evolved from several stacks of lecture notes written over a decade and given in classes at slightly varying levels. In transforming the overlapping material into a book, I aimed at presenting some of the best features of the subject with a minimum of prerequisites and technicalities. (Needless to say, one man's technicality is another's professionalism.) But a text frozen in print does not allow for the latitude of the classroom; and the tendency to expand becomes harder to curb without the constraints of time and audience. The result is that this volume contains more topics and details than I had intended, but I hope the forest is still visible with the trees. The book begins at the beginning with the Markov property, followed quickly by the introduction of optional times and martingales. These three topics in the discrete parameter setting are fully discussed in my book A Course in Probability Theory (second edition, Academic Press, 1974). The latter will be referred to throughout this book ...

  9. Hidden Markov processes theory and applications to biology

    CERN Document Server

    Vidyasagar, M

    2014-01-01

    This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. The book starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are t

  10. Envelopes of Sets of Measures, Tightness, and Markov Control Processes

    International Nuclear Information System (INIS)

    Gonzalez-Hernandez, J.; Hernandez-Lerma, O.

    1999-01-01

    We introduce upper and lower envelopes for sets of measures on an arbitrary topological space, which are then used to give a tightness criterion. These concepts are applied to show the existence of optimal policies for a class of Markov control processes

  11. Elements of the theory of Markov processes and their applications

    CERN Document Server

    Bharucha-Reid, A T

    2010-01-01

This graduate-level text and reference in probability, with numerous applications to several fields of science, presents a non-measure-theoretic introduction to the theory of Markov processes. The work also covers mathematical models based on the theory, employed in various applied fields. Prerequisites are a knowledge of elementary probability theory, mathematical statistics, and analysis. Appendixes. Bibliographies. 1960 edition.

  12. Cascade probabilistic function and the Markov's processes. Chapter 1

    International Nuclear Information System (INIS)

    2002-01-01

In Chapter 1 the physical and mathematical descriptions of radiation processes are carried out. The relation of the cascade probabilistic functions (CPF) for electrons, protons, alpha particles and ions to Markov chains is shown. The algorithms for CPF calculation, accounting for energy losses, are given

  13. Application of Markov Processes to the Concept of State.

    Science.gov (United States)

    Freedle, Roy; Lewis, Michael

The purpose of this paper is to outline some applications of the Markov process to the study of state and state changes. The essence of this mathematical concept consists of the analysis of sequences of infant responses in interaction with the environment. Categories can be defined which reflect the joint occurrence of an infant's behavior (or…

  14. Inferring parental genomic ancestries using pooled semi-Markov processes.

    Science.gov (United States)

    Zou, James Y; Halperin, Eran; Burchard, Esteban; Sankararaman, Sriram

    2015-06-15

A basic problem of broad public and scientific interest is to use the DNA of an individual to infer the genomic ancestries of the parents. In particular, we are often interested in the fraction of each parent's genome that comes from specific ancestries (e.g. European, African, Native American, etc). This has many applications ranging from understanding the inheritance of ancestry-related risks and traits to quantifying human assortative mating patterns. We model the problem of parental genomic ancestry inference as a pooled semi-Markov process. We develop a general mathematical framework for pooled semi-Markov processes and construct efficient inference algorithms for these models. Applying our inference algorithm to genotype data from 231 Mexican trios and 258 Puerto Rican trios where we have the true genomic ancestry of each parent, we demonstrate that our method accurately infers parameters of the semi-Markov processes and parents' genomic ancestries. We additionally validated the method on simulations. Our model of pooled semi-Markov process and inference algorithms may be of independent interest in other settings in genomics and machine learning.

  15. On characterisation of Markov processes via martingale problems

    Indian Academy of Sciences (India)

    This extension is used to improve on a criterion for a probability measure to be invariant for the semigroup associated with the Markov process. We also give examples of martingale problems that are well-posed in the class of solutions which are continuous in probability but for which no r.c.l.l. solution exists.

  16. Synthesis for PCTL in Parametric Markov Decision Processes

    DEFF Research Database (Denmark)

    Hahn, Ernst Moritz; Han, Tingting; Zhang, Lijun

    2011-01-01

    In parametric Markov decision processes (PMDPs), transition probabilities are not fixed, but are given as functions over a set of parameters. A PMDP denotes a family of concrete MDPs. This paper studies the synthesis problem for PCTL in PMDPs: Given a specification Φ in PCTL, we synthesise...

  17. Markov LIMID processes for representing and solving renewal problems

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Kristensen, Anders Ringgaard; Nilsson, Dennis

    2014-01-01

    In this paper a new tool for simultaneous optimisation of decisions on multiple time scales is presented. The tool combines the dynamic properties of Markov decision processes with the flexible and compact state space representation of LImited Memory Influence Diagrams (Limids). A temporal version...

  18. Delayed Nondeterminism in Continuous-Time Markov Decision Processes

    NARCIS (Netherlands)

    Neuhausser, M.; Stoelinga, Mariëlle Ida Antoinette; Katoen, Joost P.

    2009-01-01

    Schedulers in randomly timed games can be classified as to whether they use timing information or not. We consider continuous-time Markov decision processes (CTMDPs) and define a hierarchy of positional (P) and history-dependent (H) schedulers which induce strictly tighter bounds on quantitative

  20. Safety Verification of Piecewise-Deterministic Markov Processes

    DEFF Research Database (Denmark)

    Wisniewski, Rafael; Sloth, Christoffer; Bujorianu, Manuela

    2016-01-01

    We consider the safety problem of piecewise-deterministic Markov processes (PDMP). These are systems that have deterministic dynamics and stochastic jumps, where both the time and the destination of the jumps are stochastic. Specifically, we solve a p-safety problem, where we identify the set...

  1. Critical age-dependent branching Markov processes and their ...

    Indian Academy of Sciences (India)

    limiting distribution, called the Yaglom-limit (see [5] or [17]). The study of the size, age and location spread of such population is of interest. Limit theorems for critical branching Markov processes where the motion depends on the age does not seem to have been considered in the literature before. These are addressed.

  2. Multiresolution Hilbert Approach to Multidimensional Gauss-Markov Processes

    Directory of Open Access Journals (Sweden)

    Thibaud Taillefumier

    2011-01-01

The study of multidimensional stochastic processes involves complex computations in intricate functional spaces. In particular, diffusion processes, which include the practically important Gauss-Markov processes, are ordinarily defined through the theory of stochastic integration. Here, inspired by the Lévy-Ciesielski construction of the Wiener process, we propose an alternative representation of multidimensional Gauss-Markov processes as expansions on well-chosen Schauder bases, with independent random coefficients of normal law with zero mean and unit variance. We thereby offer a natural multiresolution description of Gauss-Markov processes as limits of finite-dimensional partial sums of the expansion that are strongly almost surely convergent. Moreover, such finite-dimensional random processes constitute an optimal approximation of the process, in the sense of minimizing the associated Dirichlet energy under interpolating constraints. This approach allows for a simpler treatment of problems in many applied and theoretical fields, and we provide a short overview of applications we are currently developing.
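The Lévy-Ciesielski construction that inspires this representation can be sketched in its classical one-dimensional form (our minimal version for the Wiener process on [0, 1]; the paper's contribution is the generalisation to multidimensional Gauss-Markov processes):

```python
import numpy as np

def schauder(t, n, k):
    """Schauder (integrated Haar) tent function at dyadic level n, shift k."""
    h = 2.0 ** (-(n + 1))                 # half-width of the tent support
    centre = (2 * k + 1) * h
    return 2.0 ** (n / 2) * np.maximum(h - np.abs(t - centre), 0.0)

def wiener_partial_sum(t, levels, rng):
    """Partial sum of the Levy-Ciesielski expansion of W on [0, 1],
    with i.i.d. standard normal coefficients."""
    W = rng.standard_normal() * t         # coefficient of the basis element 1
    for n in range(levels):
        for k in range(2 ** n):
            W = W + rng.standard_normal() * schauder(t, n, k)
    return W

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 257)
path = wiener_partial_sum(t, levels=8, rng=rng)   # one approximate Brownian path
```

By Parseval's identity the partial sums have Var W(t) converging to t, the Brownian variance; at dyadic points of low level the truncated variance is already exact.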

  3. MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes

    Science.gov (United States)

    Williams, B.K.

    1988-01-01

    Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
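Value improvement for a discounted finite MDP, one of the two techniques the abstract mentions, can be sketched as follows. The two-state "population" example is purely illustrative and is not the paper's mallard model:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-9):
    """Finite-state, finite-action, infinite-horizon discounted MDP.
    P[a] is the S x S transition matrix under action a, R[a] the
    length-S expected-reward vector. Returns (V*, greedy policy)."""
    V = np.zeros(P[0].shape[0])
    while True:
        # one-step lookahead: Q[a, s] = R[a][s] + gamma * E[V(s')]
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy example: states (low, high) population; actions 0 = rest, 1 = harvest.
P = [np.array([[0.4, 0.6], [0.1, 0.9]]),   # rest: population tends to grow
     np.array([[1.0, 0.0], [1.0, 0.0]])]   # harvest: population drops to low
R = [np.array([0.0, 0.0]),                  # resting yields nothing
     np.array([0.0, 1.0])]                  # harvesting in "high" yields 1
V, policy = value_iteration(P, R)           # optimal: rest in low, harvest in high
```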

  4. First passage of time-reversible spectrally negative Markov additive processes

    NARCIS (Netherlands)

    Ivanovs, J.; Mandjes, M.

    2010-01-01

    We study the first passage process of a spectrally negative Markov additive process (MAP). The focus is on the background Markov chain at the times of the first passage. This process is a Markov chain itself with a transition rate matrix Λ. Assuming time reversibility, we show that all the

  5. Limits for density dependent time inhomogeneous Markov processes.

    Science.gov (United States)

    Smith, Andrew G

    2015-10-01

    A new functional law of large numbers is developed to approximate a time inhomogeneous Markov process that is only density dependent in the limit as an index parameter goes to infinity. This extends previous results by other authors to a broader class of Markov processes while relaxing some of the conditions required for those results to hold. The result is applied to a stochastic metapopulation model that accounts for spatial structure as well as within-patch dynamics, with the novel addition of time-dependent dynamics. The resulting nonautonomous differential equation is analysed to provide conditions for extinction and persistence in a number of examples. These conditions show that the migration of a species will positively impact reproduction in less populated areas while negatively impacting densely populated areas. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  6. Bisimulation on Markov Processes over Arbitrary Measurable Spaces

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2014-01-01

    We introduce a notion of bisimulation on labelled Markov Processes over generic measurable spaces in terms of arbitrary binary relations. Our notion of bisimulation is proven to coincide with the coalgebraic definition of Aczel and Mendler in terms of the Giry functor, which associates with a measurable space its collection of (sub)probability measures. This coalgebraic formulation allows one to relate the concepts of bisimulation and event bisimulation of Danos et al. (i.e., cocongruence) by means of a formal adjunction between the category of bisimulations and a (full sub...

  7. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    46 2010, č. 3 (2010), s. 558-570 ISSN 0023-5954. [ International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  8. Learning Representation and Control in Markov Decision Processes

    Science.gov (United States)

    2013-10-21

    turns out to be the soft-thresholding operator S_ρ(·), an entry-wise shrinkage operator: prox_h(x)_i = S_ρ(x_i) = max(x_i − ρ, 0) − max(−x_i − ρ, 0) ... clarify previously proposed algorithms [7, 53], which amount to particular instances of this general framework. Concretely, basis adaptation can be ... D. Bertsekas and H. Yu. Basis function adaptation methods for cost approximation in Markov Decision Processes. In IEEE International Symposium on

  9. Intra prediction based on Markov process modeling of images.

    Science.gov (United States)

    Kamisli, Fatih

    2013-10-01

    In recent video coding standards, intra prediction of a block of pixels is performed by copying the neighbor pixels of the block along an angular direction inside the block. Each block pixel is predicted from only one or a few directionally aligned neighbor pixels of the block. Although this is a computationally efficient approach, it ignores potentially useful correlation with other neighbor pixels of the block. To use this correlation, a general linear prediction approach can be taken, where each block pixel is predicted as a weighted sum of all neighbor pixels of the block. The disadvantage of this approach is the increased complexity due to the large number of weights. In this paper, we propose an alternative approach to intra prediction, where we model image pixels with a Markov process. The Markov process model accounts for the correlation ignored by standard intra prediction methods, but uses few neighbor pixels and enables a computationally efficient recursive prediction algorithm. Compared with the general linear prediction approach, which has a large number of independent weights, the Markov process modeling approach uses a much smaller number of independent parameters and thus offers significantly reduced memory or computation requirements, while achieving similar coding gains with offline-computed parameters.
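A separable first-order Markov image model yields the kind of recursive predictor the abstract describes: each pixel is predicted from its already-predicted up, left, and up-left neighbors. A minimal sketch, where the correlation coefficients rho_v and rho_h are assumed illustrative values, not the paper's trained parameters:

```python
import numpy as np

def markov_intra_predict(top, left, corner, rho_v=0.95, rho_h=0.95):
    """Recursive intra prediction of an N x N block from its reconstructed
    neighbors under a separable first-order Markov image model.
    top: length-N row above the block; left: length-N column to its left;
    corner: the pixel above-left of the block."""
    n = len(top)
    pred = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            up = top[j] if i == 0 else pred[i - 1, j]
            lf = left[i] if j == 0 else pred[i, j - 1]
            ul = (corner if i == 0 and j == 0
                  else top[j - 1] if i == 0
                  else left[i - 1] if j == 0
                  else pred[i - 1, j - 1])
            # separable AR(1)-style predictor in both directions
            pred[i, j] = rho_v * up + rho_h * lf - rho_v * rho_h * ul
    return pred
```

Only three neighbor values feed each prediction, yet information from all block neighbors propagates through the recursion, which is the complexity advantage the abstract points to.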

  10. Method of Coding Search Strings as Markov Processes Using a Higher Level Language.

    Science.gov (United States)

    Ghanti, Srinivas; Evans, John E.

    For much of the twentieth century, Markov theory and Markov processes have been widely accepted as valid ways to view statistical variables and parameters. In the complex realm of online searching, where researchers are always seeking the route to the best search strategies and the most powerful query terms and sequences, Markov process analysis…

  11. Semi adiabatic theory of seasonal Markov processes

    Energy Technology Data Exchange (ETDEWEB)

    Talkner, P. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    The dynamics of many natural and technical systems are essentially influenced by a periodic forcing. Analytic solutions of the equations of motion for periodically driven systems are generally not known. Simulations, numerical solutions or, in some limiting cases, approximate analytic solutions represent the known approaches to studying the dynamics of such systems. Besides the regime of weak periodic forces where linear response theory works, the limit of a slow driving force can often be treated analytically using an adiabatic approximation. For this approximation to hold, all intrinsic processes must be fast on the time scale of a period of the external driving force. We developed a perturbation theory for periodically driven Markovian systems that covers the adiabatic regime but also works if the system has a single slow mode that may even be slower than the driving force. We call it the semi adiabatic approximation. Some results of this approximation are indicated for a system exhibiting stochastic resonance, which usually takes place within the semi adiabatic regime. (author) 1 fig., 8 refs.

  12. Markov decision processes in natural resources management: observability and uncertainty

    Science.gov (United States)

    Williams, Byron K.

    2015-01-01

    The breadth and complexity of stochastic decision processes in natural resources present a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.

  13. Markov process models of the dynamics of HIV reservoirs.

    Science.gov (United States)

    Hawkins, Jane M

    2016-05-01

    While latently infected CD4+ T cells are extremely sparse, they are a reality that prevents HIV from being cured, and their dynamics are largely unknown. We begin with a two-state Markov process that models the outcomes of regular but infrequent blood tests for latently infected cells in an HIV positive patient under drug therapy. We then model the hidden dynamics of a latently infected CD4+ T cell in an HIV positive patient and show there is a limiting distribution, which indicates in which compartments the HIV typically can be found. Our model shows that the limiting distribution of latently infected cells reveals the presence of latency in every compartment with positive probability, supported by clinical data. We also show that the hidden Markov model determines the outcome of blood tests and analyze its connection to the blood test model. Copyright © 2016 Elsevier Inc. All rights reserved.
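The limiting distribution of a finite-state Markov chain of the kind described can be computed as the left eigenvector of the transition matrix for eigenvalue 1. A minimal two-state sketch with illustrative transition probabilities (not clinical estimates):

```python
import numpy as np

# Hypothetical two-state chain (0 = latent cells not detected in a blood
# test, 1 = detected); the probabilities are illustrative only.
P = np.array([[0.95, 0.05],
              [0.40, 0.60]])

def limiting_distribution(P):
    """Stationary row vector pi with pi @ P = pi, taken as the left
    eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

pi = limiting_distribution(P)   # approx [0.889, 0.111]
```

Every entry of pi is strictly positive here, mirroring the abstract's observation that latency appears in every compartment with positive probability.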

  14. Markov chains and decision processes for engineers and managers

    CERN Document Server

    Sheskin, Theodore J

    2010-01-01

    Markov Chain Structure and Models: Historical Note; States and Transitions; Model of the Weather; Random Walks; Estimating Transition Probabilities; Multiple-Step Transition Probabilities; State Probabilities after Multiple Steps; Classification of States; Markov Chain Structure; Markov Chain Models; Problems; References. Regular Markov Chains: Steady State Probabilities; First Passage to a Target State; Problems; References. Reducible Markov Chains: Canonical Form of the Transition Matrix; Th...

  15. Non-parametric Bayesian inference for inhomogeneous Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael

    is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way involves evaluating ratios of unknown normalising constants. We avoid this problem by applying a new auxiliary variable technique introduced by Møller, Pettitt, Reeves & Berthelsen (2006). In the present setting the auxiliary variable used is an example of a partially ordered Markov point process model.

  16. Performance evaluation:= (process algebra + model checking) x Markov chains

    NARCIS (Netherlands)

    Hermanns, H.; Larsen, K.G.; Nielsen, Mogens; Katoen, Joost P.

    2001-01-01

    Markov chains are widely used in practice to determine system performance and reliability characteristics. The vast majority of applications considers continuous-time Markov chains (CTMCs). This tutorial paper shows how successful model specification and analysis techniques from concurrency theory

  17. Fluctuations in Markov Processes Time Symmetry and Martingale Approximation

    CERN Document Server

    Komorowski, Tomasz; Olla, Stefano

    2012-01-01

    The present volume contains the most advanced theories on the martingale approach to central limit theorems. Using the time symmetry properties of the Markov processes, the book develops the techniques that allow us to deal with infinite dimensional models that appear in statistical mechanics and engineering (interacting particle systems, homogenization in random environments, and diffusion in turbulent flows, to mention just a few applications). The first part contains a detailed exposition of the method, and can be used as a text for graduate courses. The second concerns application to exclu

  18. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to calculate r analytically, so it is reasonable to estimate r by simulation. A consistent estimator r(n) of r is obtained for a chain with a countable state space. By suitably modifying the estimator r(n), one obtains a new consistent estimator with a smaller variance than r(n). The same is obtained in the case of a finite state space.

  19. Non-markovian limits of additive functionals of Markov processes

    OpenAIRE

    Jara, Milton; Komorowski, Tomasz

    2009-01-01

    In this paper we consider an additive functional of an observable $V(x)$ of a Markov jump process. We assume that the law of the expected jump time $t(x)$ under the invariant probability measure $\pi$ of the skeleton chain belongs to the domain of attraction of a subordinator. Then, the scaled limit of the functional is a Mittag-Leffler process, provided that $\Psi(x):=V(x)t(x)$ is square integrable w.r.t. $\pi$. When the law of $\Psi(x)$ belongs to a domain of attraction of a stable law the r...

  20. Chaos from nonlinear Markov processes: Why the whole is different from the sum of its parts

    Science.gov (United States)

    Frank, T. D.

    2009-10-01

    Nonlinear Markov processes have been frequently used to address bifurcations and multistability in equilibrium and non-equilibrium many-body systems. However, our understanding of the range of phenomena produced by nonlinear Markov processes is still in its infancy. We demonstrate that, in addition to bifurcations and multistability, nonlinear Markov processes can exhibit another key phenomenon well known in the realm of nonlinear physics: chaos. It is argued that chaotically evolving process probabilities are a generic feature of many-body systems exhibiting nonlinear Markov processes, even if the isolated subsystems do not exhibit chaos. That is, when a nonlinear Markov process is considered as an entity of its own type, it is in general qualitatively different from its constituent subprocesses, reflecting that the many-body system as a whole is different from the sum of its parts.

  1. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
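An MMPP alternates between Poisson regimes as a background Markov chain switches state. A minimal two-state simulation sketch, with illustrative rates rather than the fitted Bracknell values and without the covariate structure the paper studies:

```python
import numpy as np

def simulate_mmpp(q, lam, T, rng):
    """Simulate a 2-state Markov modulated Poisson process on [0, T].
    q[i]: rate of leaving modulating state i; lam[i]: Poisson arrival
    intensity while in state i. Returns the sorted arrival times."""
    t, state, arrivals = 0.0, 0, []
    while t < T:
        # holding time of the current modulating state
        end = min(t + rng.exponential(1.0 / q[state]), T)
        # arrivals on [t, end) form a homogeneous Poisson process
        s = t + rng.exponential(1.0 / lam[state])
        while s < end:
            arrivals.append(s)
            s += rng.exponential(1.0 / lam[state])
        t = end
        state = 1 - state      # switch modulating state
    return np.array(arrivals)

arrivals = simulate_mmpp(q=[1.0, 1.0], lam=[2.0, 10.0], T=100.0,
                         rng=np.random.default_rng(0))
```

The long-run arrival rate is the intensity averaged over the stationary distribution of the modulating chain; with symmetric switching rates this is (2 + 10) / 2 = 6 arrivals per unit time, while the counts are overdispersed relative to a plain Poisson process.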

  2. Accretion processes for general spherically symmetric compact objects

    International Nuclear Information System (INIS)

    Bahamonde, Sebastian; Jamil, Mubasher

    2015-01-01

    We investigate the accretion process for different spherically symmetric space-time geometries for a static fluid. We analyze this procedure using the most general black hole metric ansatz. After that, we examine the accretion process for specific spherically symmetric metrics obtaining the velocity of the sound during the process and the critical speed of the flow of the fluid around the black hole. In addition, we study the behavior of the rate of change of the mass for each chosen metric for a barotropic fluid. (orig.)

  3. The application of Markov decision process in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going, so traditional path planning algorithms perform poorly. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, based on the traditional Markov decision process. The algorithm first uses MDR to plan a global path and then navigates along it. When the sensor detects no obstruction ahead, the algorithm increases the immediate reward of that state; when the sensor detects an obstacle ahead, it plans a new global path that avoids the obstacle, taking the current position as the new starting point, and reduces the immediate reward of that state. This continues until the target is reached. After the robot has learned for a period of time, it can avoid places where obstacles are often present when planning its path. Simulation experiments show that the algorithm achieves good results for global path planning in a dynamic environment.

  4. Stem Cell Differentiation as a Non-Markov Stochastic Process.

    Science.gov (United States)

    Stumpf, Patrick S; Smith, Rosanna C G; Lenz, Michael; Schuppert, Andreas; Müller, Franz-Josef; Babtie, Ann; Chan, Thalia E; Stumpf, Michael P H; Please, Colin P; Howison, Sam D; Arai, Fumio; MacArthur, Ben D

    2017-09-27

    Pluripotent stem cells can self-renew in culture and differentiate along all somatic lineages in vivo. While much is known about the molecular basis of pluripotency, the mechanisms of differentiation remain unclear. Here, we profile individual mouse embryonic stem cells as they progress along the neuronal lineage. We observe that cells pass from the pluripotent state to the neuronal state via an intermediate epiblast-like state. However, analysis of the rate at which cells enter and exit these observed cell states using a hidden Markov model indicates the presence of a chain of unobserved molecular states that each cell transits through stochastically in sequence. This chain of hidden states allows individual cells to record their position on the differentiation trajectory, thereby encoding a simple form of cellular memory. We suggest a statistical mechanics interpretation of these results that distinguishes between functionally distinct cellular "macrostates" and functionally similar molecular "microstates" and propose a model of stem cell differentiation as a non-Markov stochastic process. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes.

    Science.gov (United States)

    Li, Degui; Li, Runze

    2016-09-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely used local polynomial method, and has been well studied for stationary time series. In this paper, we relax the stationarity restriction on the model, and allow the regressors to be generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate of the estimator in the nonstationary case is slower than in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory.

  6. Muon-catalysed fusion as a finite Markov process

    International Nuclear Information System (INIS)

    Van Siclen, C.DeW.

    1985-01-01

    By regarding muon catalysis of nuclear fusion in a mixture of hydrogen isotopes as a series of stochastic processes, Markov chain theory is used to derive several exact analytic equations relating the rates of the various reactions and the sticking coefficients for the fusion channels. These include expressions for the mean number of pd, dd, dt, tt and pt fusions per muon, the mean total number of fusions per muon and the muon cycling rate, which reduce to the corresponding well known expressions for catalysis in a deuterium-tritium mixture. Inclusion of the fusion reaction ddμ → pμ + t provides a particularly interesting complication, as this process gives rise to a catalysis cycle that may not return a free muon to the system. (author)
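Quantities such as the mean number of fusions per muon follow from standard absorbing-chain theory: with Q the transition matrix among transient states, the fundamental matrix N = (I - Q)^-1 gives expected visit counts before absorption. A toy sketch with a hypothetical sticking probability, not the paper's reaction rates or isotope mixture:

```python
import numpy as np

# Toy catalysis cycle as an absorbing Markov chain (hypothetical numbers).
# Transient states: 0 = free muon, 1 = muonic molecule about to fuse;
# absorbing state (left implicit): muon stuck to a fusion product.
omega = 0.006                        # assumed sticking probability per fusion
Q = np.array([[0.0, 1.0],            # free muon always forms a molecule
              [1.0 - omega, 0.0]])   # after fusion, muon freed w.p. 1 - omega

N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix: expected visits
mean_fusions = N[0, 1]               # expected fusions per muon = 1 / omega
```

In this two-state toy the answer reduces to 1/omega, but the same fundamental-matrix computation extends directly to chains with many hydrogen-isotope channels and cycles that may not return a free muon.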

  7. Embedding a State Space Model Into a Markov Decision Process

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren

    2011-01-01

    In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models estimated from data collected from the animal or herd. State space models (SSMs) are a general tool for modeling repeated measurements over time where the model parameters can evolve dynamically. In this paper we consider methods for embedding an SSM into an MDP with finite state and action space. Different ways of discretizing an SSM are discussed and methods for reducing the state space of the MDP are presented. An example from dairy production is given...

  8. Active Learning of Markov Decision Processes for System Verification

    DEFF Research Database (Denmark)

    Chen, Yingke; Nielsen, Thomas Dyhre

    2012-01-01

    Formal model verification has proven a powerful tool for verifying and validating the properties of a system. Central to this class of techniques is the construction of an accurate formal model for the system being investigated. Unfortunately, manual construction of such models can be a resource-demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently we seek to minimize the amount of data required. In this paper we propose an algorithm for learning...

  9. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). The MDP developed in this paper has some particular characteristics that distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.

  10. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  11. The Z-Transform Applied to Birth-Death Markov Processes ...

    African Journals Online (AJOL)

    Birth-death Markov models have been widely used in the study of natural and physical processes. The analysis of such processes, however, is mostly performed using time series analysis. In this report, a finite state birth‑death Markov process is analyzed using the z‑transform approach. The performance metrics of the ...
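For a finite birth-death chain the stationary distribution is available in closed form from detailed balance, a useful companion to the transform-based transient analysis the abstract describes. A minimal sketch (the M/M/1/3 queue below is an illustrative example, not the report's model):

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a finite birth-death chain with birth
    rates birth[k] (k -> k+1) and death rates death[k] (k+1 -> k), via
    detailed balance: pi[k+1] * death[k] = pi[k] * birth[k]."""
    pi = [1.0]
    for b, d in zip(birth, death):
        pi.append(pi[-1] * b / d)   # unnormalized product formula
    pi = np.asarray(pi)
    return pi / pi.sum()

# Example: M/M/1/3 queue, arrivals at rate 1, service at rate 2, capacity 3.
pi = birth_death_stationary([1.0, 1.0, 1.0], [2.0, 2.0, 2.0])
```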

  12. Simulation-based algorithms for Markov decision processes

    CERN Document Server

    Chang, Hyeong Soo; Fu, Michael C; Marcus, Steven I

    2013-01-01

    Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences.  Many real-world problems modeled by MDPs have huge state and/or action spaces, giving an opening to the curse of dimensionality and so making practical solution of the resulting models intractable.  In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function.  Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel ...

  13. First passage process of a Markov additive process, with applications to reflection problems

    NARCIS (Netherlands)

    B. D'Auria; J. Ivanovs; O. Kella; M.R.H. Mandjes (Michel)

    2009-01-01

    htmlabstractIn this paper we consider the first passage process of a spectrally negative Markov additive process (MAP). The law of this process is uniquely characterized by a certain matrix function, which plays a crucial role in fluctuation theory. We show how to identify this matrix using the

  14. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    Science.gov (United States)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independently, identically distributed.

  15. Bayesian inference for Markov jump processes with informative observations.

    Science.gov (United States)

    Golightly, Andrew; Wilkinson, Darren J

    2015-04-01

    In this paper we consider the problem of parameter inference for Markov jump process (MJP) representations of stochastic kinetic models. Since transition probabilities are intractable for most processes of interest yet forward simulation is straightforward, Bayesian inference typically proceeds through computationally intensive methods such as (particle) MCMC. Such methods ostensibly require the ability to simulate trajectories from the conditioned jump process. When observations are highly informative, use of the forward simulator is likely to be inefficient and may even preclude an exact (simulation based) analysis. We therefore propose three methods for improving the efficiency of simulating conditioned jump processes. A conditioned hazard is derived based on an approximation to the jump process, and used to generate end-point conditioned trajectories for use inside an importance sampling algorithm. We also adapt a recently proposed sequential Monte Carlo scheme to our problem. Essentially, trajectories are reweighted at a set of intermediate time points, with more weight assigned to trajectories that are consistent with the next observation. We consider two implementations of this approach, based on two continuous approximations of the MJP. We compare these constructs for a simple tractable jump process before using them to perform inference for a Lotka-Volterra system. The best performing construct is used to infer the parameters governing a simple model of motility regulation in Bacillus subtilis.

  16. $\beta$-mixing and moments properties of a non-stationary copula-based Markov process

    OpenAIRE

    Gobbi, Fabio; Mulinacci, Sabrina

    2017-01-01

    This paper provides conditions under which a non-stationary copula-based Markov process is $\beta$-mixing. We introduce, as a particular case, a convolution-based Gaussian Markov process which generalizes the standard random walk by allowing the increments to be dependent.

  17. On Determining the Order of Markov Dependence of an Observed Process Governed by a Hidden Markov Model

    OpenAIRE

    R.J. Boys; D.A. Henderson

    2002-01-01

    This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution for both the order of Markov dependence in the observed sequence and the othe...

  18. On the record process of time-reversible spectrally-negative Markov additive processes

    NARCIS (Netherlands)

    J. Ivanovs; M.R.H. Mandjes (Michel)

    2009-01-01

    We study the record process of a spectrally-negative Markov additive process (MAP). Assuming time-reversibility, a number of key quantities can be given explicitly. It is shown how these key quantities can be used when analyzing the distribution of the all-time maximum attained by MAPs.

  19. Efficient computation of time-bounded reachability probabilities in uniform continuous-time Markov decision processes

    NARCIS (Netherlands)

    Jensen, K; Baier, Christel; Haverkort, Boudewijn R.H.M.; Podelski, A.; Hermanns, H.; Katoen, Joost P.

    2004-01-01

    A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a

  20. Efficient computation of time-bounded reachability probabilities in uniform continuous-time Markov decision processes

    NARCIS (Netherlands)

    Baier, Christel; Hermanns, H.; Katoen, Joost P.; Haverkort, Boudewijn R.H.M.

    2005-01-01

    A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a

  1. Hidden Markov model using Dirichlet process for de-identification.

    Science.gov (United States)

    Chen, Tao; Cullen, Richard M; Godwin, Marshall

    2015-12-01

    For the 2014 i2b2/UTHealth de-identification challenge, we introduced a new non-parametric Bayesian hidden Markov model using a Dirichlet process (HMM-DP). The model intends to reduce task-specific feature engineering and to generalize well to new data. In the challenge we developed a variational method to learn the model and an efficient approximation algorithm for prediction. To accommodate out-of-vocabulary words, we designed a number of feature functions to model such words. The results show the model is capable of understanding local context cues to make correct predictions without manual feature engineering and performs as accurately as state-of-the-art conditional random field models in a number of categories. To incorporate long-range and cross-document context cues, we developed a skip-chain conditional random field model to align the results produced by HMM-DP, which further improved the performance. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Value Function and Optimal Rule on the Optimal Stopping Problem for Continuous-Time Markov Processes

    Directory of Open Access Journals (Sweden)

    Lu Ye

    2017-01-01

    This paper considers the optimal stopping problem for continuous-time Markov processes. We describe the methodology and solve the optimal stopping problem for a broad class of reward functions. Moreover, we illustrate the outcomes by some typical Markov processes including diffusion and Lévy processes with jumps. For each of the processes, the explicit formula for value function and optimal stopping time is derived. Furthermore, we relate the derived optimal rules to some other optimal problems.

  3. Composability of Markov Models for Processing Sensor Data

    NARCIS (Netherlands)

    Evers, S.

    2007-01-01

    We show that it is possible to apply the divide-and-conquer principle in constructing a Markov model for sensor data from available sensor logs. The state space can be partitioned into clusters, for which the required transition counts or probabilities can be acquired locally. The combination of

  4. Students' Progress throughout Examination Process as a Markov Chain

    Science.gov (United States)

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical methods in economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…

  5. Computer simulation of cascade probabilistic functions and their relation to Markov processes

    International Nuclear Information System (INIS)

    Kupchishin, A.A.; Kupchishin, A.I.; Shmygaleva, T.A.

    2002-01-01

    Within the framework of the cascade-probabilistic (CP) method, radiation and physical processes are studied and their relation to Markov processes is established. It is concluded that the CP-functions for electrons, protons, alpha-particles and ions are described by an inhomogeneous Markov chain. Algorithms are developed, and calculations of CP-functions for charged particles and of radiation defect concentrations in solids under ion irradiation are carried out. Tables for different CPF parameters and for radiation defect concentrations under charged-particle interaction with solids are given. The book consists of an introduction and two chapters: (1) Cascade probabilistic functions and Markov processes; (2) Radiation defect formation in solids as a Markov process. The book is intended for specialists in the mathematical simulation of radiation defects, solid state physics, elementary particle physics and applied mathematics.

  6. A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes

    OpenAIRE

    Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong

    2013-01-01

    We present a technique for speeding up the convergence of value iteration for partially observable Markov decisions processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithms. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...

  7. On Markov processes in the hadron-nuclear and nuclear-nuclear collisions at superhigh energies

    International Nuclear Information System (INIS)

    Lebedeva, A.A.; Rus'kin, V.I.

    2001-01-01

    The article discusses the possibility of using Markov processes as a method for simulating the mean characteristics of hadron-nuclear and nucleus-nuclear collisions at superhigh energies. The simple (hadron-nuclear collisions) and non-simple (nucleus-nuclear collisions) non-uniform Markov processes of output of a constant spectrum and absorption in a nucleus target with rapidity y are considered. Expressions allowing simulation of the different collision modes were obtained.

  8. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...

  9. High-order hidden Markov model for piecewise linear processes and applications to speech recognition.

    Science.gov (United States)

    Lee, Lee-Min; Jean, Fu-Rong

    2016-08-01

    The hidden Markov models have been widely applied to systems with sequential data. However, the conditional independence of the state outputs will limit the output of a hidden Markov model to be a piecewise constant random sequence, which is not a good approximation for many real processes. In this paper, a high-order hidden Markov model for piecewise linear processes is proposed to better approximate the behavior of a real process. A parameter estimation method based on the expectation-maximization algorithm was derived for the proposed model. Experiments on speech recognition of noisy Mandarin digits were conducted to examine the effectiveness of the proposed method. Experimental results show that the proposed method can reduce the recognition error rate compared to a baseline hidden Markov model.

  10. Compensating Operator and Weak Convergence of Semi-Markov Process to the Diffusion Process without Balance Condition

    Directory of Open Access Journals (Sweden)

    Igor V. Malyk

    2015-01-01

    Weak convergence of semi-Markov processes in the diffusive approximation scheme is studied in this paper. The problem is not new and has been studied in many papers using convergence of random processes. Unlike other studies, in this paper we use the concept of the compensating operator, which enables sufficient conditions for weak convergence to be obtained from conditions on the local characteristics of the output semi-Markov process.

  11. Limit Properties of Transition Functions of Continuous-Time Markov Branching Processes

    Directory of Open Access Journals (Sweden)

    Azam A. Imomov

    2014-01-01

    Consider the Markov branching process with continuous time. Our focus is on the limit properties of the transition functions of this process. Using a differential analogue of the Basic Lemma, we prove local limit theorems for all cases and observe invariant properties of the process under consideration.

  12. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
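The core computation behind solving an MDP of the kind this tutorial describes is value iteration on the Bellman optimality equation. A minimal sketch on an invented two-state, two-action model (the matrices below are illustrative, not the liver-transplantation model from the paper):

```python
import numpy as np

# Toy MDP: P[a][s][s'] = transition probabilities under action a,
# R[s][a] = immediate reward in state s for action a. Invented numbers.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.4, 0.6]]])   # action 1
R = np.array([[5.0, 10.0],                 # state 0: rewards for actions 0/1
              [-1.0, 2.0]])                # state 1: rewards for actions 0/1
gamma = 0.95                               # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator until convergence and
    return the optimal value function and a greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[a][s] = R[s][a] + gamma * sum_s' P[a][s][s'] * V[s']
        Q = R.T + gamma * np.array([P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

Because the Bellman operator is a gamma-contraction, the loop converges geometrically; this is the "embedded decision process" that distinguishes MDPs from the standard Markov simulation models compared in the paper.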

  13. The application of Markov decision process with penalty function in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going. The path produced by the traditional Markov decision process planning algorithm is not safe: the robot passes very close to tables and chairs. To solve this problem, this paper proposes a path planning algorithm based on a Markov decision process with a penalty term, called MDPPT, extending the traditional Markov decision process (MDP). In the MDP, if the delivery robot bumps into an obstacle, the reward it receives is just the reward of the current state; in the MDPPT, the reward additionally includes a negative constant penalty term. Simulation results show that the MDPPT algorithm plans a more secure path.

  14. Characterization of the marginal distributions of Markov processes used in dynamic reliability

    Directory of Open Access Journals (Sweden)

    2006-01-01

    In dynamic reliability, the evolution of a system is described by a piecewise deterministic Markov process (I_t, X_t)_{t≥0} with state space E × ℝ^d, where E is finite. The main result of the present paper is the characterization of the marginal distribution of the Markov process (I_t, X_t)_{t≥0} at time t as the unique solution of a set of explicit integro-differential equations, which can be seen as a weak form of the Chapman-Kolmogorov equation. Uniqueness is the difficult part of the result.

  15. Recurrent extensions of self-similar Markov processes and Cramér's condition II

    OpenAIRE

    Rivero, Víctor

    2007-01-01

    We prove that a positive self-similar Markov process $(X,\mathbb{P})$ that hits 0 in a finite time admits a self-similar recurrent extension that leaves 0 continuously if and only if the underlying Lévy process satisfies Cramér's condition.

  16. Unbounded-rate Markov decision processes : structural properties via a parametrisation approach

    NARCIS (Netherlands)

    Blok, H.

    2016-01-01

    This research is interested in optimal control of Markov decision processes (MDPs). Herein a key role is played by structural properties. Properties such as monotonicity and convexity help in finding the optimal policy. Value iteration is a tool to derive such properties in discrete time processes.

  17. Potts model based on a Markov process computation solves the community structure problem effectively.

    Science.gov (United States)

    Li, Hui-Jia; Wang, Yong; Wu, Ling-Yun; Zhang, Junhua; Zhang, Xiang-Sun

    2012-07-01

    The Potts model is a powerful tool to uncover community structure in complex networks. Here, we propose a framework to reveal the optimal number of communities and stability of network structure by quantitatively analyzing the dynamics of the Potts model. Specifically we model the community structure detection Potts procedure by a Markov process, which has a clear mathematical explanation. Then we show that the local uniform behavior of spin values across multiple timescales in the representation of the Markov variables could naturally reveal the network's hierarchical community structure. In addition, critical topological information regarding multivariate spin configuration could also be inferred from the spectral signatures of the Markov process. Finally an algorithm is developed to determine fuzzy communities based on the optimal number of communities and the stability across multiple timescales. The effectiveness and efficiency of our algorithm are theoretically analyzed as well as experimentally validated.

  18. [Markov process of vegetation cover change in arid area of northwest China based on FVC index].

    Science.gov (United States)

    Wang, Zhi; Chang, Shun-li; Shi, Qing-dong; Ma, Ke; Liang, Feng-chao

    2010-05-01

    Based on the fractional vegetation cover (FVC) data derived from 1982-2000 NOAA/AVHRR (National Oceanic and Atmospheric Administration / Advanced Very High Resolution Radiometer) images, the whole arid area of Northwest China was divided into three sub-areas, and the vegetation cover in each sub-area was classified by altitude. The Markov process of vegetation cover change was then analyzed and tested by calculating the limit probability for any two years and the continuous and interval mean transition matrices of vegetation cover change at 8 km x 8 km spatial resolution. By this method, the Markov process of vegetation cover change and its indicative significance were examined. The results showed that vegetation cover change in the study area was controlled by random processes and affected by long-term stable driving factors, and that the transitional change of vegetation cover was a multiple Markov process. Therefore, using only two terms of image data, whether successive or intervallic, a Markov process could not accurately estimate the trend of vegetation cover change. For the arid area of Northwest China, more than 10 years of successive data could basically reflect all the factors affecting regional vegetation cover change, and using long-term average transition matrix data could reliably simulate and predict vegetation cover change. Vegetation cover change is a long-term dynamic balance; once the balance is broken, establishing a new balance is a long process.

  19. Statistical Inference for Partially Observed Markov Processes via the R Package pomp

    Directory of Open Access Journals (Sweden)

    Aaron A. King

    2016-03-01

    Partially observed Markov process (POMP) models, also known as hidden Markov models or state space models, are ubiquitous tools for time series analysis. The R package pomp provides a very flexible framework for Monte Carlo statistical investigations using nonlinear, non-Gaussian POMP models. A range of modern statistical methods for POMP models have been implemented in this framework including sequential Monte Carlo, iterated filtering, particle Markov chain Monte Carlo, approximate Bayesian computation, maximum synthetic likelihood estimation, nonlinear forecasting, and trajectory matching. In this paper, we demonstrate the application of these methodologies using some simple toy problems. We also illustrate the specification of more complex POMP models, using a nonlinear epidemiological model with a discrete population, seasonality, and extra-demographic stochasticity. We discuss the specification of user-defined models and the development of additional methods within the programming environment provided by pomp.
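pomp itself is an R package, but the sequential Monte Carlo machinery it centres on can be sketched in any language. A minimal bootstrap particle filter in Python, for a hypothetical Gaussian random-walk model (the model and all parameters are invented for illustration, not taken from pomp):

```python
import math
import random

def bootstrap_filter(y_obs, n_particles, step, loglik, init, seed=0):
    """Generic bootstrap particle filter: propagate particles through the
    process model (`step`), weight them by the measurement density
    (`loglik`), and resample. Returns a log-likelihood estimate of y_obs."""
    rng = random.Random(seed)
    particles = [init(rng) for _ in range(n_particles)]
    total = 0.0
    for y in y_obs:
        particles = [step(x, rng) for x in particles]
        w = [math.exp(loglik(y, x)) for x in particles]
        mean_w = sum(w) / n_particles
        total += math.log(mean_w)          # running log-likelihood estimate
        # multinomial resampling proportional to the weights
        particles = rng.choices(particles, weights=w, k=n_particles)
    return total

# Toy POMP: Gaussian random walk observed with unit Gaussian noise.
step = lambda x, rng: x + rng.gauss(0.0, 1.0)
loglik = lambda y, x: -0.5 * (y - x) ** 2 - 0.5 * math.log(2 * math.pi)
init = lambda rng: rng.gauss(0.0, 1.0)
```

The same plug-in structure (user supplies `step`/`rprocess` and `loglik`/`dmeasure`) is the design idea behind pomp's framework for iterated filtering and particle MCMC.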

  20. Bisimulation and Logical Preservation for Continuous-Time Markov Decision Processes

    NARCIS (Netherlands)

    Neuhausser, M.; Katoen, Joost P.

    This paper introduces strong bisimulation for continuous-time Markov decision processes (CTMDPs), a stochastic model which allows for a nondeterministic choice between exponential distributions, and shows that bisimulation preserves the validity of CSL. To that end, we interpret the semantics of CSL

  1. POISSON REPRESENTATIONS OF BRANCHING MARKOV AND MEASURE-VALUED BRANCHING PROCESSES

    NARCIS (Netherlands)

    Kurtz, Thomas G.; Rodrigues, Eliane R.

    Representations of branching Markov processes and their measure-valued limits in terms of countable systems of particles are constructed for models with spatially varying birth and death rates. Each particle has a location and a "level," but unlike earlier constructions, the levels change with time.

  2. Data-based inference of generators for Markov jump processes using convex optimization

    NARCIS (Netherlands)

    D.T. Crommelin (Daan); E. Vanden-Eijnden (Eric)

    2009-01-01

    A variational approach to the estimation of generators for Markov jump processes from discretely sampled data is discussed and generalized. In this approach, one first calculates the spectrum of the discrete maximum likelihood estimator for the transition matrix consistent with

  3. Generalization of Faustmann's Formula for Stochastic Forest Growth and Prices with Markov Decision Process Models

    Science.gov (United States)

    Joseph Buongiorno

    2001-01-01

    Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probabilistic distributions. The...

  4. Counseling as a Stochastic Process: Fitting a Markov Chain Model to Initial Counseling Interviews

    Science.gov (United States)

    Lichtenberg, James W.; Hummel, Thomas J.

    1976-01-01

    The goodness of fit of a first-order Markov chain model to six counseling interviews was assessed by using chi-square tests of homogeneity and simulating sampling distributions of selected process characteristics against which the same characteristics in the actual interviews were compared. The model fit four of the interviews. Presented at AERA,…
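The first step in fitting such a model is the maximum-likelihood estimate of the first-order transition probabilities from the coded interview sequences. A minimal sketch; the two-symbol coding (C = counselor turn, L = client turn) and the sequences below are hypothetical, not the study's data:

```python
from collections import Counter

def transition_matrix(sequences, states):
    """Maximum-likelihood estimate of a first-order Markov chain's
    transition probabilities from observed state sequences: count
    consecutive pairs, then normalize each row."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    P = {}
    for a in states:
        row_total = sum(counts[(a, b)] for b in states)
        P[a] = {b: (counts[(a, b)] / row_total if row_total else 0.0)
                for b in states}
    return P

# Hypothetical coded interview turns.
seqs = ["CLCLCC", "CCLLCL", "LCCLCL"]
P = transition_matrix(seqs, "CL")
```

A chi-square test of homogeneity, as used in the paper, then compares such row estimates across interviews (or across time blocks) against the pooled transition counts.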

  5. The invariant measure of homogeneous Markov processes in the quarter-plane: Representation in geometric terms

    NARCIS (Netherlands)

    Chen, Y.; Boucherie, Richardus J.; Goseling, Jasper

    2011-01-01

    We consider the invariant measure of a homogeneous continuous-time Markov process in the quarter-plane. The basic solutions of the global balance equation are the geometric distributions. We first show that the invariant measure can not be a finite linear combination of basic geometric

  6. Spectral analysis of multi-dimensional self-similar Markov processes

    Science.gov (United States)

    Modarresi, N.; Rezakhah, S.

    2010-03-01

    In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ ℝ^+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete time scale invariant (DT-SI) process X(·) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ ℝ^+} is also Markov in the wide sense and provide a discrete time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T − 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.

  7. Spectral analysis of multi-dimensional self-similar Markov processes

    International Nuclear Information System (INIS)

    Modarresi, N; Rezakhah, S

    2010-01-01

    In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ ℝ^+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete time scale invariant (DT-SI) process X(·) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ ℝ^+} is also Markov in the wide sense and provide a discrete time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T − 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.

  8. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure times of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
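The role of the Weibull failure-time assumption, and of Monte Carlo validation, can be illustrated on a far simpler system than the NCCW model in the paper. A minimal sketch for a hypothetical 1-out-of-2 parallel pair of non-repairable components (all parameters invented):

```python
import math
import random

def weibull_sample(shape, scale, rng):
    """Draw a Weibull failure time via inverse-transform sampling."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def system_failure_prob(shape, scale, mission_time, n_sims=20000, seed=42):
    """Monte Carlo estimate of the probability that BOTH units of a
    parallel (1-out-of-2) non-repairable pair fail before mission_time,
    with independent Weibull failure times."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sims):
        t1 = weibull_sample(shape, scale, rng)
        t2 = weibull_sample(shape, scale, rng)
        if max(t1, t2) < mission_time:
            failures += 1
    return failures / n_sims
```

For this toy case the analytic answer is F(t)^2 with F(t) = 1 − exp(−(t/scale)^shape), so the simulation can be checked directly; the semi-Markov model in the paper is needed precisely when repairs and redundancy make such closed forms unavailable.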

  9. A necessary and sufficient condition for gelation of a reversible Markov process of polymerization

    CERN Document Server

    Han, D

    2003-01-01

    A reversible Markov process as a chemical polymerization model which permits the coagulation and fragmentation reactions is considered. We present a necessary and sufficient condition for the occurrence of gelation in the process. We show that a gelation transition may or may not occur, depending on the value of the fragmentation strength, and, in the case that gelation takes place, a critical value for the occurrence of the gelation and the mass of the gel can be determined in closed form.

  10. A Tutorial on Markov Renewal Theory Semi-Regenerative Processes and Their Applications.

    Science.gov (United States)

    1980-12-01

    have had the privilege of working with some very bright young men and women. To them is due credit for nearly everything in these notes. The errors... are not much different, but they are obviously different. Furthermore, they are converging to values less than the sample variances would indicate. This... use the special knowledge of that process. Nonetheless, the more general theory of Markov renewal processes often leads to new insights, new

  11. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time and time-frequency domain methods. However, the extracted original features are still high-dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and the CAO method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results prove the effectiveness of the methodology.

  12. A Correlated Random Effects Model for Non-homogeneous Markov Processes with Nonignorable Missingness.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2013-05-01

    Life history data arising in clusters with prespecified assessment time points for patients often feature incomplete data, since patients may choose to visit the clinic based on their needs. Markov process models provide a useful tool for describing disease progression in life history data. The literature mainly focuses on time-homogeneous processes. In this paper we develop methods to deal with non-homogeneous Markov processes with incomplete clustered life history data. A correlated random effects model is developed to deal with the nonignorable missingness, and a time transformation is employed to address the non-homogeneity in the transition model. Maximum likelihood estimation based on the Monte Carlo EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well in many situations. We also apply this method to an Alzheimer's disease study.

  13. Semi-markov model of processing requests to the cloud storage

    Directory of Open Access Journals (Sweden)

    Zamoryonov Mikhail

    2017-01-01

    This paper presents a semi-Markov model of a modular cloud storage system, an important part of computer-aided manufacturing, whose behaviour affects the functioning of the whole process. The residence times of the system in its states and the probabilities of transitions between states are determined, along with the stationary distribution of the embedded Markov chain. The residence times of the system in its states with allowance for repeated hits are determined by the theorem on the distribution functions of residence times with repeated hits. Using the trajectory method, the distribution function of the time to completely process a read request by such a system is determined. The expected time to completely process a read request obtained in this study is compared with the value obtained by a known formula from the literature for the expected residence time in a subset of the system states.

  14. Choice of the parameters of the CUSUM algorithms for parameter estimation in the Markov modulated Poisson process

    OpenAIRE

    Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

    2016-01-01

    The CUSUM algorithm for detecting chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to the characteristics of the process. A procedure for estimating the process parameters is described.
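The classical one-sided CUSUM statistic that such parameter-choice studies build on can be sketched as follows. This is the textbook Gaussian mean-shift version, not the Markov modulated Poisson setting of the paper; the data and threshold are invented:

```python
def cusum(samples, mean0, mean1, threshold):
    """One-sided CUSUM for a shift in mean from mean0 to mean1 (Gaussian
    log-likelihood ratio with unit variance). Returns the index of the
    first sample at which the statistic crosses the threshold, or -1."""
    s = 0.0
    for i, x in enumerate(samples):
        # log-likelihood ratio increment for N(mean1,1) vs N(mean0,1)
        s = max(0.0, s + (mean1 - mean0) * (x - (mean0 + mean1) / 2.0))
        if s > threshold:
            return i
    return -1
```

The "parameter choice" studied in the paper corresponds to picking the reference values (here mean0, mean1) and the threshold to trade false alarms against detection delay.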

  15. Methodology for transition probabilities determination in a Markov decision processes model for quality-accuracy management

    Directory of Open Access Journals (Sweden)

    Mitkovska-Trendova Katerina

    2014-01-01

    The main goal of the presented research is to define a methodology for determining the transition probabilities in a Markov decision process, illustrated by the optimization of quality accuracy through its main measure (percent of scrap) in a performance measurement system (PMS). This research had two main driving forces: first, today's urge to introduce more robust, mathematically founded methods and tools in different enterprise areas, including PMSs; second, since Markov decision processes were chosen as such a tool, certain shortcomings of the approach had to be handled, and the calculation of the transition probabilities is exactly one of the weak points of Markov decision processes. The proposed methodology for calculating the transition probabilities is based on recorded historical data; the probabilities are calculated for each possible transition from the state after one run to the state after the following run of the influential factor (e.g. machine). The methodology encompasses several steps: collecting data connected to the percent of scrap and processing them according to the needs of the methodology; determining the limits of the states for every influential factor; classifying the data from real batches according to the determined states; and calculating the transition probabilities from one state to another for every action. The implementation of the Markov decision process model with the proposed methodology resulted in an optimal policy whose percent of scrap differed significantly from the real situation in which the optimization was done heuristically (5.2107% versus 13.5928%).

  16. The Markov process admits a consistent steady-state thermodynamic formalism

    Science.gov (United States)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism is established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, the corresponding steady-state thermodynamic formalisms for the master equation and the Fokker-Planck equation can be rigorously derived from it. To be concrete, we prove that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and the Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restricted to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also reduces to that for master equations as the discretization step tends to zero. Our analysis indicates that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.

  17. A fast exact simulation method for a class of Markov jump processes.

    Science.gov (United States)

    Li, Yao; Hu, Lili

    2015-11-14

A new stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulation of a class of Markov jump processes is presented in this paper. The HLM has a conditionally constant computational cost per event, independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly apply a hash-table-like bucket sort to all occurrence times covered by a time step of length τ. This paper serves as an introduction to this new SSA method: we introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large-scale problems.
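The HLM itself is not reproduced in the abstract; as a point of reference, the classical direct-method SSA that it accelerates can be sketched as follows. The birth-death rates are illustrative; the HLM's contribution is replacing the linear-time clock selection below with a bucket sort over a leap interval.

```python
import random

def direct_ssa(rates, update, state, t_end, seed=0):
    """Direct-method SSA (Gillespie) baseline for exponential clocks:
    the total rate gives an exponential waiting time, and one clock
    fires with probability proportional to its rate.  Clock selection
    here is O(number of clocks) per event, which is exactly the cost
    the Hashing-Leaping method avoids."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        r = rates(state)                      # per-clock rates for current state
        total = sum(r)
        if total == 0:
            return t, state                   # no clock can fire
        t += rng.expovariate(total)           # exponential waiting time
        if t > t_end:
            return t_end, state
        u, acc = rng.random() * total, 0.0    # choose which clock fires
        for k, rk in enumerate(r):
            acc += rk
            if u <= acc:
                break
        state = update(state, k)

# birth-death example: clock 0 = birth (rate 1.0), clock 1 = death (rate 0.1*n)
t, n = direct_ssa(lambda n: [1.0, 0.1 * n],
                  lambda n, k: n + 1 if k == 0 else n - 1,
                  state=5, t_end=100.0)
print(t, n)
```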

  18. Conditioned Limit Theorems for Some Null Recurrent Markov Processes

    Science.gov (United States)

    1976-08-01

(Abstract excerpt; equations garbled in extraction.) The recoverable content states Theorem 3.2, a scaling relationship observed by Lamperti in [25]: if conditions (i) and (ii) hold, there is a δ > 0 such that for all c > 0 the scaled process satisfies V_cx = c V_x(c^δ); in the degenerate case δ = 0 this reduces to V_cx = c V_x. The authors note that they have not been able to characterize the processes that can occur as limits.

  19. «Concurrency» in M-L-Parallel Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    Larkin Eugene

    2017-01-01

Full Text Available This article investigates the functioning of a swarm of robots, each of which receives instructions from an external human operator and executes them autonomously. An abstract model of the functioning of a single robot, a group of robots, and multiple groups of robots was obtained using the notion of a semi-Markov process. The concepts of aggregated initial and aggregated absorbing states were introduced, and expressions for calculating the time parameters of concurrency were derived.

  20. Hidden Parameter Markov Decision Processes: A Semiparametric Regression Approach for Discovering Latent Task Parametrizations.

    Science.gov (United States)

    Doshi-Velez, Finale; Konidaris, George

    2016-07-01

    Control applications often feature tasks with similar, but not identical, dynamics. We introduce the Hidden Parameter Markov Decision Process (HiP-MDP), a framework that parametrizes a family of related dynamical systems with a low-dimensional set of latent factors, and introduce a semiparametric regression approach for learning its structure from data. We show that a learned HiP-MDP rapidly identifies the dynamics of new task instances in several settings, flexibly adapting to task variation.

  1. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    Science.gov (United States)

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent the numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability, since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate actions on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate near-optimal solutions in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Assistive system for people with Apraxia using a Markov decision process.

    Science.gov (United States)

    Jean-Baptiste, Emilie M D; Russell, Martin; Rothstein, Pia

    2014-01-01

CogWatch is an assistive system for re-training stroke survivors suffering from Apraxia or Action Disorganization Syndrome (AADS) to complete activities of daily living (ADLs). This paper describes an approach to real-time planning based on a Markov Decision Process (MDP), and demonstrates its ability to improve task performance via user simulation. The paper concludes with a discussion of the remaining challenges and future enhancements.

  3. A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications

    Science.gov (United States)

    2014-09-20

We propose to synthesize a control policy for a Markov decision process (MDP) such that the resulting traces of the MDP satisfy a linear temporal logic (LTL) property. We construct a product MDP that incorporates a deterministic Rabin automaton generated from the desired LTL property. The reward function of the product MDP is defined from the acceptance condition of the Rabin automaton. This construction allows us to apply techniques from

  4. Stochastic Differential Equations and Markov Processes in the Modeling of Electrical Circuits

    Directory of Open Access Journals (Sweden)

    R. Rezaeyan

    2010-06-01

Full Text Available Stochastic differential equations (SDEs) arise from physical systems that possess inherent noise and uncertainty. We derive an SDE for electrical circuits. In this paper, we explore the close relationship between SDEs and the autoregressive (AR) model. We solve the SDE related to an RC circuit using an AR(1) model (a Markov process) and, alternatively, the Euler-Maruyama (EM) method, and then compare the two solutions. Numerical simulations are carried out in MATLAB.
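The relationship described in the abstract can be illustrated with a short sketch (circuit parameters are hypothetical; the RC voltage SDE is taken in the standard Ornstein-Uhlenbeck form dV = -(V/RC) dt + σ dW, whose exact discretisation is an AR(1) process with coefficient φ = exp(-Δt/RC)):

```python
import math
import random

def simulate_rc(R=1.0, C=1.0, sigma=0.2, v0=1.0, dt=0.01, steps=1000, seed=42):
    """Simulate dV = -(V/RC) dt + sigma dW two ways, driven by the same
    Gaussian increments: Euler-Maruyama, and the exact AR(1)
    discretisation of this Ornstein-Uhlenbeck process."""
    rng = random.Random(seed)
    tau = R * C
    phi = math.exp(-dt / tau)                        # AR(1) coefficient
    # AR(1) noise scale consistent with the exact transition variance
    s_ar = sigma * math.sqrt((1 - phi ** 2) * tau / 2)
    v_em, v_ar = v0, v0
    for _ in range(steps):
        xi = rng.gauss(0.0, 1.0)
        v_em += -v_em / tau * dt + sigma * math.sqrt(dt) * xi   # Euler-Maruyama
        v_ar = phi * v_ar + s_ar * xi                           # exact AR(1) step
    return v_em, v_ar

v_em, v_ar = simulate_rc()
print(abs(v_em - v_ar))  # small for small dt: both discretise the same SDE
```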

  5. A reward semi-Markov process with memory for wind speed modeling

    Science.gov (United States)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

-order Markov chain with different numbers of states, and Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. To assess this issue we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability, expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and a backtesting procedure is used to compare results on first and second order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
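The first-order Markov chain generation scheme of [1]-[3], which the indexed semi-Markov model generalizes, can be sketched as follows (the three-state discretisation and transition matrix are hypothetical; a real application would estimate them from measured wind data):

```python
import random

def synth_wind(P, states, n, seed=1):
    """Generate a synthetic wind-speed series from a first-order Markov
    chain: `states` holds a representative speed (m/s) for each
    discretised state, `P` is the row-stochastic transition matrix."""
    rng = random.Random(seed)
    i = 0                                    # start in the first state
    out = []
    for _ in range(n):
        out.append(states[i])
        i = rng.choices(range(len(P)), weights=P[i])[0]
    return out

# hypothetical 3-state discretisation: calm, moderate, strong
P = [[0.70, 0.25, 0.05],
     [0.20, 0.60, 0.20],
     [0.05, 0.35, 0.60]]
series = synth_wind(P, states=[2.0, 6.0, 11.0], n=500)
print(sum(series) / len(series))  # long-run mean reflects the stationary law
```

A semi-Markov generalization would additionally draw a non-exponential (here, non-geometric) sojourn time in each state before the next transition.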

  6. A continuous-index hidden Markov jump process for modeling DNA copy number data.

    Science.gov (United States)

    Stjernqvist, Susann; Rydén, Tobias

    2009-10-01

The number of copies of DNA in human cells can be measured using array comparative genomic hybridization (aCGH), which provides intensity ratios of sample to reference DNA at genomic locations corresponding to probes on a microarray. In the present paper, we devise a statistical model, based on a latent continuous-index Markov jump process, aimed at capturing certain features of aCGH data, including probes that are unevenly long, unevenly spaced, and overlapping. The model has a continuous state space, with one state representing a normal copy number of 2, and the remaining states being either amplifications or deletions. We adopt a Bayesian approach and apply Markov chain Monte Carlo (MCMC) methods for estimating the parameters and the Markov process. The model can be applied to data from both tiling bacterial artificial chromosome arrays and oligonucleotide arrays. We also compare a model with normally distributed noise to a model with t-distributed noise, showing that the latter is more robust to outliers.

  7. The Logic of Adaptive Behavior - Knowledge Representation and Algorithms for the Markov Decision Process Framework in First-Order Domains

    NARCIS (Netherlands)

    van Otterlo, M.

    2008-01-01

    Learning and reasoning in large, structured, probabilistic worlds is at the heart of artificial intelligence. Markov decision processes have become the de facto standard in modeling and solving sequential decision making problems under uncertainty. Many efficient reinforcement learning and dynamic

  8. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    Science.gov (United States)

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.

  9. A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs

    DEFF Research Database (Denmark)

    Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard

    2016-01-01

Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on

  10. A statistical property of multiagent learning based on Markov decision process.

    Science.gov (United States)

    Iwata, Kazunori; Ikeda, Kazushi; Sakai, Hideaki

    2006-07-01

    We exhibit an important property called the asymptotic equipartition property (AEP) on empirical sequences in an ergodic multiagent Markov decision process (MDP). Using the AEP which facilitates the analysis of multiagent learning, we give a statistical property of multiagent learning, such as reinforcement learning (RL), near the end of the learning process. We examine the effect of the conditions among the agents on the achievement of a cooperative policy in three different cases: blind, visible, and communicable. Also, we derive a bound on the speed with which the empirical sequence converges to the best sequence in probability, so that the multiagent learning yields the best cooperative result.

  11. A Multi-stage Representation of Cell Proliferation as a Markov Process.

    Science.gov (United States)

    Yates, Christian A; Ford, Matthew J; Mort, Richard L

    2017-12-01

The stochastic simulation algorithm commonly known as Gillespie's algorithm (originally derived for modelling well-mixed systems of chemical reactions) is now used ubiquitously in the modelling of biological processes in which stochastic effects play an important role. In well-mixed scenarios at the sub-cellular level it is often reasonable to assume that times between successive reaction/interaction events are exponentially distributed, so that the system can be appropriately modelled as a Markov process and hence simulated by the Gillespie algorithm. However, Gillespie's algorithm is routinely applied to model biological systems for which it was never intended. In particular, processes in which cell proliferation is important (e.g. embryonic development, cancer formation) should not be simulated naively using the Gillespie algorithm, since the history-dependent nature of the cell cycle violates the Markov property: the variance in experimentally measured cell cycle times is far less than in an exponential cell cycle time distribution with the same mean. Here we suggest a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm. By breaking the cell cycle into a number of independent exponentially distributed stages, we can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions. The consequences of our revised mathematical model are explored analytically as far as possible. We demonstrate the importance of employing the correct cell cycle time distribution by recapitulating the results from two models incorporating cellular proliferation (one spatial and one non-spatial) and demonstrating that changing the cell cycle time distribution makes quantitative and qualitative differences to the outcome of the models. Our adaptation will allow modellers and experimentalists alike to appropriately represent cellular
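The multi-stage idea can be illustrated directly: replacing a single exponential cycle stage of mean μ with k independent exponential stages, each of rate k/μ, yields an Erlang cycle-time distribution with the same mean μ but variance μ²/k. A minimal sketch (stage count and mean are illustrative):

```python
import random

def cycle_time(mean, stages, rng):
    """Cell-cycle time as a sum of `stages` independent exponential
    stages with rate stages/mean: an Erlang distribution with mean
    `mean` and variance mean**2/stages (versus mean**2 for a single
    exponential stage)."""
    rate = stages / mean
    return sum(rng.expovariate(rate) for _ in range(stages))

rng = random.Random(0)
one = [cycle_time(10.0, 1, rng) for _ in range(20000)]     # memoryless cycle
multi = [cycle_time(10.0, 10, rng) for _ in range(20000)]  # 10-stage cycle

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(var(one), var(multi))  # ~100 vs ~10: stages shrink the variance
```

Because each stage is individually exponential, the staged model remains a Markov jump process and is directly simulable with the Gillespie algorithm.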

  12. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    Science.gov (United States)

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.

  13. Graph theoretical calculation of systems reliability with semi-Markov processes

    International Nuclear Information System (INIS)

    Widmer, U.

    1984-06-01

    The determination of the state probabilities and related quantities of a system characterized by an SMP (or a homogeneous MP) can be performed by means of graph-theoretical methods. The calculation procedures for semi-Markov processes based on signal flow graphs are reviewed. Some methods from electrotechnics are adapted in order to obtain a representation of the state probabilities by means of trees. From this some formulas are derived for the asymptotic state probabilities and for the mean life-time in reliability considerations. (Auth.)

  14. Detection of Text Lines of Handwritten Arabic Manuscripts using Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Youssef Boulid

    2016-09-01

Full Text Available In character recognition systems, the segmentation phase is critical, since the accuracy of recognition depends strongly on it. In this paper we present an approach based on Markov Decision Processes to extract text lines from binary images of Arabic handwritten documents. The proposed approach detects the connected components belonging to the same line by making use of knowledge about the features and arrangement of those components. Initial results show that the system is promising for extracting Arabic handwritten lines.

  15. Pitch angle scattering of relativistic electrons from stationary magnetic waves: Continuous Markov process and quasilinear theory

    International Nuclear Information System (INIS)

    Lemons, Don S.

    2012-01-01

    We develop a Markov process theory of charged particle scattering from stationary, transverse, magnetic waves. We examine approximations that lead to quasilinear theory, in particular the resonant diffusion approximation. We find that, when appropriate, the resonant diffusion approximation simplifies the result of the weak turbulence approximation without significant further restricting the regime of applicability. We also explore a theory generated by expanding drift and diffusion rates in terms of a presumed small correlation time. This small correlation time expansion leads to results valid for relatively small pitch angle and large wave energy density - a regime that may govern pitch angle scattering of high-energy electrons into the geomagnetic loss cone.

  16. Markov chain Monte Carlo methods for state-space models with point process observations.

    Science.gov (United States)

    Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

    2012-06-01

This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.
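As a point of reference for the samplers compared in the letter, the simplest member of the MCMC family, random-walk Metropolis, can be sketched in a few lines (the standard-normal target is illustrative; the letter's methods replace this blind proposal with gradient- and geometry-aware ones):

```python
import math
import random

def metropolis(logp, x0, step, n, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian step and
    accept with probability min(1, p(y)/p(x)).  More advanced variants
    (e.g. Riemannian manifold Hamiltonian Monte Carlo) improve on the
    random-walk proposal used here."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)           # symmetric proposal
        lq = logp(y)
        if math.log(rng.random()) < lq - lp:   # Metropolis accept/reject
            x, lp = y, lq
        samples.append(x)
    return samples

# target: standard normal log-density (up to an additive constant)
s = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=50000)
mean = sum(s) / len(s)
var = sum((v - mean) ** 2 for v in s) / len(s)
print(mean, var)  # approximately 0 and 1
```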

  17. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
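For a finite-state, unconstrained analogue of the discounted MDPs considered here, the linear programming formulation is concrete: minimise Σ_s v(s) subject to v(s) ≥ r(s,a) + γ Σ_{s'} P(s'|s,a) v(s') for every state-action pair; the optimum is the optimal value function. A sketch with hypothetical two-state dynamics, using SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Finite-state analogue of the LP formulation for a discounted MDP:
# minimise sum_s v(s) subject to v(s) >= r(s,a) + gamma * sum_s' P[s,a,s'] v(s').
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],    # P[s, a, s'] (hypothetical dynamics)
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],                  # r[s, a]
              [0.5, 2.0]])

S, A = r.shape
rows, rhs = [], []
for s in range(S):
    for a in range(A):
        # rewrite v(s) - gamma*(P v)(s) >= r(s,a) as an A_ub x <= b_ub row
        rows.append(gamma * P[s, a] - np.eye(S)[s])
        rhs.append(-r[s, a])

res = linprog(c=np.ones(S), A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(None, None)] * S)   # value function is unconstrained in sign
v = res.x
# the LP optimum satisfies the Bellman optimality equation
bellman = (r + gamma * (P @ v)).max(axis=1)
print(np.abs(v - bellman).max())  # ~0
```

The constrained case in the paper adds expected discounted cost constraints, which appear as extra rows in the dual (occupation-measure) formulation.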

  18. Policy Iteration for Continuous-Time Average Reward Markov Decision Processes in Polish Spaces

    Directory of Open Access Journals (Sweden)

    Quanxin Zhu

    2009-01-01

Full Text Available We study the policy iteration algorithm (PIA) for continuous-time jump Markov decision processes in general state and action spaces. The corresponding transition rates are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. The criterion that we are concerned with is expected average reward. We propose a set of conditions under which we first establish the average reward optimality equation and present the PIA. Then, under two slightly different sets of conditions, we show that the PIA yields the optimal (maximum) reward, an average optimal stationary policy, and a solution to the average reward optimality equation.
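A discrete-time, finite-state analogue of the PIA under the average-reward criterion can be sketched as follows (the two-state MDP is hypothetical; evaluation solves the Poisson equation g + h(s) = r_π(s) + Σ_{s'} P_π(s,s') h(s') with the gauge h(0) = 0):

```python
import numpy as np

def policy_iteration_avg(P, r, iters=50):
    """Policy iteration for a finite unichain MDP under the expected
    average-reward criterion: alternate policy evaluation (Poisson
    equation) and greedy improvement until the policy is stable."""
    S, A = r.shape
    pi = np.zeros(S, dtype=int)
    for _ in range(iters):
        Ppi = P[np.arange(S), pi]             # (S, S) transitions under pi
        rpi = r[np.arange(S), pi]
        # unknowns x = (g, h(0), ..., h(S-1)); h(0) = 0 fixes the gauge
        M = np.zeros((S + 1, S + 1))
        b = np.zeros(S + 1)
        M[:S, 0] = 1.0                        # coefficient of the gain g
        M[:S, 1:] = np.eye(S) - Ppi           # h - P_pi h
        b[:S] = rpi
        M[S, 1] = 1.0                         # enforce h(0) = 0
        x = np.linalg.solve(M, b)
        g, h = x[0], x[1:]
        new_pi = (r + P @ h).argmax(axis=1)   # greedy improvement step
        if np.array_equal(new_pi, pi):
            return g, h, pi                   # policy stable: optimal
        pi = new_pi
    return g, h, pi

P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # hypothetical P[s, a, s']
              [[0.6, 0.4], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])       # r[s, a]
g, h, pi = policy_iteration_avg(P, r)
print(g, pi)  # maximal average reward and an optimal stationary policy
```

The continuous-time jump MDPs of the paper replace the transition matrix with unbounded transition rates, which is where the paper's extra conditions are needed.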

  19. A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes

    Science.gov (United States)

    Carpenter, Russell; Lee, Taesul

    2008-01-01

    Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
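The boundedness argument can be illustrated with the first-order Gauss-Markov component alone (parameters hypothetical): unlike the random-walk clock model, whose error variance grows linearly in time and can eventually overflow flight computer arithmetic, a first-order Gauss-Markov process has a finite steady-state variance.

```python
import math
import random

def gauss_markov_1st(tau, q, dt, steps, seed=0):
    """Discrete-time first-order Gauss-Markov (exponentially correlated)
    process: x_{k+1} = phi * x_k + w_k with phi = exp(-dt/tau).  Its
    variance converges to q*tau/2, whereas a random-walk model driven by
    the same noise density q has variance q*t, unbounded in t."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    sw = math.sqrt(q * tau / 2 * (1 - phi ** 2))  # steady-state-consistent noise
    x, var_rw = 0.0, 0.0
    for _ in range(steps):
        x = phi * x + sw * rng.gauss(0.0, 1.0)
        var_rw += q * dt                           # random-walk variance grows linearly
    return x, q * tau / 2, var_rw

x, var_gm, var_rw = gauss_markov_1st(tau=100.0, q=1e-4, dt=1.0, steps=100000)
print(var_gm, var_rw)  # bounded (0.005) vs unbounded (10.0) after 1e5 s
```

The paper's coupled first- and second-order model additionally shapes the short-horizon behavior so that it approximates the familiar random-walk clock models before the variance saturates.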

  20. Unsupervised Learning of Structural Representation of Percussive Audio Using a Hierarchical Dirichlet Process Hidden Markov Model

    DEFF Research Database (Denmark)

    Antich, Jose Luis Diez; Paterna, Mattia; Marxer, Richard

    2016-01-01

A method is proposed that extracts a structural representation of percussive audio in an unsupervised manner. It consists of two parts: 1) The input signal is segmented into blocks of approximately even duration, aligned to a metrical grid, using onset and timbre feature extraction, agglomerative single-linkage clustering, metrical regularity calculation and beat detection. 2) The approximately equal-length blocks are clustered into k clusters and the resulting cluster sequence is modelled by transition probabilities between clusters. The Hierarchical Dirichlet Process Hidden Markov Model is employed

  1. Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time

    Science.gov (United States)

    Metzner, Philipp; Horenko, Illia; Schütte, Christof

    2007-12-01

Markov jump processes can be used to model the effective dynamics of observables in applications ranging from molecular dynamics to finance. In this paper we present a different method which allows the inverse modeling of Markov jump processes based on incomplete observations in time: we consider the case of a given time series of the discretely observed jump process. We show how to compute efficiently the maximum likelihood estimator of its infinitesimal generator, and demonstrate in detail that the method can handle observations nonequidistant in time. The method is based on the work of Bladt and Sørensen [J. R. Stat. Soc. Ser. B (Stat. Methodol.) 67, 395 (2005)], but scales much more favorably with the length of the time series and with the dimension and size of the state space of the jump process. We illustrate its performance on a toy problem as well as on data arising from simulations of the biochemical kinetics of a genetic toggle switch.
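The complete-data building block underneath such estimators is simple: for a fully observed trajectory, the maximum likelihood generator is Q_ij = N_ij / R_i, where N_ij counts i→j jumps and R_i is the total holding time in state i. The paper's contribution is the harder, discretely (and nonequidistantly) observed case, where these sufficient statistics are unobserved; a sketch of the complete-data estimator on a hypothetical toy trajectory:

```python
import numpy as np

def generator_mle(path):
    """Maximum-likelihood generator of a continuous-time Markov jump
    process from a fully observed trajectory: off-diagonal rates are
    Q[i, j] = N_ij / R_i, and each diagonal entry makes its row sum
    to zero.  `path` is a list of (state, holding_time) pairs."""
    states = sorted({s for s, _ in path})
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    N = np.zeros((n, n))
    R = np.zeros(n)
    for (s, dt), (s2, _) in zip(path, path[1:]):
        R[idx[s]] += dt                       # holding time before the jump
        N[idx[s], idx[s2]] += 1               # observed jump s -> s2
    R[idx[path[-1][0]]] += path[-1][1]        # censored final holding time
    Q = np.divide(N, R[:, None], out=np.zeros_like(N), where=R[:, None] > 0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# toy trajectory of a two-state switch: (state, holding time)
path = [(0, 2.0), (1, 1.0), (0, 3.0), (1, 0.5), (0, 1.0)]
Q = generator_mle(path)
print(Q)  # rows sum to zero; off-diagonals are empirical jump rates
```

EM-type schemes for discrete observations, like the one the paper improves on, iterate between imputing the expected N_ij and R_i given the observations and re-applying exactly this formula.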

  2. Analysis of single-molecule fluorescence spectroscopic data with a Markov-modulated Poisson process.

    Science.gov (United States)

    Jäger, Mark; Kiel, Alexander; Herten, Dirk-Peter; Hamprecht, Fred A

    2009-10-05

We present a photon-by-photon analysis framework for the evaluation of data from single-molecule fluorescence spectroscopy (SMFS) experiments using a Markov-modulated Poisson process (MMPP). An MMPP combines a discrete (and hidden) Markov process with an additional Poisson process reflecting the observation of individual photons. The algorithmic framework is used to automatically analyze the dynamics of the complex formation and dissociation of Cu2+ ions with the bidentate ligand 2,2'-bipyridine-4,4'-dicarboxylic acid (dcbpy) in aqueous media. The process of association and dissociation of Cu2+ ions is monitored with SMFS. The dcbpy-DNA conjugate can exist in two or more distinct states, which influence the photon emission rates. The advantage of a photon-by-photon analysis is that no information is lost in preprocessing steps. Different model complexities are investigated in order to best describe the recorded data and to determine transition rates on a photon-by-photon basis. The main strength of the method is that it allows the detection of intermittent phenomena which are masked by binning and which are difficult to find using correlation techniques when they are short-lived.

  3. Non-homogeneous Markov process models with informative observations with an application to Alzheimer's disease.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2011-05-01

    Identifying risk factors for transition rates among normal cognition, mildly cognitive impairment, dementia and death in an Alzheimer's disease study is very important. It is known that transition rates among these states are strongly time dependent. While Markov process models are often used to describe these disease progressions, the literature mainly focuses on time homogeneous processes, and limited tools are available for dealing with non-homogeneity. Further, patients may choose when they want to visit the clinics, which creates informative observations. In this paper, we develop methods to deal with non-homogeneous Markov processes through time scale transformation when observation times are pre-planned with some observations missing. Maximum likelihood estimation via the EM algorithm is derived for parameter estimation. Simulation studies demonstrate that the proposed method works well under a variety of situations. An application to the Alzheimer's disease study identifies that there is a significant increase in transition rates as a function of time. Furthermore, our models reveal that the non-ignorable missing mechanism is perhaps reasonable. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Enantiodromic effective generators of a Markov jump process with Gallavotti-Cohen symmetry.

    Science.gov (United States)

    Terohid, S A A; Torkaman, P; Jafarpour, F H

    2016-11-01

    This paper deals with the properties of the stochastic generators of the effective (driven) processes associated with atypical values of transition-dependent time-integrated currents with Gallavotti-Cohen symmetry in Markov jump processes. Exploiting the concept of biased ensemble of trajectories by introducing a biasing field s, we show that the stochastic generators of the effective processes associated with the biasing fields s and E-s are enantiodromic with respect to each other where E is the conjugated field to the current. We illustrate our findings by considering an exactly solvable creation-annihilation process of classical particles with nearest-neighbor interactions defined on a one-dimensional lattice.

  5. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions, the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.

  6. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes.

    Science.gov (United States)

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  7. Markovian and non-Markovian protein sequence evolution: aggregated Markov process models.

    Science.gov (United States)

    Kosiol, Carolin; Goldman, Nick

    2011-08-26

    Over the years, there have been claims that evolution proceeds according to systematically different processes over different timescales and that protein evolution behaves in a non-Markovian manner. On the other hand, Markov models are fundamental to many applications in evolutionary studies. Apparent non-Markovian or time-dependent behavior has been attributed to the influence of the genetic code at short timescales and the dominance of physicochemical properties of the amino acids at long timescales. However, any long time period is simply the accumulation of many short time periods, and it remains unclear why evolution should appear to act systematically differently across the range of timescales studied. We show that the observed time-dependent behavior can be explained qualitatively by modeling protein sequence evolution as an aggregated Markov process (AMP): a time-homogeneous Markovian substitution model observed only at the level of the amino acids encoded by the protein-coding DNA sequence. The study of AMPs sheds new light on the relationship between amino acid-level and codon-level models of sequence evolution, and our results suggest that protein evolution should be modeled at the codon level rather than using amino acid substitution models. Copyright © 2011 Elsevier Ltd. All rights reserved.
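
The core phenomenon here, that a lumped (aggregated) Markov chain is generally not Markovian, is easy to demonstrate numerically. The sketch below uses a hypothetical 3-state chain rather than a codon model: after lumping two states into one observed symbol, the next-symbol probabilities depend on history, mimicking "time-dependent" behavior.

```python
import numpy as np

# Toy aggregated Markov process: a time-homogeneous 3-state chain
# observed through a lumping map. Hypothetical numbers, not a codon model.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.1, 0.8],
              [0.3, 0.3, 0.4]])
agg = {0: "A", 1: "A", 2: "B"}   # lump states 0 and 1 into symbol "A"

# stationary distribution by power iteration
pi = np.ones(3) / 3
for _ in range(500):
    pi = pi @ P

# P(third symbol = "B" | second symbol = "A", first symbol given),
# by exact enumeration over the hidden states
def cond_prob_B(first_symbol):
    num = den = 0.0
    for i in range(3):
        if agg[i] != first_symbol:
            continue
        for j in range(3):
            if agg[j] != "A":
                continue
            for k in range(3):
                p = pi[i] * P[i, j] * P[j, k]
                den += p
                if agg[k] == "B":
                    num += p
    return num / den

# If the aggregated process were Markovian, these would coincide:
p_after_AA = cond_prob_B("A")
p_after_BA = cond_prob_B("B")
```

The two conditional probabilities differ substantially for this chain, so the observed process fails the Markov property even though the hidden one satisfies it.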

  8. Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  9. Fisher informations and local asymptotic normality for continuous-time quantum Markov processes

    Science.gov (United States)

    Catana, Catalin; Bouten, Luc; Guţă, Mădălin

    2015-09-01

    We consider the problem of estimating an arbitrary dynamical parameter of an open quantum system in the input-output formalism. For irreducible Markov processes, we show that in the limit of large times the system-output state can be approximated by a quantum Gaussian state whose mean is proportional to the unknown parameter. This approximation holds locally in a neighbourhood of size t^(-1/2) in the parameter space, and provides an explicit expression of the asymptotic quantum Fisher information in terms of the Markov generator. Furthermore we show that additive statistics of the counting and homodyne measurements also satisfy local asymptotic normality and we compute the corresponding classical Fisher informations. The general theory is illustrated with the examples of a two-level system and the atom maser. Our results contribute towards a better understanding of the statistical and probabilistic properties of the output process, with relevance for quantum control engineering, and the theory of non-equilibrium quantum open systems.

  10. Combining experimental and simulation data of molecular processes via augmented Markov models.

    Science.gov (United States)

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few [Formula: see text], which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force-field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.
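
The key move, keeping the simulated conformations but correcting their weights with experimental data, can be caricatured with a maximum-entropy reweighting step. This is a cartoon of the idea, not the AMM estimator itself, and all numbers are hypothetical: the stationary weights are exponentially tilted until a stationary observable matches its measured value.

```python
import math

# Cartoon of weight correction (not the AMM estimator): tilt simulated
# stationary weights pi with a Lagrange multiplier until the average of
# observable a matches an "experimental" target. Hypothetical numbers.
def reweight_to_observable(pi, a, target, lo=-50.0, hi=50.0):
    def tilted(lam):
        w = [p * math.exp(lam * x) for p, x in zip(pi, a)]
        z = sum(w)
        return [wi / z for wi in w]
    for _ in range(200):          # bisection: <a> is monotone in lam
        mid = 0.5 * (lo + hi)
        w = tilted(mid)
        if sum(wi * x for wi, x in zip(w, a)) < target:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))

# three hypothetical states with simulated weights and observable values
weights = reweight_to_observable([0.5, 0.3, 0.2], [0.0, 1.0, 2.0], target=1.2)
```

The corrected weights stay as close as possible (in relative entropy) to the simulation while reproducing the measured average.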

  11. Data-Driven Markov Decision Process Approximations for Personalized Hypertension Treatment Planning

    Directory of Open Access Journals (Sweden)

    Greggory J. Schell PhD

    2016-10-01

    Full Text Available Background: Markov decision process (MDP) models are powerful tools. They enable the derivation of optimal treatment policies but may incur long computational times and generate decision rules that are challenging for physicians to interpret. Methods: In an effort to improve usability and interpretability, we examined whether Poisson regression can approximate optimal hypertension treatment policies derived by an MDP for maximizing a patient’s expected discounted quality-adjusted life years. Results: We found that our Poisson approximation to the optimal treatment policy matched the optimal policy in 99% of cases. This high accuracy translates to nearly identical health outcomes for patients. Furthermore, the Poisson approximation results in 104 additional quality-adjusted life years per 1000 patients compared to the Seventh Joint National Committee’s treatment guidelines for hypertension. The comparative health performance of the Poisson approximation was robust to the cardiovascular disease risk calculator used and calculator calibration error. Limitations: Our results are based on Markov chain modeling. Conclusions: Poisson model approximation for blood pressure treatment planning has high fidelity to optimal MDP treatment policies, which can improve usability and enhance transparency of more personalized treatment policies.
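
The MDP machinery that produces the optimal policies referenced above can be sketched with value iteration. The example below is a toy two-state "healthy"/"sick" model with made-up transition probabilities and rewards, not the paper's hypertension model.

```python
import numpy as np

# Value iteration for a discounted MDP (sketch). P[a, s, s'] are the
# transition probabilities under action a; r[a, s] the expected rewards.
def value_iteration(P, r, gamma=0.95, tol=1e-9):
    v = np.zeros(P.shape[1])
    while True:
        q = r + gamma * (P @ v)        # Q-values, shape (actions, states)
        v_new = q.max(axis=0)
        if np.abs(v_new - v).max() < tol:
            return v_new, q.argmax(axis=0)
        v = v_new

# Hypothetical model: state 0 = "healthy", state 1 = "sick";
# action 0 = "no treatment", action 1 = "treat".
P = np.array([[[0.90, 0.10], [0.00, 1.00]],
              [[0.95, 0.05], [0.50, 0.50]]])
r = np.array([[1.0, 0.0],
              [0.9, -0.1]])
v, policy = value_iteration(P, r)
```

A regression model such as the paper's Poisson approximation would then be fit to the `policy` array rather than handed to physicians as a raw lookup table.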

  12. Programming list processes. SLIP: symmetric list processor - applications

    International Nuclear Information System (INIS)

    Broudin, Y.

    1966-06-01

    Modern developments in programming languages are essentially oriented towards list processing. Ordinary sequential storage methods become inadequate, and list processing must be substituted for them: the cells of a group have no neighbourhood relation, but each cell contains the address of the next. These methods are required for solving 'time-sharing' problems; they also make it possible to treat new problems and to solve others in the shortest time. Many examples are presented after a survey of the most common list languages and a detailed study of one of them: SLIP. Among these examples one should note: locating words in a dictionary or a card index, processing non-numerical symbols, and formal (symbolic) differentiation. The problems are treated in Fortran II on an IBM 7094 machine. The subroutines which make up the language are presented in an appendix. (author) [fr]

  13. The application of Markov's stochastic processes in risk assessment for accounting information systems

    Directory of Open Access Journals (Sweden)

    Milojević Ivan

    2017-01-01

    Full Text Available Almost all processes in the area of business management, especially those concerning the reliability of the accounting information system, involve a certain risk; that is, they are stochastic in character, which means that any method for solving these problems must be grounded in probability theory and the corresponding mathematical-statistical methods. It can therefore be argued that the only reliable means of determining the reliability of the accounting information system are the corresponding mathematical-statistical methods. With this in mind, in this paper we address the problem of assessing the risk in the reliability of the accounting system by applying methods based on stochastic processes of the Markov type.
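
One standard Markov-type tool for this kind of reliability assessment is an absorbing Markov chain. The sketch below is illustrative, with hypothetical states and numbers, not the paper's model: transient states are operating conditions of the information system, the absorbing states are "reliable report" and "material error", and the absorption probabilities are B = N R with fundamental matrix N = (I - Q)^(-1).

```python
import numpy as np

# Absorption probabilities of an absorbing Markov chain (sketch with
# hypothetical numbers). Q: transient-to-transient transitions,
# R: transient-to-absorbing transitions; each row of [Q R] sums to 1.
def absorption_probabilities(Q, R):
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix
    return N @ R

Q = np.array([[0.2, 0.3],
              [0.1, 0.4]])
R = np.array([[0.4, 0.1],    # columns: reliable report / material error
              [0.2, 0.3]])
B = absorption_probabilities(Q, R)
```

Row i of `B` gives the probability that the process starting in transient state i eventually ends in each absorbing state, i.e. the risk rate per starting condition.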

  14. A Novel Analytical Model for Network-on-Chip using Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    WANG, J.

    2011-02-01

    Full Text Available Network-on-Chip (NoC communication architecture is proposed to resolve the bottleneck of Multi-processor communication in a single chip. In this paper, a performance analytical model using Semi-Markov Process (SMP is presented to obtain the NoC performance. More precisely, given the related parameters, SMP is used to describe the behavior of each channel and the header flit routing time on each channel can be calculated by analyzing the SMP. Then, the average packet latency in NoC can be calculated. The accuracy of our model is illustrated through simulation. Indeed, the experimental results show that the proposed model can be used to obtain NoC performance and it performs better than the state-of-art models. Therefore, our model can be used as a useful tool to guide the NoC design process.

  15. Numerical construction of the p(fold) (committor) reaction coordinate for a Markov process.

    Science.gov (United States)

    Krivov, Sergei V

    2011-10-06

    To simplify the description of a complex multidimensional dynamical process, one often projects it onto a single reaction coordinate. In protein folding studies, the folding probability p(fold) is an optimal reaction coordinate which preserves many important properties of the dynamics. The construction of this coordinate is difficult, however. Here, an efficient numerical approach to construct the p(fold) reaction coordinate for a Markov process (satisfying detailed balance) is described. The coordinate is obtained by optimizing parameters of a chosen functional form so as to maximize a generalized cut-based free-energy profile. The approach is illustrated by constructing the p(fold) reaction coordinate for the equilibrium folding simulation of the FIP35 protein reported by Shaw et al. (Science 2010, 330, 341-346). © 2011 American Chemical Society
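
For a small discrete Markov chain the committor can be obtained directly by linear algebra, which gives useful intuition even though the paper instead optimizes a parametrized coordinate for large systems. The committor q satisfies q(x) = Σ_y P[x, y] q(y) outside the two boundary sets, with q = 0 on A ("unfolded") and q = 1 on B ("folded").

```python
import numpy as np

# Committor q of a discrete Markov chain between state sets A and B
# (illustrative linear-algebra route, not the paper's optimization).
def committor(P, A, B):
    n = P.shape[0]
    interior = [i for i in range(n) if i not in A and i not in B]
    M = np.eye(len(interior)) - P[np.ix_(interior, interior)]
    rhs = P[np.ix_(interior, sorted(B))].sum(axis=1)
    q = np.zeros(n)
    q[sorted(B)] = 1.0
    q[interior] = np.linalg.solve(M, rhs)
    return q

# Symmetric random walk on {0, ..., 4}, absorbed at 0 and 4:
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0
for i in (1, 2, 3):
    P[i, i - 1] = P[i, i + 1] = 0.5
q = committor(P, A={0}, B={4})
```

For this walk the committor is linear, q(x) = x/4, which serves as a quick correctness check.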

  16. First Passage Moments of Finite-State Semi-Markov Processes

    Energy Technology Data Exchange (ETDEWEB)

    Warr, Richard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cordeiro, James [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States)

    2014-03-31

    In this paper, we discuss the computation of first-passage moments of a regular time-homogeneous semi-Markov process (SMP) with a finite state space to certain of its states that possess the property of universal accessibility (UA). A UA state is one which is accessible from any other state of the SMP, but which may or may not connect back to one or more other states. An important characteristic of UA is that it is the state-level version of the oft-invoked process-level property of irreducibility. We adapt existing results for irreducible SMPs to the derivation of an analytical matrix expression for the first passage moments to a single UA state of the SMP. In addition, consistent point estimators for these first passage moments, together with relevant R code, are provided.
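
A discrete-time analogue of this computation is a useful reference point (a sketch, not the paper's SMP formulas): for an absorbing Markov chain with transient-to-transient block Q, the fundamental matrix N = (I - Q)^(-1) gives the first-passage mean t = N·1 and, by a standard absorbing-chain identity, the variance (2N - I)·t - t∘t.

```python
import numpy as np

# First two moments of the first passage (absorption) time of a
# discrete-time absorbing Markov chain, from the fundamental matrix.
def first_passage_moments(Q):
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)
    t = N @ np.ones(n)                      # mean passage times
    var = (2 * N - np.eye(n)) @ t - t * t   # variances
    return t, var
```

With a single transient state and self-loop probability 0.5 the passage time is geometric, so the mean and variance are both 2, a convenient sanity check.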

  17. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  18. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements, like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software, that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model…

  19. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    Science.gov (United States)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
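
Our reading of the piecewise idea can be sketched as follows (illustrative only, not the authors' implementation): instead of sampling one weather state per time step from a transition matrix, the generator appends random-length pieces of the historical series that begin in the current state, so short-range structure such as weather-window durations is copied intact.

```python
import random

# Piecewise Markov-chain resampling sketch: grow a synthetic series by
# joining random-length pieces of the historical series whose preceding
# state matches the current one. Hypothetical piece-length cap.
def piecewise_generate(series, length, max_piece=24, seed=1):
    rng = random.Random(seed)
    out = [rng.choice(series[:-1])]
    while len(out) < length:
        starts = [i for i, s in enumerate(series[:-1]) if s == out[-1]]
        i = rng.choice(starts)
        out.extend(series[i + 1 : i + 1 + rng.randint(1, max_piece)])
    return out[:length]

calm, rough = 0, 1   # toy two-state sea-state series
series = [calm, calm, rough, rough, calm, rough, calm, calm, rough] * 10
synthetic = piecewise_generate(series, 50)
```

By construction every adjacent pair of states in the synthetic series also occurs in the historical series, which is the sense in which the pieces preserve local statistics.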

  20. Mathematical Model of Induction Heating Processes in Axial Symmetric Inductor-Detail Systems

    Directory of Open Access Journals (Sweden)

    Maik Streblau

    2014-05-01

    Full Text Available The wide variety of models for the analysis of processes in inductor-detail systems makes it necessary to summarize them. This is a difficult task because of the variety of inductor-detail system configurations. This paper aims to present a multiphysics mathematical model for the complex analysis of electromagnetic and thermal fields in axially symmetric inductor-detail systems.

  1. Large-deviation functions for nonlinear functionals of a Gaussian stationary Markov process.

    Science.gov (United States)

    Majumdar, Satya N; Bray, Alan J

    2002-05-01

    We introduce a general method, based on a mapping onto quantum mechanics, for investigating the large-T limit of the distribution P(r,T) of the nonlinear functional r[V] = (1/T) ∫_0^T dT′ V[X(T′)], where V(X) is an arbitrary function of the stationary Gaussian Markov process X(T). For T → ∞ at fixed r we obtain P(r,T) ~ exp[−θ(r)T], where θ(r) is a large-deviation function. We present explicit results for a number of special cases, including V(X) = X H(X) [where H(X) is the Heaviside function], which is related to the cooling and the heating degree days relevant to weather derivatives.

  2. Sieve estimation in a Markov illness-death process under dual censoring.

    Science.gov (United States)

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Strategy Complexity of Finite-Horizon Markov Decision Processes and Simple Stochastic Games

    DEFF Research Database (Denmark)

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu

    2012-01-01

    Markov decision processes (MDPs) and simple stochastic games (SSGs) provide a rich mathematical framework to study many important problems related to probabilistic systems. MDPs and SSGs with finite-horizon objectives, where the goal is to maximize the probability to reach a target state in a given finite time, is a classical and well-studied problem. In this work we consider the strategy complexity of finite-horizon MDPs and SSGs. We show that for all ε > 0, the natural class of counter-based strategies requires at most log log(1/ε) + n + 1 memory states, and memory of size Ω(log log(1/ε) + n) is required, for ε-optimality, where n is the number of states of the MDP (resp. SSG). Thus our bounds are asymptotically optimal. We then study the periodic property of optimal strategies, and show a sub-exponential lower bound on the period for optimal strategies.

  4. Dynamic Request Routing for Online Video-on-Demand Service: A Markov Decision Process Approach

    Directory of Open Access Journals (Sweden)

    Jianxiong Wan

    2014-01-01

    Full Text Available We investigate the request routing problem in the CDN-based Video-on-Demand system. We model the system as a controlled queueing system including a dispatcher and several edge servers. The system is formulated as a Markov decision process (MDP). Since the MDP formulation suffers from the so-called “the curse of dimensionality” problem, we then develop a greedy heuristic algorithm, which is simple and can be implemented online, to approximately solve the MDP model. However, we do not know how far it deviates from the optimal solution. To address this problem, we further aggregate the state space of the original MDP model and use the bounded-parameter MDP (BMDP) to reformulate the system. This allows us to obtain a suboptimal solution with a known performance bound. The effectiveness of the two approaches is evaluated in a simulation study.

  5. Partially ordered mixed hidden Markov model for the disablement process of older adults.

    Science.gov (United States)

    Ip, Edward H; Zhang, Qiang; Rejeski, W Jack; Harris, Tamara B; Kritchevsky, Stephen

    2013-06-01

    At both the individual and societal levels, the health and economic burden of disability in older adults is enormous in developed countries, including the U.S. Recent studies have revealed that the disablement process in older adults often comprises episodic periods of impaired functioning and periods that are relatively free of disability, amid a secular and natural trend of decline in functioning. Rather than an irreversible, progressive event that is analogous to a chronic disease, disability is better conceptualized and mathematically modeled as states that do not necessarily follow a strict linear order of good-to-bad. Statistical tools, including Markov models, which allow bidirectional transition between states, and random effects models, which allow individual-specific rate of secular decline, are pertinent. In this paper, we propose a mixed effects, multivariate, hidden Markov model to handle partially ordered disability states. The model generalizes the continuation ratio model for ordinal data in the generalized linear model literature and provides a formal framework for testing the effects of risk factors and/or an intervention on the transitions between different disability states. Under a generalization of the proportional odds ratio assumption, the proposed model circumvents the problem of a potentially large number of parameters when the number of states and the number of covariates are substantial. We describe a maximum likelihood method for estimating the partially ordered, mixed effects model and show how the model can be applied to a longitudinal data set that consists of N = 2,903 older adults followed for 10 years in the Health Aging and Body Composition Study. We further statistically test the effects of various risk factors upon the probabilities of transition into various severe disability states. The result can be used to inform geriatric and public health science researchers who study the disablement process.

  6. Extremes of Markov-additive processes with one-sided jumps, with queueing applications

    NARCIS (Netherlands)

    A.B. Dieker (Ton); M.R.H. Mandjes (Michel)

    2009-01-01

    Through Laplace transforms, we study the extremes of a continuous-time Markov-additive process with one-sided jumps and a finite-state background Markovian state-space, jointly with the epoch at which the extreme is ‘attained’. For this, we investigate discrete-time Markov-additive processes…

  7. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Science.gov (United States)

    Mo Zhou; Joseph Buongiorno

    2011-01-01

    Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...

  8. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.

  9. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
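
The TASEP mentioned above is simple to write down; the sketch below is an illustration of the basic irreversible local move only (it omits the lifting construction): on a ring, a randomly chosen particle hops one site to the right if and only if that site is empty, so moves are never reversed and particle number is conserved.

```python
import random

# One sweep of the totally asymmetric simple exclusion process (TASEP)
# on a ring. occ[i] is 1 if site i holds a particle, else 0.
def tasep_sweep(occ, rng):
    n = len(occ)
    for _ in range(n):
        i = rng.randrange(n)
        j = (i + 1) % n
        if occ[i] == 1 and occ[j] == 0:   # hop right into an empty site
            occ[i], occ[j] = 0, 1
    return occ

rng = random.Random(0)
occ = [1, 0] * 10            # half-filled ring of 20 sites
for _ in range(100):
    tasep_sweep(occ, rng)
```

Replacing the asymmetric hop with a symmetric left-or-right choice recovers the reversible SEP, the lattice-gas analogue of the heat-bath dynamics discussed in the abstract.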

  10. Mapping absorption processes onto a Markov chain, conserving the mean first passage time

    International Nuclear Information System (INIS)

    Biswas, Katja

    2013-01-01

    The dynamics of a multidimensional system is projected onto a discrete-state master equation using the transition rates W(k → k′; t, t + dt) between a set of states {k} represented by the regions {ζ_k} in phase or discrete state space. Depending on the dynamics Γ_i(t) of the original process and the choice of ζ_k, the discretized process can be Markovian or non-Markovian. For absorption processes, it is shown that irrespective of these properties of the projection, a master equation with time-independent transition rates W̄(k → k′) can be obtained, which conserves the total occupation time of the partitions of the phase or discrete state space of the original process. An expression for the transition probabilities p̄(k′|k) is derived based on either time-discrete measurements {t_i} with variable time stepping Δ_(i+1)i = t_(i+1) − t_i or the theoretical knowledge at continuous times t. This allows computational methods of absorbing Markov chains to be used to obtain the mean first passage time (MFPT) of the system. To illustrate this approach, the procedure is applied to obtain the MFPT for the overdamped Brownian motion of particles subject to dichotomous noise and for the escape from an entropic barrier. The high accuracy of the simulation results confirms the theory. (paper)

  11. Inferring the parameters of a Markov process from snapshots of the steady state

    Science.gov (United States)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.

  12. On Markov Chains and Filtrations

    OpenAIRE

    Spreij, Peter

    1997-01-01

    In this paper we rederive some well known results for continuous time Markov processes that live on a finite state space. Martingale techniques are used throughout the paper. Special attention is paid to the construction of a continuous time Markov process, when we start from a discrete time Markov chain. The Markov property here holds with respect to filtrations that need not be minimal.

  13. Using Bayesian Nonparametric Hidden Semi-Markov Models to Disentangle Affect Processes during Marital Interaction.

    Directory of Open Access Journals (Sweden)

    William A Griffin

    Full Text Available Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects, some good and some bad, on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data, we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples, are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitates that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes.

  14. Towards a Theory of Sampled-Data Piecewise-Deterministic Markov Processes

    Science.gov (United States)

    Herencia-Zapana, Heber; Gonzalez, Oscar R.; Gray, W. Steven

    2006-01-01

The analysis and design of practical control systems requires that stochastic models be employed. Analysis and design tools have been developed, for example, for Markovian jump linear continuous and discrete-time systems, piecewise-deterministic processes (PDP's), and general stochastic hybrid systems (GSHS's). These model classes have been used in many applications, including fault tolerant control and networked control systems. This paper presents initial results on the analysis of a sampled-data PDP representation of a nonlinear sampled-data system with a jump linear controller. In particular, it is shown that the state of the sampled-data PDP satisfies the strong Markov property. In addition, a relation between the invariant measures of a sampled-data system driven by a stochastic process and its associated discrete-time representation is presented. As an application, when the plant is linear with no external input, a sufficient testable condition for the convergence in distribution to the invariant delta Dirac measure is given.

  15. Symmetrical modified dual tree complex wavelet transform for processing quadrature Doppler ultrasound signals.

    Science.gov (United States)

    Serbes, G; Aydin, N

    2011-01-01

Dual-tree complex wavelet transform (DTCWT), which is a shift invariant transform with limited redundancy, is an improved version of the discrete wavelet transform. Complex quadrature signals are dual channel signals obtained from systems employing quadrature demodulation. An example of such signals is the quadrature Doppler signal obtained from blood flow analysis systems. Prior to processing Doppler signals using the DTCWT, directional flow signals must be obtained and then two separate DTCWTs applied, increasing the computational complexity. In this study, in order to decrease computational complexity, a symmetrical modified DTCWT algorithm (SMDTCWT) is proposed. A comparison between the new transform and the symmetrical phasing-filter technique is presented. Additionally, the denoising performance of the SMDTCWT is compared with the DWT and the DTCWT using simulated signals. The results show that the proposed method gives the same output as the symmetrical phasing-filter method, that the computational complexity of processing quadrature signals using the DTCWT is greatly reduced, and that SMDTCWT-based denoising outperforms the conventional DWT at the same computational complexity.

  16. Markov decision processes and the belief-desire-intention model bridging the gap for autonomous agents

    CERN Document Server

    Simari, Gerardo I

    2011-01-01

In this work, we provide a treatment of the relationship between two models that have been widely used in the implementation of autonomous agents: the Belief-Desire-Intention (BDI) model and Markov Decision Processes (MDPs). We start with an informal description of the relationship, identifying the common features of the two approaches and the differences between them. Then we hone our understanding of these differences through an empirical analysis of the performance of both models on the TileWorld testbed. This allows us to show that even though the MDP model displays consistently better behavior than the BDI model for small worlds, this is not the case when the world becomes large and the MDP model cannot be solved exactly. Finally we present a theoretical analysis of the relationship between the two approaches, identifying mappings that allow us to extract a set of intentions from a policy (a solution to an MDP), and to extract a policy from a set of intentions.
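For readers unfamiliar with how an MDP is "solved exactly", here is a minimal value-iteration sketch on an invented two-state, two-action MDP (illustrative numbers of our own, not the TileWorld testbed used in the paper).

```python
import numpy as np

# Invented toy MDP: P[s, a, s'] transition probabilities, R[s, a] rewards.
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],   # transitions from state 0 under actions 0, 1
    [[0.5, 0.5], [0.3, 0.7]],   # transitions from state 1 under actions 0, 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):
    # Bellman optimality backup: Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=1)   # greedy policy w.r.t. the converged values
```

The greedy policy extracted from the converged values is exactly the kind of object the paper maps onto a set of BDI intentions.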

  17. Markov-CA model using analytical hierarchy process and multiregression technique

    Science.gov (United States)

    Omar, N. Q.; Sanusi, S. A. M.; Hussin, W. M. W.; Samat, N.; Mohammed, K. S.

    2014-06-01

The unprecedented increase in population and rapid rate of urbanisation have led to extensive land use changes. Cellular automata (CA) are increasingly used to simulate a variety of urban dynamics. This paper introduces a new CA based on an integration model built from multi-regression and multi-criteria evaluation to improve the representation of the CA transition rule. The multi-criteria evaluation is implemented by utilising data on the environmental and socioeconomic factors in the study area in order to produce suitability maps (SMs) using the analytical hierarchy process, a well-known method. Suitability maps are generated for the periods from 1984 to 2010 under different decision-making scenarios, and these condition the next step of CA generation. The suitability maps are compared in order to find the best maps based on the coefficient of determination (R2). This comparison can help the stakeholders make better decisions. The resultant suitability map then provides the predefined transition rule for the last step of the CA model. The approach used in this study highlights a mechanism for monitoring and evaluating land-use and land-cover changes in Kirkuk city, Iraq, owing to changes in the structures of governments, wars, and an economic blockade over the past decades. The present study asserts the high applicability and flexibility of the Markov-CA model. The results have shown that the model and its interrelated concepts perform rather well.

  18. A test of multiple correlation temporal window characteristic of non-Markov processes

    Science.gov (United States)

    Arecchi, F. T.; Farini, A.; Megna, N.

    2016-03-01

We introduce a sensitive test of memory effects in successive events. The test consists of a combination K of binary correlations at successive times. K = 1 for uncorrelated events, as in a Markov process, and K decays monotonically below 1 for a monotonic memory fading. By contrast, we have found a K > 1 temporal window in cognitive tasks consisting of the visual identification of the front face of the Necker cube after a previous presentation of the same. We speculate that memory effects provide a temporal window with K > 1, and this experiment could be a possible first step towards a better comprehension of this phenomenon. The K > 1 behaviour is maximal at an inter-measurement time τ around 2 s, with inter-subject differences. The K > 1 behaviour persists over a time window of 1 s around τ; outside this window K < 1. The K > 1 window in pairs of successive perceptions suggests that, at variance with single visual stimuli eliciting a suitable response, a pair of stimuli shortly separated in time displays mutual correlations.

  19. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    Science.gov (United States)

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, SARSA and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
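The reinforcement-learning baselines in the comparison (SARSA, Q-learning) estimate action values from simulated transitions rather than from the full model, which is why they need many more iterations. A minimal tabular Q-learning sketch on an invented two-state, two-action MDP (toy dynamics of our own, not Web service data):

```python
import numpy as np

# Toy MDP: P[s, a, s'] transition probabilities, R[s, a] expected rewards.
rng = np.random.default_rng(2)
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma, alpha, eps = 0.9, 0.1, 0.1

Q = np.zeros((2, 2))
s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s_next = int(rng.choice(2, p=P[s, a]))
    # one-step temporal-difference update towards the sampled Bellman target
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

greedy = Q.argmax(axis=1)   # learned greedy policy
```

Q-learning only touches one state-action pair per simulated step, whereas policy iteration sweeps the whole model per iteration; that contrast is the source of the runtime gap reported above.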

  20. Optimizing Prescription of Chinese Herbal Medicine for Unstable Angina Based on Partially Observable Markov Decision Process

    Directory of Open Access Journals (Sweden)

    Yan Feng

    2013-01-01

Objective. To derive an initial optimized prescription of Chinese herbal medicine for unstable angina (UA). Methods. Based on a partially observable Markov decision process (POMDP) model, we chose hospitalized patients with 3 syndrome elements, namely qi deficiency, blood stasis, and turbid phlegm, for data mining, analysis, and objective evaluation of the diagnosis and treatment of UA at a deep level, in order to optimize the prescription of Chinese herbal medicine for UA. Results. The recommended treatment options of UA for qi deficiency, blood stasis, and phlegm syndrome patients were as follows: Milkvetch Root + Tangshen + Indian Bread + Largehead Atractylodes Rhizome (ADR = 0.96630); Danshen Root + Chinese Angelica + Safflower + Red Peony Root + Szechwan Lovage Rhizome + Orange Fruit (ADR = 0.76); Snakegourd Fruit + Longstamen Onion Bulb + Pinellia Tuber + Dried Tangerine Peel + Largehead Atractylodes Rhizome + Platycodon Root (ADR = 0.658568). Conclusion. This study initially optimized prescriptions for UA based on POMDP, which can be used as a reference for further development of UA prescription in Chinese herbal medicine.

  1. Performance Evaluation and Optimal Management of Distance-Based Registration Using a Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    Jae Joon Suh

    2017-01-01

We consider the distance-based registration (DBR), which is a kind of dynamic location registration scheme in a mobile communication network. In the DBR, the location of a mobile station (MS) is updated when it enters a base station at least a specified distance away from the base station where the location registration for the MS was last performed. In this study, we first investigate the existing performance-evaluation methods for the DBR with implicit registration (DBIR), presented to improve the performance of the DBR, and point out some problems of these evaluation methods. We then propose a new performance-evaluation method for the DBIR scheme using a semi-Markov process (SMP), which can resolve the controversial issues of the existing methods. The numerical results obtained with the proposed SMP model are compared with those from previous models. It is shown that the SMP model should be used to obtain an accurate performance evaluation of the DBIR scheme.

  2. Detecting phylogenetic signal in mutualistic interaction networks using a Markov process model.

    Science.gov (United States)

    Minoarivelo, H O; Hui, C; Terblanche, J S; Pond, S L Kosakovsky; Scheffler, K

    2014-10-01

Ecological interaction networks, such as those describing the mutualistic interactions between plants and their pollinators or between plants and their frugivores, exhibit non-random structural properties that cannot be explained by simple models of network formation. One factor affecting the formation and eventual structure of such a network is its evolutionary history. We argue that this, in many cases, is closely linked to the evolutionary histories of the species involved in the interactions. Indeed, empirical studies of interaction networks along with the phylogenies of the interacting species have demonstrated significant associations between phylogeny and network structure. To date, however, no generative model explaining the way in which the evolution of individual species affects the evolution of interaction networks has been proposed. We present a model describing the evolution of pairwise interactions as a branching Markov process, drawing on phylogenetic models of molecular evolution. Using knowledge of the phylogenies of the interacting species, our model yielded a significantly better fit to 21% of a set of plant-pollinator and plant-frugivore mutualistic networks. This highlights the importance, in a substantial minority of cases, of inheritance of interaction patterns without excluding the potential role of ecological novelties in forming the current network architecture. We suggest that our model can be used as a null model for controlling evolutionary signals when evaluating the role of other factors in shaping the emergence of ecological networks.

  3. A markov decision process model for the optimal dispatch of military medical evacuation assets.

    Science.gov (United States)

    Keneally, Sean K; Robbins, Matthew J; Lunday, Brian J

    2016-06-01

    We develop a Markov decision process (MDP) model to examine aerial military medical evacuation (MEDEVAC) dispatch policies in a combat environment. The problem of deciding which aeromedical asset to dispatch to each service request is complicated by the threat conditions at the service locations and the priority class of each casualty event. We assume requests for MEDEVAC support arrive sequentially, with the location and the priority of each casualty known upon initiation of the request. The United States military uses a 9-line MEDEVAC request system to classify casualties as being one of three priority levels: urgent, priority, and routine. Multiple casualties can be present at a single casualty event, with the highest priority casualty determining the priority level for the casualty event. Moreover, an armed escort may be required depending on the threat level indicated by the 9-line MEDEVAC request. The proposed MDP model indicates how to optimally dispatch MEDEVAC helicopters to casualty events in order to maximize steady-state system utility. The utility gained from servicing a specific request depends on the number of casualties, the priority class for each of the casualties, and the locations of both the servicing ambulatory helicopter and casualty event. Instances of the dispatching problem are solved using a relative value iteration dynamic programming algorithm. Computational examples are used to investigate optimal dispatch policies under different threat situations and armed escort delays; the examples are based on combat scenarios in which United States Army MEDEVAC units support ground operations in Afghanistan.

  4. A Markov Decision Process Model for Cervical Cancer Screening Policies in Colombia.

    Science.gov (United States)

    Akhavan-Tabatabaei, Raha; Sánchez, Diana Marcela; Yeung, Thomas G

    2017-02-01

    Cervical cancer is the second most common cancer in women around the world, and the human papillomavirus (HPV) is universally known as the necessary agent for developing this disease. Through early detection of abnormal cells and HPV virus types, cervical cancer incidents can be reduced and disease progression prevented. We propose a finite-horizon Markov decision process model to determine the optimal screening policies for cervical cancer prevention. The optimal decision is given in terms of when and what type of screening test to be performed on a patient based on her current diagnosis, age, HPV contraction risk, and screening test results. The cost function considers the tradeoff between the cost of prevention and treatment procedures and the risk of taking no action while taking into account a cost assigned to loss of life quality in each state. We apply the model to data collected from a representative sample of 1141 affiliates at a health care provider located in Bogotá, Colombia. To track the disease incidence more effectively and avoid higher cancer rates and future costs, the optimal policies recommend more frequent colposcopies and Pap tests for women with riskier profiles.

  5. [Optimized treatment program for unstable angina by integrative medicine based on partially observable Markov decision process].

    Science.gov (United States)

    Feng, Yan; Xu, Hao; Liu, Kai; Zhou, Xue-Zhong; Chen, Ke-Ji

    2013-07-01

To initially optimize a comprehensive treatment program for treating and preventing unstable angina (UA) by integrative medicine (IM). Based on a partially observable Markov decision process model (POMDP), we chose 3 syndrome elements, i.e., qi deficiency, blood stasis, and phlegm turbidity, from UA inpatients. The efficacy of treating UA by IM was objectively assessed by in-depth data mining and analyses. The treatment programs for UA patients of qi deficiency syndrome, blood stasis syndrome, and phlegm turbidity syndrome were recommended as follows: nitrates + statins + clopidogrel + angiotensin II receptor blockers + heparins + Astragalus membranaceus + Codonopsis + poria and large-head atractylodes rhizome (ADR = 0.85077869); nitrates + aspirin + clopidogrel + statins + heparins + Astragalus membranaceus + safflower + peach seed + red peony root (ADR = 0.70773000); nitrates + aspirin + statins + angiotensin-converting enzyme inhibitors + snakegourd fruit + onion bulb + ternate pinellia + tangerine peel (ADR = 0.72509600). As POMDP-based optimized treatment programs for UA, these can be used as a reference for further standardization and formulation of UA programs by integrative medicine.

  6. Strategic level proton therapy patient admission planning: a Markov decision process modeling approach.

    Science.gov (United States)

    Gedik, Ridvan; Zhang, Shengfan; Rainwater, Chase

    2017-06-01

    A relatively new consideration in proton therapy planning is the requirement that the mix of patients treated from different categories satisfy desired mix percentages. Deviations from these percentages and their impacts on operational capabilities are of particular interest to healthcare planners. In this study, we investigate intelligent ways of admitting patients to a proton therapy facility that maximize the total expected number of treatment sessions (fractions) delivered to patients in a planning period with stochastic patient arrivals and penalize the deviation from the patient mix restrictions. We propose a Markov Decision Process (MDP) model that provides very useful insights in determining the best patient admission policies in the case of an unexpected opening in the facility (i.e., no-shows, appointment cancellations, etc.). In order to overcome the curse of dimensionality for larger and more realistic instances, we propose an aggregate MDP model that is able to approximate optimal patient admission policies using the worded weight aggregation technique. Our models are applicable to healthcare treatment facilities throughout the United States, but are motivated by collaboration with the University of Florida Proton Therapy Institute (UFPTI).

  7. Analyses of Markov decision process structure regarding the possible strategic use of interacting memory systems

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-12-01

Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previously shown that when tasks are written mathematically as a form of partially observable Markov decision processes, the structure of the tasks provides information regarding the possible utility of certain memory systems. These previous analyses dealt with the disambiguation problem: given a specific ambiguous observation of the environment, is there information provided by a given memory strategy that can disambiguate that observation to allow a correct decision? Here we extend this approach to cases where multiple memory systems can be strategically combined in different ways. Specifically, we analyze the disambiguation arising from three ways by which episodic-like memory retrieval might be cued (by another episodic-like memory, by a semantic association, or by working memory for some earlier observation). We also consider the disambiguation arising from holding earlier working memories, episodic-like memories, or semantic associations in working memory. From these analyses we can begin to develop a quantitative hierarchy among memory systems, in which stimulus-response memories and semantic associations provide no disambiguation while the episodic memory system provides the most flexible disambiguation.

  8. Human Gait Modeling and Analysis Using a Semi-Markov Process With Ground Reaction Forces.

    Science.gov (United States)

    Ma, Hao; Liao, Wei-Hsin

    2017-06-01

    Modeling and evaluation of patients' gait patterns is the basis for both gait assessment and gait rehabilitation. This paper presents a convenient and real-time gait modeling, analysis, and evaluation method based on ground reaction forces (GRFs) measured by a pair of smart insoles. Gait states are defined based on the foot-ground contact forms of both legs. From the obtained gait state sequence and the duration of each state, the human gait is modeled as a semi-Markov process (SMP). Four groups of gait features derived from the SMP gait model are used for characterizing individual gait patterns. With this model, both the normal gaits of healthy people and the abnormal gaits of patients with impaired mobility are analyzed. Abnormal evaluation indices (AEI) are further proposed for gait abnormality assessment. Gait analysis experiments are conducted on 23 subjects with different ages and health conditions. The results show that gait patterns are successfully obtained and evaluated for normal, age-related, and pathological gaits. The effectiveness of the proposed AEI for gait assessment is verified through comparison with a video-based gait abnormality rating scale.

  9. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

Several replacement models have been presented in the literature. In other application areas, like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.

  10. A Markov chain description of the stepwise mutation model: local and global behaviour of the allele process.

    Science.gov (United States)

    Caliebe, Amke; Jochens, Arne; Krawczak, Michael; Rösler, Uwe

    2010-09-21

    The stepwise mutation model (SMM) is a simple, widely used model to describe the evolutionary behaviour of microsatellites. We apply a Markov chain description of the SMM and derive the marginal and joint properties of this process. In addition to the standard SMM, we also consider the normalised allele process. In contrast to the standard process, the normalised process converges to a stationary distribution. We show that the marginal stationary distribution is unimodal. The standard and normalised processes capture the global and the local behaviour of the SMM, respectively. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
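The standard SMM dynamics described in the abstract can be sketched directly: each microsatellite repeat length performs a symmetric ±1 random walk, one step per mutation event, so the mean stays at the ancestral length while the variance grows linearly with time. (This illustrates only the basic SMM, not the paper's normalised allele process.)

```python
import numpy as np

# Simulate many independent loci under the stepwise mutation model.
rng = np.random.default_rng(3)
n_loci, n_gens, mu = 2000, 400, 0.01   # loci, generations, mutation rate/gen

lengths = np.full(n_loci, 20)          # ancestral repeat length
for _ in range(n_gens):
    mutates = rng.random(n_loci) < mu              # which loci mutate this gen
    lengths = lengths + mutates * rng.choice([-1, 1], size=n_loci)

# Each length is ancestral + a sum of ~Binomial(n_gens, mu) symmetric steps:
# mean stays at 20, variance grows like n_gens * mu = 4.
emp_mean, emp_var = lengths.mean(), lengths.var()
```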

  11. Projected metastable Markov processes and their estimation with observable operator models

    International Nuclear Information System (INIS)

    Wu, Hao; Prinz, Jan-Hendrik; Noé, Frank

    2015-01-01

The determination of kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem — both in simulation, where the optimal reaction coordinate(s) are generally unknown and are difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on OOM learning.

  12. Mathematical model of the loan portfolio dynamics in the form of Markov chain considering the process of new customers attraction

    Science.gov (United States)

    Bozhalkina, Yana

    2017-12-01

A mathematical model of loan portfolio structure change in the form of a Markov chain is explored. The model captures in one scheme the process of attracting customers, their selection based on a credit score, and loan repayment. The model describes the dynamics of the structure and volume of the loan portfolio, which allows medium-term forecasts of profitability and risk. Within the model, corrective actions by bank management intended to increase lending volumes or to reduce risk are formalized.
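The scheme described, loan-state transitions combined with an inflow of newly attracted customers, can be sketched with a linear update of the portfolio vector. The transition matrix, states, and inflow below are hypothetical numbers of our own, not the paper's.

```python
import numpy as np

# Hypothetical loan-state transition matrix (rows sum to 1).
P = np.array([
    # current, late,  default, repaid
    [0.85,    0.07,  0.00,    0.08],   # current
    [0.30,    0.45,  0.25,    0.00],   # late
    [0.00,    0.00,  1.00,    0.00],   # default (absorbing)
    [0.00,    0.00,  0.00,    1.00],   # repaid  (absorbing)
])
inflow = np.array([100.0, 0.0, 0.0, 0.0])   # new customers enter as "current"

x = np.array([1000.0, 50.0, 0.0, 0.0])      # initial portfolio counts
for _ in range(200):
    x = x @ P + inflow                      # one period: transitions + attraction

# The active sub-portfolio (current + late) approaches a fixed point because the
# active-state sub-matrix has spectral radius < 1; absorbing states accumulate.
active = x[:2]
```

The fixed point of the active block (solve y = yA + inflow for the 2x2 active sub-matrix A) is roughly (894.3, 113.8) with these numbers, which is what the iteration converges to.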

  13. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    Science.gov (United States)

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
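A Markov-modulated Poisson process of the kind used here for animal availability can be sketched as follows: an underlying two-state chain switches between a high-rate and a low-rate regime (invented rates of our own), producing more clustered events than a plain Poisson process, which shows up as overdispersed window counts.

```python
import numpy as np

# Two-state MMPP: regime sojourns are exponential; events within a sojourn
# occur as a Poisson process at the regime's rate.
rng = np.random.default_rng(4)
switch = np.array([0.5, 0.5])   # out-rate of regime 0, regime 1
rates = np.array([5.0, 0.2])    # event rate within each regime
T = 2000.0

t, regime, events = 0.0, 0, []
while t < T:
    stay = rng.exponential(1.0 / switch[regime])   # time until regime switch
    span = min(stay, T - t)
    n = rng.poisson(rates[regime] * span)          # events in this sojourn
    events.extend(t + rng.uniform(0.0, span, size=n))
    t += stay
    regime = 1 - regime

# Clustering check: counts in unit windows are overdispersed relative to a
# plain Poisson process (variance exceeds the mean).
counts, _ = np.histogram(events, bins=np.arange(0.0, T + 1.0, 1.0))
dispersion = counts.var() / counts.mean()
```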

  14. Algorithmic analysis of the maximum level length in general-block two-dimensional Markov processes

    Directory of Open Access Journals (Sweden)

    2006-01-01

Two-dimensional continuous-time Markov chains (CTMCs) are useful tools for studying stochastic models such as queueing, inventory, and production systems. Of particular interest in this paper is the distribution of the maximal level visited in a busy period, because this descriptor provides an excellent measure of the system congestion. We present an algorithmic analysis for the computation of its distribution which is valid for Markov chains with general-block structure. For a multiserver batch arrival queue with retrials and negative arrivals, we exploit the underlying internal block structure and present numerical examples that reveal some interesting facts about the system.

  15. A competitive Markov decision process model for the energy–water–climate change nexus

    International Nuclear Information System (INIS)

    Nanduri, Vishnu; Saavedra-Antolínez, Ivan

    2013-01-01

Highlights: • Developed a CMDP model for the energy–water–climate change nexus. • Solved the model using a reinforcement learning algorithm. • Study demonstrated on a 30-bus IEEE electric power network using a DCOPF formulation. • Sixty percent drop in CO2 emissions and 40% drop in H2O use when coal is replaced by wind (over 10 years). • Higher profits for nuclear and wind as well as higher LMPs under CO2 and H2O taxes. - Abstract: Drought-like conditions in some parts of the US and around the world are causing water shortages that lead to power failures, becoming a source of concern to independent system operators. Water shortages can cause significant challenges in electricity production and thereby a direct socioeconomic impact on the surrounding region. Our paper presents a new, comprehensive quantitative model that examines the electricity–water–climate change nexus. We investigate the impact of a joint water and carbon tax proposal on the operation of a transmission-constrained power network operating in a wholesale power market setting. We develop a competitive Markov decision process (CMDP) model for the dynamic competition in wholesale electricity markets, and solve the model using reinforcement learning. Several cases, including the impact of different tax schemes, integration of stochastic wind energy resources, and capacity disruptions due to droughts, are investigated. Results from the analysis of the sample power network show that electricity prices increased with the adoption of water and carbon taxes compared with locational marginal prices without taxes. As expected, wind energy integration reduced both CO2 emissions and water usage. Capacity disruptions also caused locational marginal prices to increase. Other detailed analyses and results obtained using the 30-bus IEEE network are discussed in detail.

  16. Learning to maximize reward rate: a model based on semi-Markov decision processes.

    Science.gov (United States)

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R

    2014-01-01

When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty, and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of the decision threshold for each condition. We propose a model of learning the optimal values of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of the decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the values of the decision thresholds until it finally finds the optimal values.
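The threshold trade-off described above can be sketched with the standard closed-form expressions for a symmetric drift-diffusion decision (accuracy and mean decision time as functions of the threshold). The parameters below are illustrative choices of our own, not fitted to the paper's experiments.

```python
import numpy as np

# Standard symmetric drift-diffusion formulas: with threshold +/-a, drift v and
# diffusion variance sigma2, accuracy = 1/(1+exp(-2av/sigma2)) and mean
# decision time = (a/v)*tanh(av/sigma2).
drift, sigma2 = 1.0, 1.0
r_correct, r_error, iti = 1.0, 0.0, 2.0   # reward, penalty, inter-trial interval

def reward_rate(a):
    p_correct = 1.0 / (1.0 + np.exp(-2.0 * a * drift / sigma2))
    mean_dt = (a / drift) * np.tanh(a * drift / sigma2)
    expected_reward = p_correct * r_correct + (1.0 - p_correct) * r_error
    return expected_reward / (mean_dt + iti)          # reward per unit time

grid = np.linspace(0.01, 5.0, 500)
best = grid[np.argmax([reward_rate(a) for a in grid])]
```

Very low thresholds waste trials on near-chance decisions, very high ones waste time; the reward-rate-optimal threshold sits in between, which is the quantity the paper's SMDP learner converges to.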

  17. An integral equation approach to the interval reliability of systems modelled by finite semi-Markov processes

    International Nuclear Information System (INIS)

    Csenki, A.

    1995-01-01

    The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The implementation uses the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation.
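
    The two-point trapezoidal rule used here can be sketched on a generic Volterra integral equation of the second kind, R(t) = f(t) + ∫₀ᵗ k(t, s) R(s) ds: at each grid point the unknown R(tᵢ) appears on both sides and is solved for linearly. The toy kernel below (with exact solution eᵗ) is only a check of the scheme, not the paper's interval-reliability equations:

```python
import math

# Trapezoidal-rule solver for a Volterra integral equation of the second kind:
#     R(t) = f(t) + integral_0^t k(t, s) R(s) ds
def solve_volterra(f, k, T, n):
    h = T / n
    t = [i * h for i in range(n + 1)]
    R = [f(t[0])]
    for i in range(1, n + 1):
        # Trapezoid over [0, t_i]; R[i] appears on both sides, solve linearly.
        acc = 0.5 * h * k(t[i], t[0]) * R[0]
        for j in range(1, i):
            acc += h * k(t[i], t[j]) * R[j]
        denom = 1.0 - 0.5 * h * k(t[i], t[i])
        R.append((f(t[i]) + acc) / denom)
    return t, R

# Toy check: R(t) = 1 + integral_0^t R(s) ds has exact solution R(t) = exp(t).
t, R = solve_volterra(lambda x: 1.0, lambda ti, s: 1.0, T=1.0, n=200)
print(abs(R[-1] - math.e))  # small O(h^2) discretization error
```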

  18. Estimation in autoregressive models with Markov regime

    OpenAIRE

    Ríos, Ricardo; Rodríguez, Luis

    2005-01-01

    In this paper we derive the consistency of the penalized likelihood method for estimating the number of states of the hidden Markov chain in autoregressive models with Markov regime. A SAEM-type algorithm is used to estimate the model parameters. We test the null hypothesis of a hidden Markov model against an autoregressive process with Markov regime.

  19. Modelling Faculty Replacement Strategies Using a Time-Dependent Finite Markov-Chain Process.

    Science.gov (United States)

    Hackett, E. Raymond; Magg, Alexander A.; Carrigan, Sarah D.

    1999-01-01

    Describes the use of a time-dependent Markov-chain model to develop faculty-replacement strategies within a college at a research university. The study suggests that a stochastic modelling approach can provide valuable insight when planning for personnel needs in the immediate (five-to-ten year) future. (MSE)
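
    A time-dependent (non-homogeneous) Markov chain of the kind described here simply propagates a headcount vector through a different transition matrix each year. A minimal sketch with hypothetical faculty ranks and made-up promotion and attrition rates:

```python
import numpy as np

# Hypothetical faculty states: assistant, associate, full, departed (absorbing).
# A time-dependent chain uses a different transition matrix for each year.
def matrix(year):
    promote = 0.15 + 0.01 * year   # promotion odds assumed to drift upward
    leave = 0.05                   # assumed constant attrition
    return np.array([
        [1 - promote - leave, promote, 0.0, leave],
        [0.0, 1 - promote - leave, promote, leave],
        [0.0, 0.0, 1 - leave, leave],
        [0.0, 0.0, 0.0, 1.0],
    ])

dist = np.array([60.0, 30.0, 10.0, 0.0])  # initial headcounts
for year in range(5):
    dist = dist @ matrix(year)            # propagate one year at a time

print(dist)  # expected headcounts after five years; dist[3] = cumulative departures
```

The departed count after five years indicates the replacement hiring the planner must budget for over the horizon.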

  20. Choosing the order of deceased donor and living donor kidney transplantation in pediatric recipients: a Markov decision process model.

    Science.gov (United States)

    Van Arendonk, Kyle J; Chow, Eric K H; James, Nathan T; Orandi, Babak J; Ellison, Trevor A; Smith, Jodi M; Colombani, Paul M; Segev, Dorry L

    2015-02-01

    Most pediatric kidney transplant recipients eventually require retransplantation, and the most advantageous timing strategy regarding deceased and living donor transplantation in candidates with only 1 living donor remains unclear. A patient-oriented Markov decision process model was designed to compare, for a given patient with 1 living donor, living-donor-first followed if necessary by deceased donor retransplantation versus deceased-donor-first followed if necessary by living donor (if still able to donate) or deceased donor (if not) retransplantation. Based on Scientific Registry of Transplant Recipients data, the model was designed to account for waitlist, graft, and patient survival, sensitization, increased risk of graft failure seen during late adolescence, and differential deceased donor waiting times based on pediatric priority allocation policies. Based on national cohort data, the model was also designed to account for aging or disease development, leading to ineligibility of the living donor over time. Given a set of candidate and living donor characteristics, the Markov model provides the expected patient survival over a time horizon of 20 years. For the most highly sensitized patients (panel reactive antibody > 80%), a deceased-donor-first strategy was advantageous, but for all other patients (panel reactive antibody < 80%) a living-donor-first strategy was advantageous. The Markov model illustrates how patients, families, and providers can be provided information and predictions regarding the most advantageous use of deceased donor versus living donor transplantation for pediatric recipients.

  1. Applying a Markov approach as a Lean Thinking analysis of waste elimination in a Rice Production Process

    Directory of Open Access Journals (Sweden)

    Eldon Glen Caldwell Marin

    2015-01-01

    Full Text Available The Markov chain model was proposed to analyze stochastic events when recursive cycles occur; for example, when rework in a continuous-flow production affects the overall performance. Typically, the analysis of rework and scrap is done from a wasted-material cost perspective and not from the perspective of wasted capacity that reduces throughput and economic value added (EVA). Also, we cannot find many cases of this application in agro-industrial production in Latin America, given the complexity of the calculations and the need for robust applications. This scientific work presents the results of a quasi-experimental research approach in order to explain how to apply DOE methods and Markov analysis to a rice production process located in Central America, evaluating the global effects of a single reduction in rework and scrap in one part of the whole line. The results show that in this case it is possible to evaluate benefits from a global throughput and EVA perspective and not only from the cost-savings perspective, finding a relationship between operational indicators and corporate performance. However, it was found that it is necessary to analyze the Markov chain configuration with many rework points, and it is still relevant to take into account the effects on takt time and not only scrap costs.
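
    The effect of a rework loop on capacity can be captured with standard absorbing-chain algebra: collect the transient-to-transient probabilities in Q, compute the fundamental matrix N = (I − Q)⁻¹ for the expected visits to each station (the capacity consumed per unit started), and N·R for the good/scrap yields. The line layout and probabilities below are invented for illustration, not the rice-mill data:

```python
import numpy as np

# Hypothetical 2-stage line with a rework loop.
# Transient states: [stage1, stage2, rework]; absorbing states: [good, scrap].
Q = np.array([
    [0.0, 0.90, 0.08],   # stage1 -> stage2 or rework
    [0.0, 0.00, 0.10],   # stage2 -> rework with prob 0.10
    [0.0, 0.95, 0.00],   # rework feeds back into stage2
])
R = np.array([
    [0.00, 0.02],        # stage1 -> scrap
    [0.90, 0.00],        # stage2 -> good
    [0.00, 0.05],        # rework -> scrap
])

N = np.linalg.inv(np.eye(3) - Q)   # expected visits to each transient state
B = N @ R                          # absorption (good/scrap) probabilities
visits = N[0]                      # per unit started at stage1
yield_good = B[0, 0]
print(visits, yield_good)
```

Expected visits above 1.0 at stage2 quantify the throughput capacity the rework loop silently consumes, which is exactly the waste-capacity perspective the abstract argues for.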

  2. Persistence of a continuous stochastic process with discrete-time sampling: non-Markov processes.

    Science.gov (United States)

    Ehrhardt, George C M A; Bray, Alan J; Majumdar, Satya N

    2002-04-01

    We consider the problem of "discrete-time persistence," which deals with the zero crossings of a continuous stochastic process X(T) measured at discrete times T = nΔT. For a Gaussian stationary process the persistence (no crossing) probability decays as exp(−θ_D T) = [ρ(a)]^n for large n, where a = exp(−ΔT/2) and the discrete persistence exponent θ_D is given by θ_D = (ln ρ)/(2 ln a). Using the "independent interval approximation," we show how θ_D varies with ΔT for small ΔT and conclude that experimental measurements of persistence for smooth processes, such as diffusion, are less sensitive to the effects of discrete sampling than measurements of a randomly accelerated particle or random walker. We extend the matrix method developed by us previously [Phys. Rev. E 64, 015101(R) (2001)] to determine ρ(a) for a two-dimensional random walk and the one-dimensional random-acceleration problem. We also consider "alternating persistence," which corresponds to a < 0, and calculate ρ(a) for this case.
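
    For a Markovian Gaussian process, the discretely sampled series is an AR(1) process, so ρ(a) can be estimated directly by Monte Carlo from the ratio of consecutive no-crossing survival counts. The sketch below assumes a sampling interval ΔT = 1 (so a = e^(−1/2)) and recovers θ_D = (ln ρ)/(2 ln a); it is an illustration of the setup, not the paper's matrix method:

```python
import random, math

random.seed(1)

# Discretely sampled stationary Gaussian Markov process = AR(1):
#   X_{n+1} = a X_n + sqrt(1 - a^2) * xi_n
# The persistence P(n) = Prob(no zero crossing in n steps) decays as rho(a)^n.
a = math.exp(-0.5)          # corresponds to sampling interval DeltaT = 1
trials, steps = 20000, 12
surviving = [0] * (steps + 1)
for _ in range(trials):
    x = random.gauss(0.0, 1.0)
    sign = x > 0
    surviving[0] += 1
    for n in range(1, steps + 1):
        x = a * x + math.sqrt(1 - a * a) * random.gauss(0.0, 1.0)
        if (x > 0) != sign:     # zero crossing between samples
            break
        surviving[n] += 1

# Estimate rho from the tail ratio of the survival counts.
rho = surviving[steps] / surviving[steps - 1]
theta_D = math.log(rho) / (2 * math.log(a))   # discrete persistence exponent
print(round(rho, 3), round(theta_D, 3))
```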

  3. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.

  4. Markov or not Markov - this should be a question

    OpenAIRE

    Bode, Eckhardt; Bickenbach, Frank

    2002-01-01

    Although it is well known that Markov process theory, frequently applied in the literature on income convergence, imposes some very restrictive assumptions upon the data generating process, these assumptions have generally been taken for granted so far. The present paper proposes, or recalls, chi-square tests of the Markov property, of spatial independence, and of homogeneity across time and space to assess the reliability of estimated Markov transition matrices. As an illustration we show ...
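
    One of the tests the paper recalls, the chi-square test of homogeneity across time, compares sub-period transition counts against the pooled transition-matrix estimate. A sketch with made-up counts for a three-state chain:

```python
import numpy as np

# Chi-square test of temporal homogeneity of an estimated Markov transition
# matrix: compare transition counts from two sub-periods against the pooled
# maximum-likelihood estimate.  All counts below are invented for illustration.
counts = {
    "period1": np.array([[50, 10, 5], [8, 60, 12], [4, 9, 42]]),
    "period2": np.array([[48, 12, 5], [10, 58, 12], [5, 10, 40]]),
}
pooled = sum(counts.values())
p_hat = pooled / pooled.sum(axis=1, keepdims=True)   # pooled row-stochastic MLE

chi2 = 0.0
for n in counts.values():
    expected = n.sum(axis=1, keepdims=True) * p_hat  # expected under homogeneity
    chi2 += ((n - expected) ** 2 / expected).sum()

k = pooled.shape[0]
df = (len(counts) - 1) * k * (k - 1)   # (T - 1) * k * (k - 1) degrees of freedom
print(round(chi2, 2), df)  # compare chi2 against the chi-square(df) critical value
```

Here the two periods were built to be nearly identical, so the statistic falls well below the 5% critical value for 6 degrees of freedom (about 12.59) and homogeneity is not rejected.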

  5. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach.

    Science.gov (United States)

    Bennett, Casey C; Hauser, Kris

    2013-01-01

    In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal
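
    The partially observable part of such a framework rests on belief-state updating: propagate the current belief through the action's transition model, then reweight by the observation likelihood. A minimal two-state Bayes-filter sketch with invented transition and observation matrices, not the paper's clinical model:

```python
import numpy as np

# Hypothetical two-state patient model: 0 = "responding", 1 = "deteriorating".
# After an action with transition matrix T and a noisy observation o, the
# belief is updated as  b'(s') ~ O[s', o] * sum_s T[s, s'] * b(s).
T = np.array([[0.8, 0.2],     # responding stays responding with prob 0.8
              [0.3, 0.7]])    # deteriorating recovers with prob 0.3
O = np.array([[0.9, 0.1],     # P(observation | true state)
              [0.2, 0.8]])

def update_belief(b, obs):
    predicted = b @ T                  # propagate through the action model
    posterior = predicted * O[:, obs]  # weight by observation likelihood
    return posterior / posterior.sum()

b = np.array([0.5, 0.5])               # uninformed prior belief
b = update_belief(b, obs=0)            # a reassuring observation arrives
print(b)
```

An agent that plans and re-plans, as in the abstract, would recompute its action choice against this updated belief after every observation.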

  6. Modeling of HIV/AIDS dynamic evolution using non-homogeneous semi-markov process.

    Science.gov (United States)

    Dessie, Zelalem Getahun

    2014-01-01

    The purpose of this study is to model the progression of HIV/AIDS disease of an individual patient under ART follow-up using non-homogeneous semi-Markov processes. The model focuses on the patient's age as a relevant factor to forecast the transitions among the different levels of seriousness of the disease. A sample of 1456 patients was taken from a hospital record at Amhara Referral Hospitals, Amhara Region, Ethiopia, who were under ART follow-up from June 2006 to August 2013. The states of disease progression adopted in the model were defined based on the following CD4 cell counts: >500 cells/mm(3) (SI); 349 to 500 cells/mm(3) (SII); 199 to 350 cells/mm(3) (SIII); ≤200 cells/mm(3) (SIV); and death (D). The first four states are referred to as living states. The probability that an HIV/AIDS patient in any one of the living states will transition to the death state is greater with increasing age, irrespective of the current state and age of the patient. More generally, the probability of dying decreases with increasing CD4 counts over time. For an HIV/AIDS patient in a specific state of the disease, the probability of remaining in the same state decreases with increasing age. Within the living states, the results show that the probability of being in a better state is non-zero, but less than the probability of being in a worse state for all ages. A reliability analysis also revealed that the survival probabilities are all declining over time. Computed conditional probabilities show differential subject response that depends on the age of the patient. The dynamic nature of AIDS progression is confirmed with particular findings that patients are more likely to be in a worse state than a better one unless interventions are made. Our findings suggest that ongoing ART treatment services could be provided more effectively with careful consideration of the recent disease status of patients.

  7. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds.

  8. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others:AVČR a CONACyT(CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  9. Markov analysis and Kramers-Moyal expansion of nonstationary stochastic processes with application to the fluctuations in the oil price.

    Science.gov (United States)

    Ghasemi, Fatemeh; Sahimi, Muhammad; Peinke, J; Friedrich, R; Jafari, G Reza; Tabar, M Reza Rahimi

    2007-06-01

    We describe a general method for analyzing a nonstationary stochastic process X(t) which, unlike many of the previous analysis methods, does not require X(t) to have any scaling feature. The method is used to study the fluctuations in the daily price of oil. It is shown that the returns time series, y(t) = ln[X(t+1)/X(t)], is a stationary and Markov process, characterized by a Markov time scale t_M. The coefficients of the Kramers-Moyal expansion for the probability density function P(y, t | y_0, t_0) are computed. P(y, t | y_0, t_0) satisfies a Fokker-Planck equation, which is equivalent to a Langevin equation for y(t) that provides quantitative predictions for the oil price over times that are of the order of t_M. Also studied is the average frequency of positive-slope crossings of a level α, ν_α^+ = P(y_i > α, y_{i−1} < α).
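
    The first two Kramers-Moyal coefficients can be estimated from a time series by binned conditional moments, D₁(y) ≈ ⟨Δy | y⟩/Δt and D₂(y) ≈ ⟨(Δy)² | y⟩/(2Δt). The sketch below applies this to a synthetic Ornstein-Uhlenbeck series whose true coefficients are known, rather than to oil-price data:

```python
import random, math

random.seed(2)

# Estimate Kramers-Moyal coefficients by binned conditional moments:
#   D1(y) ~ <dy | y>/dt,   D2(y) ~ <dy^2 | y>/(2 dt).
# Synthetic data: an Ornstein-Uhlenbeck process dy = -y dt + sqrt(2) dW,
# whose true coefficients are D1(y) = -y and D2(y) = 1.
dt, n = 0.01, 200000
y, ys = 0.0, []
for _ in range(n):
    ys.append(y)
    y += -y * dt + math.sqrt(2 * dt) * random.gauss(0.0, 1.0)

bins = {}
for i in range(n - 1):
    b = round(ys[i])                 # coarse bins centred on integers
    dy = ys[i + 1] - ys[i]
    m = bins.setdefault(b, [0, 0.0, 0.0])
    m[0] += 1; m[1] += dy; m[2] += dy * dy

for b in sorted(bins):
    cnt, s1, s2 = bins[b]
    if cnt > 2000:                   # report only well-populated bins
        D1 = s1 / cnt / dt
        D2 = s2 / cnt / (2 * dt)
        print(b, round(D1, 2), round(D2, 2))
```

The recovered D₁ falls off roughly linearly across bins and D₂ stays near 1, matching the generating equation up to finite-Δt and binning bias.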

  10. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model

    Science.gov (United States)

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-01

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
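
    The core idea, integrating the deterministic state together with the integrated intensity H(t) and jumping when H crosses an Exp(1) threshold, can be sketched on a one-dimensional toy PDMP. The flow, intensity, and reset rule below are invented, and plain Euler stepping stands in for the paper's piecewise-linear fit of the log-survival function:

```python
import random

random.seed(3)

# Minimal PDMP sketch: deterministic flow dV/dt = 1 - V between jumps, with a
# state-dependent jump intensity lambda(V) = V; a jump resets V to 0.  The
# inter-jump time solves H(t) = -log(survival) = integral of lambda along the
# flow, so V and H are integrated together and a jump fires when H crosses an
# Exp(1) threshold.  All rates here are made up for illustration.
def simulate(T, dt=1e-3):
    t, V, H = 0.0, 0.0, 0.0
    threshold = random.expovariate(1.0)
    jumps = []
    while t < T:
        V += (1.0 - V) * dt          # deterministic drift between jumps
        H += V * dt                  # accumulate integrated intensity
        t += dt
        if H >= threshold:           # jump event
            jumps.append(t)
            V, H = 0.0, 0.0
            threshold = random.expovariate(1.0)
    return jumps

jumps = simulate(T=200.0)
rate = len(jumps) / 200.0
print(round(rate, 3))   # long-run jump rate of the toy PDMP
```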

  11. Using semi-Markov processes to study timeliness and tests used in the diagnostic evaluation of suspected breast cancer.

    Science.gov (United States)

    Hubbard, R A; Lange, J; Zhang, Y; Salim, B A; Stroud, J R; Inoue, L Y T

    2016-11-30

    Diagnostic evaluation of suspected breast cancer due to abnormal screening mammography results is common, creates anxiety for women and is costly for the healthcare system. Timely evaluation with minimal use of additional diagnostic testing is key to minimizing anxiety and cost. In this paper, we propose a Bayesian semi-Markov model that allows for flexible, semi-parametric specification of the sojourn time distributions and apply our model to an investigation of the process of diagnostic evaluation with mammography, ultrasound and biopsy following an abnormal screening mammogram. We also investigate risk factors associated with the sojourn time between diagnostic tests. By utilizing semi-Markov processes, we expand on prior work that described the timing of the first test received by providing additional information such as the mean time to resolution and proportion of women with unresolved mammograms after 90 days for women requiring different sequences of tests in order to reach a definitive diagnosis. Overall, we found that older women were more likely to have unresolved positive mammograms after 90 days. Differences in the timing of imaging evaluation and biopsy were generally on the order of days and thus did not represent clinically important differences in diagnostic delay. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Genetic distance for a general non-stationary markov substitution process.

    Science.gov (United States)

    Kaehler, Benjamin D; Yap, Von Bing; Zhang, Rongli; Huttley, Gavin A

    2015-03-01

    The genetic distance between biological sequences is a fundamental quantity in molecular evolution. It pertains to questions of rates of evolution, existence of a molecular clock, and phylogenetic inference. Under the class of continuous-time substitution models, the distance is commonly defined as the expected number of substitutions at any site in the sequence. We eschew the almost ubiquitous assumptions of evolution under stationarity and time-reversible conditions and extend the concept of the expected number of substitutions to nonstationary Markov models where the only remaining constraint is of time homogeneity between nodes in the tree. Our measure of genetic distance reduces to the standard formulation if the data in question are consistent with the stationarity assumption. We apply this general model to samples from across the tree of life to compare distances so obtained with those from the general time-reversible model, with and without rate heterogeneity across sites, and the paralinear distance, an empirical pairwise method explicitly designed to address nonstationarity. We discover that estimates from both variants of the general time-reversible model and the paralinear distance systematically overestimate genetic distance and departure from the molecular clock. The magnitude of the distance bias is proportional to departure from stationarity, which we demonstrate to be associated with longer edge lengths. The marked improvement in consistency between the general nonstationary Markov model and sequence alignments leads us to conclude that analyses of evolutionary rates and phylogenies will be substantively improved by application of this model. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
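
    The paralinear distance used here as the empirical baseline can be computed directly from the joint frequency matrix J of aligned states, via d = −(1/4)[ln det J − ½ ln(det Dx · det Dy)], where Dx and Dy are the diagonal matrices of marginal compositions. A sketch with an invented joint matrix:

```python
import numpy as np

# Paralinear (Lake 1994) distance from a joint nucleotide frequency matrix J
# (rows: states in sequence x, columns: states in sequence y).  Because it
# only uses determinants and marginals, it remains consistent under
# non-stationary Markov substitution.  The matrices below are invented.
def paralinear(J):
    fx = J.sum(axis=1)             # marginal composition of sequence x
    fy = J.sum(axis=0)             # marginal composition of sequence y
    return -0.25 * np.log(np.linalg.det(J)
                          / np.sqrt(np.prod(fx) * np.prod(fy)))

identical = np.diag([0.25, 0.25, 0.25, 0.25])      # perfectly matched sequences
J = 0.9 * identical + 0.1 * np.outer([0.25] * 4, [0.25] * 4)  # 10% scrambled
print(paralinear(identical), round(paralinear(J), 4))
```

Identical sequences give distance zero, and mismatch raises the distance; the abstract's point is that fully nonstationary Markov models remove the residual bias this pairwise measure cannot.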

  13. Generalized Boolean logic Driven Markov Processes: A powerful modeling framework for Model-Based Safety Analysis of dynamic repairable and reconfigurable systems

    International Nuclear Information System (INIS)

    Piriou, Pierre-Yves; Faure, Jean-Marc; Lesage, Jean-Jacques

    2017-01-01

    This paper presents a modeling framework that makes it possible to describe in an integrated manner the structure of the critical system to analyze, by using an enriched fault tree, the dysfunctional behavior of its components, by means of Markov processes, and the reconfiguration strategies that have been planned to ensure safety and availability, with Moore machines. This framework has been developed from BDMP (Boolean logic Driven Markov Processes), a previous framework for dynamic repairable systems. First, the contribution is motivated by pinpointing the limitations of BDMP in modeling complex reconfiguration strategies and the failures of the control of these strategies. The syntax and semantics of GBDMP (Generalized Boolean logic Driven Markov Processes) are then formally defined; in particular, an algorithm to analyze the dynamic behavior of a GBDMP model is developed. The modeling capabilities of this framework are illustrated on three representative examples. Last, qualitative and quantitative analyses of GBDMP models highlight the benefits of the approach.

  14. Effect of Stacking Layup on Spring-back Deformation of Symmetrical Flat Laminate Composites Manufactured through Autoclave Processing

    Science.gov (United States)

    Nasir, M. N. M.; Seman, M. A.; Mezeix, L.; Aminanda, Y.; Rivai, A.; Ali, K. M.

    2017-03-01

    The residual stresses that develop within fibre-reinforced laminate composites during autoclave processing lead to dimensional warpage known as spring-back deformation. A number of experiments have been conducted on flat laminate composites with unidirectional fibre orientation to examine the effects of both the intrinsic and extrinsic parameters on the warpage. This paper extends the study to the symmetrical layup effect on spring-back for flat laminate composites. Plies stacked in various symmetrical sequences were fabricated to observe the severity of the resulting warpage. Essentially, the experimental results demonstrated that the symmetrical layups reduce the laminate stiffness in its principal direction compared to the unidirectional laminate, thus raising the spring-back warpage, with the exception of the [45/-45]S layup due to its quasi-isotropic property.

  15. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.

  16. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters...

  17. Markov counting and reward processes for analysing the performance of a complex system subject to random inspections

    International Nuclear Information System (INIS)

    Ruiz-Castro, Juan Eloy

    2016-01-01

    In this paper, a discrete complex reliability system subject to internal failures and external shocks is modelled algorithmically. Two types of internal failure are considered: repairable and non-repairable. When a repairable failure occurs, the unit goes to corrective repair. In addition, the unit is subject to external shocks that may produce an aggravation of the internal degradation level, cumulative damage or extreme failure. When a damage threshold is reached, the unit must be removed. When a non-repairable failure occurs, the device is replaced by a new, identical one. The internal performance and the external damage are partitioned in performance levels. Random inspections are carried out. When an inspection takes place, the internal performance of the system and the damage caused by external shocks are observed and if necessary the unit is sent to preventive maintenance. If the inspection observes a minor state for the internal performance and/or external damage, then these states remain in memory when the unit goes to corrective or preventive maintenance. Transient and stationary analyses are performed. Markov counting and reward processes are developed in computational form to analyse the performance and profitability of the system with and without preventive maintenance. These aspects are implemented computationally with Matlab. - Highlights: • A multi-state device is modelled in an algorithmic and computational form. • The performance is partitioned in multi-states and degradation levels. • Several types of failures with repair times according to degradation levels. • Preventive maintenance as a response to random inspection is introduced. • The performance and profitability are analysed through Markov counting and reward processes.

  18. A Markov decision process for managing habitat for Florida scrub-jays

    Science.gov (United States)

    Johnson, Fred A.; Breininger, David R.; Duncan, Brean W.; Nichols, James D.; Runge, Michael C.; Williams, B. Ken

    2011-01-01

    Florida scrub-jays Aphelocoma coerulescens are listed as threatened under the Endangered Species Act due to loss and degradation of scrub habitat. This study concerned the development of an optimal strategy for the restoration and management of scrub habitat at Merritt Island National Wildlife Refuge, which contains one of the few remaining large populations of scrub-jays in Florida. There are documented differences in the reproductive and survival rates of scrub-jays among discrete classes of scrub height. Markov models were used to estimate annual transition probabilities among the four scrub-height classes under three possible management actions: scrub restoration (mechanical cutting followed by burning), a prescribed burn, or no intervention. A strategy prescribing the optimal management action for management units exhibiting different proportions of scrub-height classes was derived using dynamic programming. Scrub restoration was the optimal management action only in units dominated by mixed and tall scrub, and burning tended to be the optimal action for intermediate levels of short scrub. The optimal action was to do nothing when the amount of short scrub was greater than 30%, because short scrub mostly transitions to optimal height scrub (i.e., that state with the highest demographic success of scrub-jays) in the absence of intervention. Monte Carlo simulation of the optimal policy suggested that some form of management would be required every year. We note, however, that estimates of scrub-height transition probabilities were subject to several sources of uncertainty, and so we explored the management implications of alternative sets of transition probabilities. Generally, our analysis demonstrated the difficulty of managing for a species that requires midsuccessional habitat, and suggests that innovative management tools may be needed to help ensure the persistence of scrub-jays at Merritt Island National Wildlife Refuge. The development of a tailored monitoring
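
    The dynamic-programming step can be sketched as discounted value iteration over the four scrub-height states and three actions. Every transition probability, reward, and cost below is an invented placeholder, not the paper's estimated values:

```python
import numpy as np

# Toy infinite-horizon MDP in the spirit of the abstract: four scrub-height
# states, three management actions, solved by value iteration.
STATES = ["short", "optimal", "mixed", "tall"]
P = {   # P[action][i][j] = transition probability from state i to j (assumed)
    "nothing": np.array([[0.3, 0.6, 0.1, 0.0],
                         [0.0, 0.6, 0.3, 0.1],
                         [0.0, 0.0, 0.6, 0.4],
                         [0.0, 0.0, 0.0, 1.0]]),
    "burn":    np.array([[0.9, 0.1, 0.0, 0.0],
                         [0.5, 0.5, 0.0, 0.0],
                         [0.4, 0.4, 0.2, 0.0],
                         [0.2, 0.3, 0.3, 0.2]]),
    "restore": np.array([[0.95, 0.05, 0.0, 0.0],
                         [0.90, 0.10, 0.0, 0.0],
                         [0.85, 0.15, 0.0, 0.0],
                         [0.80, 0.20, 0.0, 0.0]]),
}
reward = np.array([0.4, 1.0, 0.3, 0.1])            # habitat quality per state
cost = {"nothing": 0.0, "burn": 0.1, "restore": 0.3}
gamma = 0.95

V = np.zeros(4)
for _ in range(500):                               # value iteration to convergence
    V = np.max([reward - cost[a] + gamma * P[a] @ V for a in P], axis=0)

policy = [max(P, key=lambda a: (reward - cost[a] + gamma * P[a] @ V)[i])
          for i in range(4)]
print(dict(zip(STATES, policy)))
```

Even with made-up numbers the qualitative structure of the paper's result appears: intervention is worthwhile for the late-successional states, while short scrub is best left alone.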

  19. Lifetime effectiveness of mifamurtide addition to chemotherapy in nonmetastatic and metastatic osteosarcoma: a Markov process model analysis.

    Science.gov (United States)

    Song, Hyun Jin; Lee, Jun Ah; Han, Euna; Lee, Eui-Kyung

    2015-09-01

    The mortality and progression rates in osteosarcoma differ depending on the presence of metastasis. A decision model would be useful for estimating long-term effectiveness of treatment with limited clinical trial data. The aim of this study was to explore the lifetime effectiveness of the addition of mifamurtide to chemotherapy for patients with metastatic and nonmetastatic osteosarcoma. The target population was osteosarcoma patients with or without metastasis. A Markov process model was used, whose time horizon was lifetime with a starting age of 13 years. There were five health states: disease-free (DF), recurrence, post-recurrence disease-free, post-recurrence disease-progression, and death. Transition probabilities of the starting state, DF, were calculated from the INT-0133 clinical trials for chemotherapy with and without mifamurtide. Quality-adjusted life-years (QALY) increased upon addition of mifamurtide to chemotherapy by 10.5 % (10.13 and 9.17 QALY with and without mifamurtide, respectively) and 45.2 % (7.23 and 4.98 QALY with and without mifamurtide, respectively) relative to the lifetime effectiveness of chemotherapy in nonmetastatic and metastatic osteosarcoma, respectively. Life-years gained (LYG) increased by 10.1 % (13.10 LYG with mifamurtide and 11.90 LYG without mifamurtide) in nonmetastatic patients and 42.2 % (9.43 LYG with mifamurtide and 6.63 LYG without mifamurtide) in metastatic osteosarcoma patients. The Markov model analysis showed that chemotherapy with mifamurtide improved the lifetime effectiveness compared to chemotherapy alone in both nonmetastatic and metastatic osteosarcoma. Relative effectiveness of the therapy was higher in metastatic than nonmetastatic osteosarcoma over lifetime. However, absolute lifetime effectiveness was higher in nonmetastatic than metastatic osteosarcoma.
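
    The five-state structure described here is a standard Markov cohort model: each cycle, a distribution over health states is pushed through the transition matrix while discounted utilities accumulate into QALYs. The sketch below uses invented transition probabilities and utilities, not the INT-0133 estimates:

```python
import numpy as np

# Markov cohort model sketch: annual cycles over five health states,
# accumulating discounted quality-adjusted life-years (QALYs).
STATES = ["DF", "recurrence", "post_recurrence_DF", "progression", "death"]
P = np.array([                       # illustrative annual transition matrix
    [0.92, 0.05, 0.00, 0.00, 0.03],
    [0.00, 0.20, 0.55, 0.15, 0.10],
    [0.00, 0.00, 0.90, 0.06, 0.04],
    [0.00, 0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
utility = np.array([0.95, 0.70, 0.90, 0.50, 0.00])  # QALY weight per state
discount = 0.03

cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # everyone starts disease-free
qaly = 0.0
for year in range(60):                              # roughly a lifetime horizon
    qaly += (cohort @ utility) / (1 + discount) ** year
    cohort = cohort @ P
print(round(qaly, 2))   # expected discounted QALYs per patient
```

A comparison like the paper's would run this trace twice, once per treatment arm's transition matrix, and difference the QALY totals.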

  20. Using model-based proposals for fast parameter inference on discrete state space, continuous-time Markov processes.

    Science.gov (United States)

    Pooley, C M; Bishop, S C; Marion, G

    2015-06-06

    Bayesian statistics provides a framework for the integration of dynamic models with incomplete data to enable inference of model parameters and unobserved aspects of the system under study. An important class of dynamic models is discrete state space, continuous-time Markov processes (DCTMPs). Simulated via the Doob-Gillespie algorithm, these have been used to model systems ranging from chemistry to ecology to epidemiology. A new type of proposal, termed 'model-based proposal' (MBP), is developed for the efficient implementation of Bayesian inference in DCTMPs using Markov chain Monte Carlo (MCMC). This new method, which in principle can be applied to any DCTMP, is compared (using simple epidemiological SIS and SIR models as easy-to-follow exemplars) to a standard MCMC approach and a recently proposed particle MCMC (PMCMC) technique. When measurements are made on a single-state variable (e.g. the number of infected individuals in a population during an epidemic), model-based proposal MCMC (MBP-MCMC) is marginally faster than PMCMC (by a factor of 2-8 for the tests performed), and significantly faster than the standard MCMC scheme (by a factor of 400 at least). However, when model complexity increases and measurements are made on more than one state variable (e.g. simultaneously on the number of infected individuals in spatially separated subpopulations), MBP-MCMC is significantly faster than PMCMC (more than 100-fold for just four subpopulations) and this difference becomes increasingly large. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
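
    The DCTMPs targeted here are simulated with the Doob-Gillespie algorithm: draw an exponential waiting time from the total event rate, then choose which event fires in proportion to its rate. A sketch for the SIS exemplar with illustrative parameters:

```python
import random

random.seed(4)

# Doob-Gillespie simulation of the SIS epidemic, a discrete state space,
# continuous-time Markov process (DCTMP).  Parameters are illustrative.
def gillespie_sis(N=100, I0=10, beta=0.3, gamma=0.1, T=200.0):
    t, I = 0.0, I0
    while t < T and I > 0:
        rate_inf = beta * I * (N - I) / N   # infection rate
        rate_rec = gamma * I                # recovery rate
        total = rate_inf + rate_rec
        t += random.expovariate(total)      # exponential waiting time
        if t >= T:
            break
        if random.random() < rate_inf / total:
            I += 1                          # infection event
        else:
            I -= 1                          # recovery event
    return I

final = gillespie_sis()
print(final)   # endemic level fluctuates around N * (1 - gamma/beta)
```

Inference schemes like MBP-MCMC or PMCMC treat trajectories produced this way as the latent process underlying the observed case counts.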

  1. A Monte Carlo approach to the ship-centric Markov decision process for analyzing decisions over converting a containership to LNG power

    NARCIS (Netherlands)

    Kana, A.A.; Harrison, B.M.

    2017-01-01

    A Monte Carlo approach to the ship-centric Markov decision process (SC-MDP) is presented for analyzing whether a container ship should convert to LNG power in the face of evolving Emission Control Area regulations. The SC-MDP model was originally developed as a means to analyze uncertain,

  2. On structural properties of the value function for an unbounded jump Markov process with an application to a processor-sharing retrial queue

    NARCIS (Netherlands)

    Bhulai, S.; Brooms, A.C.; Spieksma, F.M.

    2014-01-01

    The derivation of structural properties for unbounded jump Markov processes cannot be done using standard mathematical tools, since the analysis is hindered due to the fact that the system is not uniformizable. We present a promising technique, a smoothed rate truncation method, to overcome the

  3. The Discovery of Processing Stages: Analyzing EEG data with Hidden Semi-Markov Models

    NARCIS (Netherlands)

    Borst, Jelmer; Anderson, John R.

    2015-01-01

    In this paper we propose a new method for identifying processing stages in human information processing. Since the 1860s scientists have used different methods to identify processing stages, usually based on reaction time (RT) differences between conditions. To overcome the limitations of RT-based

  4. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of the involved parameters. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on the corresponding time-inhomogeneous Gauss-Markov diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the stochastic modeling process, taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two non-singular second-kind Volterra integral equations via numerical quadrature. We provide numerical estimates of the mean FET as approximations of the dwell time of the protein dynamics. The percentage of backward steps is given in agreement with experimental data. Numerical and simulation results are compared and discussed.
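
    The simulation side of this approach can be sketched with an Ornstein-Uhlenbeck process (a standard Gauss-Markov example) discretized by the Euler scheme, estimating the mean first exit time from a strip by Monte Carlo. All parameter values are hypothetical stand-ins, not the fitted acto-myosin values.

```python
import math
import random

def first_exit_time(x0=0.0, lower=-1.0, upper=1.0, theta=1.0, mu=0.0,
                    sigma=1.0, dt=1e-3, t_max=20.0, seed=0):
    """Euler scheme for the Ornstein-Uhlenbeck (Gauss-Markov) process
    dX = theta*(mu - X) dt + sigma dW; returns the first time X leaves
    the strip (lower, upper), or None if it has not left by t_max."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_max:
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if x <= lower or x >= upper:
            return t
    return None

# Monte Carlo estimate of the mean first exit time (FET).
fets = [first_exit_time(seed=s) for s in range(100)]
fets = [t for t in fets if t is not None]
mean_fet = sum(fets) / len(fets)
```

    The paper's Volterra integral-equation route computes the exit-time pdf directly; the simulation above is the crude cross-check one would compare it against.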

  5. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    Science.gov (United States)

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  6. Markov chains theory and applications

    CERN Document Server

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest. The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the
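
    As a small concrete instance of the numerical evaluation the abstract alludes to, the stationary distribution of a discrete-time homogeneous chain (a hypothetical three-state example here) is the left eigenvector of the transition matrix for eigenvalue 1:

```python
import numpy as np

# A hypothetical three-state discrete-time homogeneous Markov chain.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

def stationary(P):
    """Stationary distribution pi with pi P = pi, taken as the left
    eigenvector of P associated with eigenvalue 1, normalized to sum 1."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

pi = stationary(P)
```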

  7. Regeneration and general Markov chains

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kalashnikov

    1994-01-01

    Full Text Available Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide-sense regenerative process with the help of a splitting construction.

  8. Transforming unstructured natural language descriptions into measurable process performance indicators using Hidden Markov Models

    NARCIS (Netherlands)

    van der Aa, Han; Leopold, Henrik; del-Río-Ortega, Adela; Resinas, Manuel; Reijers, Hajo A.

    2017-01-01

    Monitoring process performance is an important means for organizations to identify opportunities to improve their operations. The definition of suitable Process Performance Indicators (PPIs) is a crucial task in this regard. Because PPIs need to be in line with strategic business objectives, the

  9. Time-homogeneous Markov process for HIV/AIDS progression under a combination treatment therapy: cohort study, South Africa.

    Science.gov (United States)

    Shoko, Claris; Chikobvu, Delson

    2018-01-18

    As HIV enters the human body, its main target is the CD4 cell, which it turns into a factory that produces millions of other HIV particles. These HIV particles target new CD4 cells, resulting in the progression of HIV infection to AIDS. A continuous depletion of CD4 cells results in opportunistic infections, for example tuberculosis (TB). The purpose of this study is to model and describe the progression of HIV/AIDS disease in an individual on antiretroviral therapy (ART) follow-up using a continuous-time homogeneous Markov process. A cohort of 319 HIV-infected patients on ART follow-up at a Wellness Clinic in Bela Bela, South Africa is used in this study. Though Markov models based on CD4 cell counts are a common approach in HIV/AIDS modelling, this paper is clinically unique in that tuberculosis (TB) co-infection is included as a covariate. The method partitions the HIV infection period into five CD4-cell count intervals followed by two end points: death and withdrawal from the study. The effectiveness of treatment is analysed by comparing the forward transitions with the backward transitions. The effects of reaction to treatment, TB co-infection, gender and age on the transition rates are also examined. The developed models give a very good fit to the data. The results show that the strongest predictor of transition from a state of CD4 cell count greater than 750 to a state of CD4 between 500 and 750 is a negative reaction to drug therapy. Development of TB during the course of treatment is the greatest predictor of transitions to states of lower CD4 cell count. Transitions from good states to bad states are higher in male patients than in their female counterparts. Patients in the cohort spend a greater proportion of their total follow-up time in higher CD4 states.
From some of these findings we conclude that there is need to monitor adverse reaction to drugs more frequently, screen HIV/AIDS patients for any signs and symptoms of TB and check for factors that may explain
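
    In a continuous-time homogeneous Markov model of this kind, the fitted quantity is a generator (rate) matrix Q, and the transition probabilities over an interval t follow as P(t) = exp(Qt). The sketch below uses a hypothetical generator over three ordered CD4 states plus an absorbing death state, not the fitted Bela Bela rates, and evaluates the matrix exponential by a truncated Taylor series (adequate for small rates).

```python
import numpy as np

# Hypothetical generator matrix Q: off-diagonal entries are transition
# rates between CD4 states (illustrative values); rows sum to zero.
Q = np.array([
    [-0.30,  0.25,  0.00,  0.05],
    [ 0.10, -0.40,  0.25,  0.05],
    [ 0.00,  0.20, -0.35,  0.15],
    [ 0.00,  0.00,  0.00,  0.00],   # death is absorbing
])

def transition_matrix(Q, t, terms=60):
    """P(t) = expm(Q t) via a truncated Taylor series."""
    n = Q.shape[0]
    P = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (Q * t) / k
        P = P + term
    return P

P1 = transition_matrix(Q, t=1.0)   # one-year transition probabilities
```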

  10. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  11. Markov chains

    CERN Document Server

    Revuz, D

    1984-01-01

    This is the revised and augmented edition of a now classic book which is an introduction to sub-Markovian kernels on general measurable spaces and their associated homogeneous Markov chains. The first part, an expository text on the foundations of the subject, is intended for post-graduate students. A study of potential theory, the basic classification of chains according to their asymptotic behaviour and the celebrated Chacon-Ornstein theorem are examined in detail. The second part of the book is at a more advanced level and includes a treatment of random walks on general locally compact abelian groups. Further chapters develop renewal theory, an introduction to Martin boundary and the study of chains recurrent in the Harris sense. Finally, the last chapter deals with the construction of chains starting from a kernel satisfying some kind of maximum principle.

  12. Partially Observable Markov Decision Process-Based Transmission Policy over Ka-Band Channels for Space Information Networks

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2017-09-01

    Full Text Available The Ka-band and higher Q/V band channels can provide an appealing capacity for future deep-space communications and Space Information Networks (SIN), which are viewed as a primary solution to satisfy the increasing demands for high data rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distance and long propagation delay in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation at Ka-band channels is modeled as a two-state Gilbert–Elliott channel, to capture a channel capacity that ranges randomly between good and bad states. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDP) is proposed, and the key thresholds for selecting the optimal transmission method in the SIN communications are derived. Simulation results show that our proposed scheme can effectively improve the throughput.
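
    The two-state Gilbert-Elliott channel underlying this model can be sketched directly: the good/bad state evolves as a Markov chain, and each slot is lost with the current state's loss probability. The transition and loss probabilities below are hypothetical, not fitted Ka-band rain statistics.

```python
import random

def simulate_gilbert_elliott(p_gb, p_bg, loss_good, loss_bad, n, seed=7):
    """Two-state Gilbert-Elliott channel: the good/bad state forms a
    Markov chain (p_gb: good->bad, p_bg: bad->good); each slot drops
    a packet with the current state's loss probability."""
    rng = random.Random(seed)
    state = "good"
    losses = 0
    for _ in range(n):
        if state == "good":
            if rng.random() < p_gb:
                state = "bad"
        elif rng.random() < p_bg:
            state = "good"
        p_loss = loss_bad if state == "bad" else loss_good
        if rng.random() < p_loss:
            losses += 1
    return losses / n

rate = simulate_gilbert_elliott(p_gb=0.05, p_bg=0.3, loss_good=0.01,
                                loss_bad=0.4, n=20000)
```

    A POMDP transmission policy would maintain a belief over this hidden state from delayed loss feedback and threshold on that belief.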

  13. A multi-level hierarchic Markov process with Bayesian updating for herd optimization and simulation in dairy cattle.

    Science.gov (United States)

    Demeter, R M; Kristensen, A R; Dijkstra, J; Oude Lansink, A G J M; Meuwissen, M P M; van Arendonk, J A M

    2011-12-01

    Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines economically optimal insemination and replacement decisions for individual cows and simulates whole-herd results that follow from optimal decisions. The optimization problem was formulated as a multi-level hierarchic Markov process, and a state space model with Bayesian updating was applied to model variation in milk yield. Methodological developments were incorporated in 2 main aspects. First, we introduced an additional level to the model hierarchy to obtain a more tractable and efficient structure. Second, we included a recently developed cattle feed intake model. In addition to methodological developments, new parameters were used in the state space model and other biological functions. Results were generated for Dutch farming conditions, and outcomes were in line with actual herd performance in the Netherlands. Optimal culling decisions were sensitive to variation in milk yield but insensitive to energy requirements for maintenance and feed intake capacity. We anticipate that the model will be applied in research and extension. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior.

    Science.gov (United States)

    Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images by the magnetic resonance (MR) multimodal features and obtain the active tumor and edema in the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrates very impressive performance and has a great potential for practical real-time clinical use.

  15. Semi-Markov graph dynamics.

    Directory of Open Access Journals (Sweden)

    Marco Raberto

    Full Text Available In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs.
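
    The subordination construction can be sketched in a few lines: a Markov chain over "graph configurations" (abstracted to integer states here) makes its transitions only at the epochs of a renewal counting process. Exponential waiting times are used below for simplicity; the transition matrix is a hypothetical example, and heavier-tailed waiting-time laws give genuinely semi-Markov dynamics.

```python
import random

def subordinated_chain(P, t_max, mean_wait=1.0, seed=3):
    """Subordinate a Markov chain (transition matrix P) to a renewal
    counting process: transitions occur at epochs separated by i.i.d.
    waiting times (exponential here)."""
    rng = random.Random(seed)
    state, t = 0, 0.0
    path = [(t, state)]
    while True:
        t += rng.expovariate(1.0 / mean_wait)   # waiting time to next epoch
        if t > t_max:
            break
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):        # sample the next state
            acc += p
            if u < acc:
                state = j
                break
        path.append((t, state))
    return path

# Hypothetical 3-state transition matrix standing in for graph dynamics.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
path = subordinated_chain(P, t_max=50.0)
```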

  16. Entropy: The Markov Ordering Approach

    Directory of Open Access Journals (Sweden)

    Alexander N. Gorban

    2010-05-01

    Full Text Available The focus of this article is on entropy and Markov processes. We study the properties of functionals which are invariant with respect to monotonic transformations and analyze two invariant “additivity” properties: (i) existence of a monotonic transformation which makes the functional additive with respect to the joining of independent systems and (ii) existence of a monotonic transformation which makes the functional additive with respect to the partitioning of the space of states. All Lyapunov functionals for Markov chains which have properties (i) and (ii) are derived. We describe the most general ordering of the distribution space, with respect to which all continuous-time Markov processes are monotonic (the Markov order). The solution differs significantly from the ordering given by the inequality of entropy growth. For inference, this approach results in a convex compact set of conditionally “most random” distributions.
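
    The best-known Lyapunov functional of the kind discussed above is relative entropy to the stationary distribution, which is non-increasing along the chain's evolution. A minimal numerical check on a hypothetical two-state chain:

```python
import math
import numpy as np

# A two-state chain with stationary distribution pi = (2/3, 1/3).
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
pi = np.array([2 / 3, 1 / 3])

def kl(p, q):
    """Relative entropy D(p||q), a Lyapunov functional for the chain."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

# D(p_t || pi) is non-increasing as p_{t+1} = p_t P.
p = np.array([0.95, 0.05])
divs = []
for _ in range(10):
    divs.append(kl(p, pi))
    p = p @ P
```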

  17. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    the individual in almost any thinkable way. This project focuses on measuring the eects on sleep in both humans and animals. The sleep process is usually analyzed by categorizing small time segments into a number of sleep states and this can be modelled using a Markov process. For this purpose new methods...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...

  18. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.
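
    The likelihood that any such MCMC scheme evaluates at each step comes from the HMM forward recursion. A minimal sketch with hypothetical two-state parameters:

```python
import numpy as np

# Hypothetical HMM parameters: a Markov chain as mixing distribution.
A = np.array([[0.9, 0.1],      # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],      # emission probabilities P(obs | state)
              [0.3, 0.7]])
init = np.array([0.5, 0.5])    # initial state distribution

def hmm_likelihood(obs):
    """Forward algorithm: P(obs sequence) marginalized over state paths."""
    alpha = init * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

L = hmm_likelihood([0, 0, 1, 1, 0])
```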

  19. Unsupervised parsing of gaze data with a beta-process vector auto-regressive hidden Markov model.

    Science.gov (United States)

    Houpt, Joseph W; Frame, Mary E; Blaha, Leslie M

    2017-10-26

    The first stage of analyzing eye-tracking data is commonly to code the data into sequences of fixations and saccades. This process is usually automated using simple, predetermined rules for classifying ranges of the time series into events, such as "if the dispersion of gaze samples is lower than a particular threshold, then code as a fixation; otherwise code as a saccade." More recent approaches incorporate additional eye-movement categories in automated parsing algorithms by using time-varying, data-driven thresholds. We describe an alternative approach using the beta-process vector auto-regressive hidden Markov model (BP-AR-HMM). The BP-AR-HMM offers two main advantages over existing frameworks. First, it provides a statistical model for eye-movement classification rather than a single estimate. Second, the BP-AR-HMM uses a latent process to model the number and nature of the types of eye movements and hence is not constrained to predetermined categories. We applied the BP-AR-HMM both to high-sampling-rate gaze data from Andersson et al. (Behavior Research Methods, 49(2), 1-22, 2016) and to low-sampling-rate data from the DIEM project (Mital et al., Cognitive Computation, 3(1), 5-24, 2011). Driven by the data properties, the BP-AR-HMM identified more than five categories of movements, some of which clearly mapped onto fixations and saccades, while others potentially captured post-saccadic oscillations, smooth pursuit, and various recording errors. The BP-AR-HMM serves as an effective algorithm for data-driven event parsing alone or as an initial step in exploring the characteristics of gaze data sets.

  20. Distribution of chirality in the quantum walk: Markov process and entanglement

    International Nuclear Information System (INIS)

    Romanelli, Alejandro

    2010-01-01

    The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
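
    The chirality distribution in question can be computed numerically for the standard Hadamard walk on the line: apply the coin to the chirality index, shift left/right components in opposite directions, and sum probabilities over position. The localized initial condition below is one illustrative choice.

```python
import numpy as np

def hadamard_walk_chirality(steps):
    """Discrete-time quantum walk on the line with a Hadamard coin.
    psi[x, c] is the amplitude at position x with chirality c (0=L, 1=R).
    Returns the chirality distribution (P_L, P_R) summed over position."""
    n_sites = 2 * steps + 1
    psi = np.zeros((n_sites, 2), dtype=complex)
    psi[n_sites // 2, 0] = 1.0                 # start localized, chirality L
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                        # coin acts on chirality
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]               # L component moves left
        new[1:, 1] = psi[:-1, 1]               # R component moves right
        psi = new
    probs = np.abs(psi) ** 2
    return probs[:, 0].sum(), probs[:, 1].sum()

pL, pR = hadamard_walk_chirality(50)
```

    Tracking (pL, pR) over increasing step counts exhibits the convergence to a stationary chirality distribution that the paper establishes analytically.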

  1. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    Science.gov (United States)

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of their many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors, influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
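
    The MCMC machinery behind such posterior estimation can be sketched with a toy version of the problem: a single "kinetic" parameter, a first-order decay model standing in for ASM1, and a random-walk Metropolis sampler. The model, prior, and noise level are all illustrative assumptions.

```python
import math
import random

# Toy data: noisy observations of y(t) = exp(-k t), a stand-in for a
# kinetic sub-model; k_true and the noise level are illustrative.
rng = random.Random(0)
k_true = 0.7
ts = [0.5 * i for i in range(10)]
ys = [math.exp(-k_true * t) + rng.gauss(0, 0.05) for t in ts]

def log_post(k, sigma=0.05):
    """Log-posterior: Gaussian likelihood, uniform prior on (0, 5]."""
    if k <= 0 or k > 5:
        return -math.inf
    sse = sum((y - math.exp(-k * t)) ** 2 for t, y in zip(ts, ys))
    return -sse / (2 * sigma ** 2)

def metropolis(n=5000, step=0.1, k0=1.0):
    """Random-walk Metropolis sampler for the posterior of k."""
    k, lp = k0, log_post(k0)
    samples = []
    for _ in range(n):
        prop = k + rng.gauss(0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            k, lp = prop, lp_prop
        samples.append(k)
    return samples

samples = metropolis()
k_hat = sum(samples[1000:]) / len(samples[1000:])   # posterior mean, burn-in 1000
```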

  2. Symmetric textures

    International Nuclear Information System (INIS)

    Ramond, P.

    1993-01-01

    The Wolfenstein parametrization is extended to the quark masses in the deep ultraviolet, and an algorithm to derive symmetric textures which are compatible with existing data is developed. It is found that there are only five such textures

  3. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    Science.gov (United States)

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit at a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
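
    The exponential transformation at the heart of the LMDP framework can be shown on a tiny first-exit problem: with desirability z = exp(-v), the Bellman equation becomes linear in z and can be solved by simple iteration. The chain, costs, and passive dynamics below are hypothetical, following the discrete formulation of Todorov (2009).

```python
import numpy as np

# First-exit LMDP on a chain of states 0..4; state 4 is terminal.
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])   # state costs
P = np.array([                             # passive (uncontrolled) dynamics
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.25, 0.5, 0.25, 0.0, 0.0],
    [0.0, 0.25, 0.5, 0.25, 0.0],
    [0.0, 0.0, 0.25, 0.5, 0.25],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

def lmdp_desirability(q, P, terminal, iters=2000):
    """Desirability z = exp(-v) solves the LINEAR fixed point
    z_i = exp(-q_i) * sum_j P_ij z_j on non-terminal states."""
    z = np.ones(len(q))
    for _ in range(iters):
        z = np.exp(-q) * (P @ z)
        z[terminal] = np.exp(-q[terminal])   # boundary condition
    return z

z = lmdp_desirability(q, P, terminal=[4])
v = -np.log(z)                               # optimal value function
```

    The optimal controlled transition probabilities follow as P*_ij proportional to P_ij z_j, which is the policy the robot experiments derive from a learned model of P.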

  4. The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-07-01

    Full Text Available Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.

  5. The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.

    Science.gov (United States)

    Zilli, Eric A; Hasselmo, Michael E

    2008-07-23

    Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.

  6. Dynamic chromatin accessibility modeled by Markov process of randomly-moving molecules in the 3D genome.

    Science.gov (United States)

    Wang, Yinan; Fan, Caoqi; Zheng, Yuxuan; Li, Cheng

    2017-06-02

    Chromatin three-dimensional (3D) structure plays critical roles in gene expression regulation by influencing locus interactions and accessibility of chromatin regions. Here we propose a Markov process model to derive a chromosomal equilibrium distribution of randomly-moving molecules as a functional consequence of spatially organized genome 3D structures. The model calculates steady-state distributions (SSD) from Hi-C data as quantitative measures of each chromatin region's dynamic accessibility for transcription factors and histone modification enzymes. Different from other Hi-C derived features such as compartment A/B and interaction hubs, or traditional methods measuring chromatin accessibility such as DNase-seq and FAIRE-seq, SSD considers both chromatin-chromatin and protein-chromatin interactions. Through our model, we find that SSD could capture the chromosomal equilibrium distributions of activation histone modifications and transcription factors. Compared with compartment A/B, SSD has higher correlations with the binding of these histone modifications and transcription factors. In addition, we find that genes located in high SSD regions tend to be expressed at higher level. Furthermore, we track the change of genome organization during stem cell differentiation, and propose a two-stage model to explain the dynamic change of SSD and gene expression during differentiation, where chromatin organization genes first gain chromatin accessibility and are expressed before lineage-specific genes do. We conclude that SSD is a novel and better measure of dynamic chromatin activity and accessibility. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
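
    The core computation behind the SSD measure can be sketched as a random walk over chromatin bins whose transition probabilities are proportional to contact counts; the steady state is then found by power iteration. The symmetric contact matrix below is a made-up toy, not real Hi-C data.

```python
import numpy as np

# Hypothetical symmetric Hi-C-style contact matrix for 5 chromatin bins.
C = np.array([
    [0, 9, 3, 1, 1],
    [9, 0, 6, 2, 1],
    [3, 6, 0, 8, 2],
    [1, 2, 8, 0, 7],
    [1, 1, 2, 7, 0],
], dtype=float)

# A randomly-moving molecule hops between bins with probability
# proportional to their contact frequency.
P = C / C.sum(axis=1, keepdims=True)

def steady_state(P, iters=500):
    """Power iteration for the steady-state distribution (SSD)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

ssd = steady_state(P)
```

    For a symmetric contact matrix this walk is reversible, so the SSD is proportional to each bin's total contact count; bins with high SSD are the ones the paper associates with accessible, actively marked chromatin.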

  7. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at 2 and 6 months of age using hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  8. Approximate quantum Markov chains

    CERN Document Server

    Sutter, David

    2018-01-01

    This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in understanding the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result, we start by explaining two techniques for dealing with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques, a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...

  9. Markov Trends in Macroeconomic Time Series

    OpenAIRE

    Paap, Richard

    1997-01-01

    textabstractMany macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the value of an unobserved two-state first-order Markov process. The two slopes of the Markov trend describe the growth rates in the two phases of the business cycle. This thesis deals with a Bayesian ...
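The Markov trend described above can be sketched in a few lines: an unobserved two-state chain selects the slope of a segmented trend. All parameters below are invented for illustration, not estimates from any macroeconomic series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regime parameters: expansions grow at +0.8 and are
# persistent (stay prob. 0.95); recessions shrink at -0.5 and are shorter.
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])      # transition matrix of the hidden 2-state chain
slopes = np.array([0.8, -0.5])    # growth rate in each business-cycle phase

T = 200
state = 0
y = [0.0]
for _ in range(T):
    state = rng.choice(2, p=P[state])                        # hidden Markov state
    y.append(y[-1] + slopes[state] + rng.normal(scale=0.2))  # segmented trend + noise

print(len(y))  # 201 observations of the simulated series
```

The simulated path shows the long expansions and short recessions the abstract describes; inferring the hidden states back from `y` is what the Bayesian analysis in the thesis addresses.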

  10. Spectral methods for quantum Markov chains

    International Nuclear Information System (INIS)

    Szehr, Oleg

    2014-01-01

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  11. On Markov Earth Mover's Distance.

    Science.gov (United States)

    Wei, Jie

    2014-10-01

    In statistics, pattern recognition and signal processing, it is of utmost importance to have an effective and efficient distance to measure the similarity between two distributions and sequences. In statistics this is referred to as the goodness-of-fit problem. Two leading goodness-of-fit methods are the chi-square and Kolmogorov-Smirnov distances. The strictly localized nature of these two measures hinders their practical utility in patterns and signals where the sample size is usually small. In view of this problem, Rubner and colleagues developed the earth mover's distance (EMD) to allow for cross-bin moves in evaluating the distance between two patterns, which has found a broad spectrum of applications. EMD-L1 was later proposed to reduce the time complexity of EMD from super-cubic by one order of magnitude by exploiting the special L1 metric. EMD-hat was developed to turn the global EMD into a localized one by discarding long-distance earth movements. In this work, we introduce a Markov EMD (MEMD) by treating the source and destination nodes absolutely symmetrically. In MEMD, like EMD-hat, the earth is only moved locally as dictated by the degree d of the neighborhood system. Nodes that cannot be matched locally are handled by dummy source and destination nodes. By use of this localized network structure, a greedy algorithm that is linear in the degree d and the number of nodes is then developed to evaluate the MEMD. Empirical studies on the use of MEMD on deterministic and statistical synthetic sequences and SIFT-based image retrieval suggest encouraging performance.
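As context for the cross-bin moves discussed above: in the special one-dimensional case with unit ground distance, EMD reduces to the summed absolute difference of the cumulative histograms. This is a textbook identity, not the MEMD algorithm of the abstract:

```python
def emd_1d(p, q):
    """EMD between two 1-D histograms with unit ground distance:
    equals the summed absolute difference of their running CDFs."""
    assert abs(sum(p) - sum(q)) < 1e-9, "total masses must match"
    total, cdf_gap = 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_gap += pi - qi     # running surplus of earth to carry rightward
        total += abs(cdf_gap)  # cost of carrying it across one bin
    return total

# Moving all mass one bin to the right costs exactly 1 unit of work:
print(emd_1d([1, 0, 0], [0, 1, 0]))  # 1.0
```

The chi-square and Kolmogorov-Smirnov distances compare bins in place; the `cdf_gap` term is precisely what lets mass flow between neighboring bins.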

  12. Multi-rate Poisson tree processes for single-locus species delimitation under maximum likelihood and Markov chain Monte Carlo.

    Science.gov (United States)

    Kapli, P; Lutteropp, S; Zhang, J; Kobert, K; Pavlidis, P; Stamatakis, A; Flouri, T

    2017-06-01

    In recent years, molecular species delimitation has become a routine approach for quantifying and classifying biodiversity. Barcoding methods are of particular importance in large-scale surveys as they promote fast species discovery and biodiversity estimates. Among those, distance-based methods are the most common choice as they scale well with large datasets; however, they are sensitive to similarity threshold parameters and they ignore evolutionary relationships. The recently introduced "Poisson Tree Processes" (PTP) method is a phylogeny-aware approach that does not rely on such thresholds. Yet, two weaknesses of PTP impact its accuracy and practicality when applied to large datasets; it does not account for divergent intraspecific variation and is slow for a large number of sequences. We introduce the multi-rate PTP (mPTP), an improved method that alleviates the theoretical and technical shortcomings of PTP. It incorporates different levels of intraspecific genetic diversity deriving from differences in either the evolutionary history or sampling of each species. Results on empirical data suggest that mPTP is superior to PTP and popular distance-based methods as it consistently yields more accurate delimitations with respect to the taxonomy (i.e., identifies more taxonomic species, infers species numbers closer to the taxonomy). Moreover, mPTP does not require any similarity threshold as input. The novel dynamic programming algorithm attains a speedup of at least five orders of magnitude compared to PTP, allowing it to delimit species in large (meta-) barcoding data. In addition, Markov Chain Monte Carlo sampling provides a comprehensive evaluation of the inferred delimitation in just a few seconds for millions of steps, independently of tree size. mPTP is implemented in C and is available for download at http://github.com/Pas-Kapli/mptp under the GNU Affero 3 license. A web-service is available at http://mptp.h-its.org . Contact: paschalia.kapli@h-its.org or

  13. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Keywords: Gibbs sampling, Markov Chain Monte Carlo, Bayesian inference, stationary distribution, convergence, image restoration. Arnab Chakraborty. We describe the mathematics behind the Markov Chain Monte Carlo method of ...
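A minimal illustration of the Gibbs sampling named in the keywords, for a bivariate normal whose full conditionals are known in closed form (the correlation and sample sizes below are illustrative):

```python
import random, math

# Gibbs sampler for a bivariate standard normal with correlation rho:
# each coordinate is redrawn from its exact conditional given the other,
# x | y ~ N(rho*y, 1 - rho^2), and the chain's stationary distribution
# is the joint normal -- the convergence the record's keywords refer to.
rho = 0.8
random.seed(1)

x, y = 0.0, 0.0
xs = []
for _ in range(20000):
    x = random.gauss(rho * y, math.sqrt(1 - rho ** 2))  # draw x | y
    y = random.gauss(rho * x, math.sqrt(1 - rho ** 2))  # draw y | x
    xs.append(x)

mean = sum(xs) / len(xs)
print(abs(mean) < 0.1)  # True: the marginal of x is N(0, 1)
```

Averages over the chain approximate expectations under the target distribution, which is exactly how MCMC is used in Bayesian inference and image restoration.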

  14. On using continuous Markov processes for unit service life evaluation taking as an example the RBMK-1000 gate-regulating valve

    International Nuclear Information System (INIS)

    Klemin, A.I.; Emel'yanov, V.S.; Rabchun, A.V.

    1984-01-01

    A technique is suggested for estimating service life indices of equipment, based on describing the process of equipment ageing by a continuous Markov diffusion process. It is noted that a number of problems of estimating durability indices of products reduce to estimating characteristics of the time of the first attainment of a preset boundary (boundaries) by a random process describing the ageing of a product. Methods of statistical estimation of the drift and diffusion coefficients of the continuous Markov diffusion process are considered; formulae for their point and interval estimates are presented. A special description is given for the case of a stationary process, and for determining in this case the mathematical expectation and dispersion of the time of the first attainment of a boundary (boundaries). The method of numerical simulation of the diffusion process with constant drift and diffusion coefficients is also described; results obtained on the basis of such a simulation are discussed. An example of using the suggested technique for a quantitative estimate of the service life of the RBMK-1000 gate-regulating valve is given
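The numerical simulation mentioned in the abstract (constant drift and diffusion coefficients, first attainment of a boundary) can be sketched with an Euler-Maruyama scheme. For Brownian motion with positive drift mu, the exact mean first-passage time to level a is a/mu, which gives a check on the estimate. All parameters below are invented:

```python
import random

# Monte Carlo estimate of the mean first-passage time of the diffusion
# dX = mu dt + sigma dW (constant coefficients) to the boundary a.
mu, sigma, a, dt = 1.0, 0.5, 2.0, 0.001
random.seed(2)

def first_passage():
    """Simulate one path by Euler-Maruyama until it first reaches a."""
    x, t = 0.0, 0.0
    while x < a:
        x += mu * dt + sigma * random.gauss(0.0, dt ** 0.5)
        t += dt
    return t

est = sum(first_passage() for _ in range(500)) / 500
print(abs(est - a / mu) < 0.2)  # True: close to the exact value a/mu = 2.0
```

The spread of the simulated passage times around their mean corresponds to the dispersion of the first-attainment time discussed in the record.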

  15. Prediction of Annual Rainfall Pattern Using Hidden Markov Model ...

    African Journals Online (AJOL)

    ADOWIE PERE

    One of the stochastic processes is an underlying Markov chain; the other stochastic process is an observable stochastic ... Keywords: Markov model, Hidden Markov model, Transition probability, Observation probability, Crop Production, Annual Rainfall ... The state with the highest value of the forward probability at time T+1 is taken as ...
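The forward probability mentioned in the excerpt comes from the standard HMM forward algorithm. A toy two-state sketch with invented rainfall-like parameters (not the paper's fitted model):

```python
import numpy as np

# Forward algorithm: alpha[i] = P(observations so far, current state = i).
# Propagating alpha one step ahead and taking the argmax picks the most
# likely state at time T+1, as the excerpt describes.
A = np.array([[0.7, 0.3],    # state transition probabilities (dry, wet)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # observation probabilities P(obs | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state distribution

obs = [0, 1, 1]              # observed symbols (e.g. low/high rainfall)

alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]   # recursive forward update

next_state_probs = alpha @ A        # propagate one step to time T+1
print(int(np.argmax(next_state_probs)))  # index of the most likely next state
```

Normalizing `alpha` at each step is the usual fix for numerical underflow on long observation sequences; it is omitted here for brevity.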

  16. Symmetric Atom–Atom and Ion–Atom Processes in Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Vladimir A. Srećković

    2017-12-01

    Full Text Available We present the results of the influence of two groups of collisional processes (atom–atom and ion–atom) on the optical and kinetic properties of weakly ionized stellar atmosphere layers. The first type includes the radiative processes of photodissociation/association and radiative charge exchange, the second one the chemi-ionisation/recombination processes with participation of only hydrogen and helium atoms and ions. A quantitative estimation of the rate coefficients of the mentioned processes was made. The effect of the radiative processes is estimated by comparing their intensities with those of the known concurrent processes in application to the solar photosphere and to the photospheres of DB white dwarfs. The investigated chemi-ionisation/recombination processes are considered from the viewpoint of their influence on the populations of the excited states of the hydrogen atom (the Sun and an M-type red dwarf) and the helium atom (DB white dwarfs). The effect of these processes on the populations of the excited states of the hydrogen atom has been studied using the general stellar atmosphere code, which generates the model. The presented results demonstrate the undoubted influence of the considered radiative and chemi-ionisation/recombination processes on the optical properties and on the kinetics of the weakly ionized layers in stellar atmospheres.

  17. Optimal mixing of Markov decision rules for MDP control

    NARCIS (Netherlands)

    van der Laan, D.A.

    2011-01-01

    In this article we study Markov decision process (MDP) problems with the restriction that at decision epochs, only a finite number of given Markov decision rules are admissible. For example, the set of admissible Markov decision rules D could consist of some easily implementable decision rules.

  18. A Markov chain approach to modelling charge exchange processes of an ion beam in monotonically increasing or decreasing potentials

    International Nuclear Information System (INIS)

    Shrier, O; Khachan, J; Bosi, S

    2006-01-01

    A Markov chain method is presented as an alternative approach to Monte Carlo simulations of charge exchange collisions by an energetic hydrogen ion beam with a cold background hydrogen gas. This method was used to determine the average energy of the resulting energetic neutrals along the path of the beam. A comparison with Monte Carlo modelling showed a good agreement but with the advantage that it required much less computing time and produced no numerical noise. In particular, the Markov chain method works well for monotonically increasing or decreasing electrostatic potentials. Finally, a good agreement is obtained with experimental results from Doppler shift spectroscopy on energetic beams from a hollow cathode discharge. In particular, the average energy of ions that undergo charge exchange reaches a plateau that can be well below the full energy that might be expected from the applied voltage bias, depending on the background gas pressure. For example, pressures of ∼20 mTorr limit the ion energy to ∼20% of the applied voltage

  19. SemiMarkov: An R Package for Parametric Estimation in Multi-State Semi-Markov Models

    OpenAIRE

    Listwon, Agnieszka; Saint-Pierre, Philippe

    2015-01-01

    Multi-state models provide a relevant tool for studying the observations of a continuous-time process at arbitrary times. Markov models are often considered even though semi-Markov models are better adapted in various situations. Such models are still not frequently applied, mainly due to a lack of available software. We have developed the R package SemiMarkov to fit homogeneous semi-Markov models to longitudinal data. The package performs maximum likelihood estimation in a parametric framework where the d...

  20. Multiparty symmetric sum types

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Yoshida, Nobuko; Honda, Kohei

    2010-01-01

    This paper introduces a new theory of multiparty session types based on symmetric sum types, by which we can type non-deterministic orchestration choice behaviours. While the original branching type in session types can represent a choice made by a single participant and accepted by others determining how the session proceeds, the symmetric sum type represents a choice made by agreement among all the participants of a session. Such behaviour can be found in many practical systems, including collaborative workflow in healthcare systems for clinical practice guidelines (CPGs). Processes with the symmetric sums can be embedded into the original branching types using conductor processes. We show that this type-driven embedding preserves typability, satisfies semantic soundness and completeness, and meets the encodability criteria adapted to the typed setting. The theory leads to an efficient...

  1. Quantum Markov Chain Mixing and Dissipative Engineering

    DEFF Research Database (Denmark)

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state ... (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical framework for studying quantum Markov chain mixing. We introduce two new distance measures into the quantum setting; the quantum $\chi^2$-divergence and Hilbert's projective metric. With the help of these distance measures, we are able to derive some basic bounds on the mixing times of quantum channels...

  2. Lie Markov models.

    Science.gov (United States)

    Sumner, J G; Fernández-Sánchez, J; Jarvis, P D

    2012-04-07

    Recent work has discussed the importance of multiplicative closure for the Markov models used in phylogenetics. For continuous-time Markov chains, a sufficient condition for multiplicative closure of a model class is ensured by demanding that the set of rate-matrices belonging to the model class form a Lie algebra. It is the case that some well-known Markov models do form Lie algebras and we refer to such models as "Lie Markov models". However it is also the case that some other well-known Markov models unequivocally do not form Lie algebras (GTR being the most conspicuous example). In this paper, we will discuss how to generate Lie Markov models by demanding that the models have certain symmetries under nucleotide permutations. We show that the Lie Markov models include, and hence provide a unifying concept for, "group-based" and "equivariant" models. For each of two and four character states, the full list of Lie Markov models with maximal symmetry is presented and shown to include interesting examples that are neither group-based nor equivariant. We also argue that our scheme is pleasing in the context of applied phylogenetics, as, for a given symmetry of nucleotide substitution, it provides a natural hierarchy of models with increasing number of parameters. We also note that our methods are applicable to any application of continuous-time Markov chains beyond the initial motivations we take from phylogenetics. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
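To make the Lie-algebra closure condition concrete: for a group-based model such as Kimura's three-substitution-types (K3ST) model, the rate matrices are built from the commuting Klein four-group permutations, so the commutator of any two model rate matrices vanishes and trivially stays in the model. A numerical check of this (an illustration, not code from the paper):

```python
import numpy as np

# K3ST rate matrices: linear combinations of Klein four-group permutation
# matrices minus the identity. States are ordered A, G, C, T.
I = np.eye(4)
P1 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]], float)  # A<->G, C<->T
P2 = np.array([[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]], float)  # A<->C, G<->T
P3 = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]], float)  # A<->T, G<->C

def k3st(a, b, c):
    """A valid rate matrix (rows sum to zero) of the K3ST model."""
    return a * (P1 - I) + b * (P2 - I) + c * (P3 - I)

Q1, Q2 = k3st(0.3, 0.1, 0.05), k3st(0.2, 0.25, 0.1)
comm = Q1 @ Q2 - Q2 @ Q1  # Lie bracket [Q1, Q2]
print(np.allclose(comm, 0))  # True: the bracket stays (trivially) in the model
```

For non-abelian model classes the bracket is generally nonzero, and the Lie Markov condition asks that it still lie in the linear span of the model's rate matrices.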

  3. Single-Server Queueing System with Markov-Modulated Arrivals and Service Times

    OpenAIRE

    Dimitrov, Mitko

    2011-01-01

    Key words: Markov-modulated queues, waiting time, heavy traffic. Markov-modulated queueing systems are those in which the input process or service mechanism is influenced by an underlying Markov chain. Several models for such systems have been investigated. In this paper we present a heavy traffic analysis of a single queueing system with a Poisson arrival process whose arrival rate is a function of the state of a Markov chain and whose service times depend on the state of the same Markov chain at the e...
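A sketch of the Markov-modulated arrivals described above, using a two-phase environment and competing exponential clocks. The rates are invented for illustration; this is the arrival mechanism only, not the paper's heavy-traffic analysis:

```python
import random

# Markov-modulated Poisson arrivals: an environment chain switches between
# a slow and a busy phase, and arrivals occur at the rate of the current
# phase. By memorylessness we can race two exponential clocks at each step.
random.seed(3)
rates = [1.0, 5.0]   # arrival rate in each phase (illustrative)
switch = 0.5         # symmetric phase-switching rate (for simplicity)

t, t_end, phase, arrivals = 0.0, 1000.0, 0, 0
while t < t_end:
    ta = random.expovariate(rates[phase])  # time to next arrival
    ts = random.expovariate(switch)        # time to next phase switch
    if ta < ts:
        t += ta
        arrivals += 1
    else:
        t += ts
        phase = 1 - phase

avg_rate = arrivals / t_end
print(1.0 < avg_rate < 5.0)  # True: long-run rate lies between the phase rates
```

With symmetric switching the chain spends half its time in each phase, so the long-run arrival rate is close to the average of the two phase rates.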

  4. Fields From Markov Chains

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.

  5. Extended depth-of field imaging by both radially symmetrical conjugating phase masks with spatial frequency post-processing

    Science.gov (United States)

    Nhu, L. V.; Kuang, Cuifang; Liu, Xu

    2018-03-01

    In this paper, we propose a method to improve image contrast by using a pair of radially symmetrical conjugating phase masks. The method is based on generating a synthetic optical transfer function (OTF) from the OTFs of the two conjugating phase masks. The quartic phase mask (QPM) and its conjugating phase mask (cQPM) are used as an example. Two images are captured, one through the QPM and one through the cQPM. In the Fourier domain, combining the QPM and cQPM images improves the contrast at all spatial frequency positions. Simulation results demonstrate the contrast improvement obtained by the proposed method.

  6. Pair creation by a photon and the time-reversed process in a Robertson-Walker universe with time-symmetric expansion

    International Nuclear Information System (INIS)

    Lotze, K.H.

    1989-01-01

    We investigate pair creation by a photon and the time-reversed process in a spatially flat Robertson-Walker universe. The time-dependent external gravitational field breaks time translation invariance and thus energy conservation. So the otherwise forbidden processes are expected to occur even as first-order processes of quantum electrodynamics. We evaluate the total decay probabilities for electron-positron pairs which are non-relativistic at Compton time and soft photons in a time-symmetrically expanding radiation-dominated Friedmann universe. As a characteristic trait there appears an infrared divergence. Special attention is drawn to CPT non-invariance as a consequence of time evolution of the states. As is to be expected it does not occur in a totally time-symmetric situation. (orig.)

  7. Semi-Markov Arnason-Schwarz models.

    Science.gov (United States)

    King, Ruth; Langrock, Roland

    2016-06-01

    We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
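The dwell-time point made above can be seen numerically: a time-homogeneous first-order Markov state forces a geometric dwell time with mode 1, while a shifted Poisson dwell time can put the mode elsewhere. The parameters below are illustrative, not fitted values from the paper:

```python
import math

# Dwell-time pmfs contrasted: geometric (what a first-order Markov chain
# implies) versus shifted Poisson (one of the semi-Markov choices named
# in the abstract), both supported on k = 1, 2, ...
def geometric_pmf(k, p=0.3):
    return (1 - p) ** (k - 1) * p

def shifted_poisson_pmf(k, lam=4.5):  # 1 + Poisson(lam)
    return math.exp(-lam) * lam ** (k - 1) / math.factorial(k - 1)

ks = range(1, 15)
mode_geom = max(ks, key=geometric_pmf)
mode_sp = max(ks, key=shifted_poisson_pmf)
print(mode_geom, mode_sp)  # 1 5: the geometric mode is forced to 1
```

This is the extra flexibility the semi-Markov Arnason-Schwarz model buys: the modal time spent in a state (e.g. with or without conjunctivitis) need not be a single time step.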

  8. Irreducible complexity of iterated symmetric bimodal maps

    Directory of Open Access Journals (Sweden)

    J. P. Lampreia

    2005-01-01

    Full Text Available We introduce a tree structure for the iterates of symmetric bimodal maps and identify a subset which we prove to be isomorphic to the family of unimodal maps. This subset is used as a second factor for a ∗-product that we define in the space of bimodal kneading sequences. Finally, we give some properties for this product and study the ∗-product induced on the associated Markov shifts.

  9. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov Chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal Markov models and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.

  10. Markov chains and mixing times

    CERN Document Server

    Levin, David A

    2017-01-01

    Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...

  11. Symmetric waterbomb origami.

    Science.gov (United States)

    Chen, Yan; Feng, Huijuan; Ma, Jiayao; Peng, Rui; You, Zhong

    2016-06-01

    The traditional waterbomb origami, produced from a pattern consisting of a series of vertices where six creases meet, is one of the most widely used origami patterns. From a rigid origami viewpoint, it generally has multiple degrees of freedom, but when the pattern is folded symmetrically, the mobility reduces to one. This paper presents a thorough kinematic investigation on symmetric folding of the waterbomb pattern. It has been found that the pattern can have two folding paths under certain circumstances. Moreover, the pattern can be used to fold thick panels. Not only do the additional constraints imposed to fold the thick panels lead to single degree of freedom folding, but the folding process is also kinematically equivalent to the origami of zero-thickness sheets. The findings pave the way for the pattern being readily used to fold deployable structures ranging from flat roofs to large solar panels.

  12. Absorbing Markov Chain Models to Determine Optimum Process Target Levels in Production Systems with Dual Correlated Quality Characteristics

    Directory of Open Access Journals (Sweden)

    Mohammad Saber Fallah Nezhad

    2012-03-01

    Full Text Available For a manufacturing organization to compete effectively in the global marketplace, cutting costs and improving overall efficiency is essential. A single-stage production system with two independent quality characteristics and different costs associated with each quality characteristic that falls below a lower specification limit (scrap) or above an upper specification limit (rework) is presented in this paper. The amounts of rework and scrap are assumed to depend on process parameters such as the process mean and standard deviation, and thus the expected total profit is significantly dependent on the process parameters. This paper develops a Markovian decision making model for determining the process means. Sensitivity analyses are performed to validate the model, and a numerical example is given to illustrate it. The results show that the optimal process means depend strongly on the quality characteristics' parameters.
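The absorbing-chain machinery behind such models is standard: with the transition matrix in canonical form [[Q, R], [0, I]], the fundamental matrix N = (I - Q)^{-1} gives the expected visits to transient states, and B = NR the absorption probabilities. A toy example with invented numbers, not the paper's dual-characteristic model:

```python
import numpy as np

# One transient state ("inspect") and three absorbing outcomes
# ("conform", "rework", "scrap"), with made-up probabilities.
Q = np.array([[0.1]])              # inspect -> inspect (re-measure)
R = np.array([[0.7, 0.15, 0.05]])  # inspect -> conform / rework / scrap

N = np.linalg.inv(np.eye(1) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities per outcome

print(N[0, 0])     # expected number of inspections, 1/0.9
print(B.round(3))  # outcome probabilities; they sum to 1
```

Attaching a profit or cost to each absorbing outcome and weighting by B is how the expected total profit becomes a function of the process mean, which is the quantity the paper optimizes.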

  13. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class, with a minimal number of parts, to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.

  14. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  15. Portfolio allocation under the vendor managed inventory: A Markov ...

    African Journals Online (AJOL)

    Portfolio allocation under the vendor managed inventory: A Markov decision process. ... Journal of Applied Sciences and Environmental Management ... a review of Markov decision processes and investigates its suitability for solutions to portfolio allocation problems under vendor managed inventory in an uncertain market ...

  16. Markov Random Fields on Triangle Meshes

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas

    2010-01-01

    In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process labels...

  17. Validation of Individual-Based Markov-Like Stochastic Process Model of Insect Behavior and a "Virtual Farm" Concept for Enhancement of Site-Specific IPM.

    Science.gov (United States)

    Lux, Slawomir A; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin

    2016-01-01

    The paper reports application of a Markov-like stochastic process agent-based model and a "virtual farm" concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a "bottom-up ethological" approach and emulates behavior of the "primary IPM actors"-large cohorts of individual insects-within seasonally changing mosaics of spatiotemporally complex farming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the "virtual farm" approach-were discussed.

  19. Validation of individual-based Markov-like stochastic process model of insect behaviour and a ‘virtual farm’ concept for enhancement of site-specific IPM

    Directory of Open Access Journals (Sweden)

    Slawomir Antoni Lux

    2016-08-01

    Full Text Available The paper reports application of a Markov-like stochastic process agent-based model and a ‘virtual farm’ concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a ‘bottom-up ethological’ approach and emulates behaviour of the ‘primary IPM actors’ - large cohorts of individual insects - within seasonally changing mosaics of a spatiotemporally complex farming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect behaviour and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany and Belgium. For each farm, a customised model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the ‘virtual farm’ approach were discussed.

  20. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains (1990)) and E. Gassiat and S. Boucheron (Optimal error exp...

  1. Efficient Markov Chain Monte Carlo Sampling for Hierarchical Hidden Markov Models

    OpenAIRE

    Turek, Daniel; de Valpine, Perry; Paciorek, Christopher J.

    2016-01-01

    Traditional Markov chain Monte Carlo (MCMC) sampling of hidden Markov models (HMMs) involves latent states underlying an imperfect observation process, and generates posterior samples for top-level parameters concurrently with nuisance latent variables. When potentially many HMMs are embedded within a hierarchical model, this can result in prohibitively long MCMC runtimes. We study combinations of existing methods, which are shown to vastly improve computational efficiency for these hierarchi...

  2. Stationary Markov Sets.

    Science.gov (United States)

    1986-04-01

    [Scanned abstract; the mathematical text is garbled beyond recovery. Legible fragments cite: limits of regenerative sets, Z. Wahrscheinlichkeitstheorie verw. Gebiete 70, 157-173 (1985); Hoffmann-Jørgensen, J., Markov sets, Math. Scand. 24 (1969); Krylov, N.V. and Yushkevich, A.A., Markov random sets, Trans. Mosc. Math. Soc. 13, 127-153 (1965); Maisonneuve, B., Ensembles ...]

  3. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    Science.gov (United States)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy between the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps of 2020 and 2030 were created. The validation findings confirm that the integration of the CA-MC model with the FR model and employing the significant driving force of urban growth in the simulation process have resulted in the improved simulation capability of the CA-MC model. This study has provided a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.

  4. Viscosity Solution of Mean-Variance Portfolio Selection of a Jump Markov Process with No-Shorting Constraints

    Directory of Open Access Journals (Sweden)

    Moussa Kounta

    2016-01-01

    Full Text Available We consider the so-called mean-variance portfolio selection problem in continuous time under the constraint that the short-selling of stocks is prohibited, where all the market coefficients are random processes. In this situation the Hamilton-Jacobi-Bellman (HJB) equation of the value function of the auxiliary problem becomes a coupled system of backward stochastic partial differential equations. In fact, the value function V often does not have the smoothness properties needed to interpret it as a solution to the dynamic programming partial differential equation in the usual (classical) sense; however, in such cases V can be interpreted as a viscosity solution. Here we show the uniqueness of the viscosity solution and we see that the optimal control and the value function are piecewise linear functions based on some Riccati differential equations. In particular we solve the open problem posed by Li and Zhou and Zhou and Yin.

  5. Markov chains for testing redundant software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
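The core estimation step described here, counting observed transitions between error states and normalizing row-wise, can be sketched in a few lines. This is a minimal illustration with a hypothetical two-state error sequence, not the authors' experimental setup:

```python
from collections import Counter

def estimate_transition_matrix(states, n_states):
    """Maximum-likelihood estimate of Markov transition probabilities
    from one observed state sequence: count transitions, normalize rows."""
    counts = Counter(zip(states[:-1], states[1:]))   # transition counts
    totals = Counter(states[:-1])                    # visits with a successor
    return [[counts[(i, j)] / totals[i] if totals[i] else 0.0
             for j in range(n_states)]
            for i in range(n_states)]

# Hypothetical error-state sequence observed during simulated testing
seq = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
P = estimate_transition_matrix(seq, 2)
```

Confidence intervals for these estimated probabilities (as the abstract describes) would then feed the reliability model.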

  6. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034

  7. Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rissanen, Jorma

    1996-01-01

    Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...

  8. A Bayesian Infinite Hidden Markov Vector Autoregressive Model

    NARCIS (Netherlands)

    D. Nibbering (Didier); R. Paap (Richard); M. van der Wel (Michel)

    2016-01-01

    textabstractWe propose a Bayesian infinite hidden Markov model to estimate time-varying parameters in a vector autoregressive model. The Markov structure allows for heterogeneity over time while accounting for state-persistence. By modelling the transition distribution as a Dirichlet process mixture

  9. A Markov decision model for optimising economic production lot size ...

    African Journals Online (AJOL)

    Adopting such a Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...
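The dynamic-programming computation this record refers to can be illustrated with a generic value-iteration sketch. The two-state demand chain, rewards, and action labels below are toy numbers of our own, not the paper's model:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-10):
    """Generic value iteration for a finite MDP.
    P[a, s, t]: probability of moving s -> t under action a;
    R[a, s]: expected one-step reward of action a in state s."""
    n_actions, n_states = R.shape
    V = np.zeros(n_states)
    while True:
        # Q-values: immediate reward plus discounted expected future value
        Q = R + gamma * np.einsum('ast,t->as', P, V)
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy model: two demand states; action 0 = hold, action 1 = produce
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.8, 0.2], [0.3, 0.7]]])
R = np.array([[0.0, 2.0],
              [1.0, 0.5]])
V, policy = value_iteration(P, R)   # state-dependent optimal action
```

The returned policy is state-dependent, mirroring the existence of an optimal state-dependent EPL size claimed in the abstract.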

  10. Symmetrization of Facade Layouts

    KAUST Repository

    Jiang, Haiyong

    2016-02-26

    We present an automatic approach for symmetrizing urban facade layouts. Our method can generate a symmetric layout through minimally modifying the original input layout. Based on the principles of symmetry in urban design, we formulate facade layout symmetrization as an optimization problem. Our method further enhances the regularity of the final layout by redistributing and aligning elements in the layout. We demonstrate that the proposed solution can effectively generate symmetric facade layouts.

  11. Facade Layout Symmetrization

    KAUST Repository

    Jiang, Haiyong

    2016-04-11

    We present an automatic algorithm for symmetrizing facade layouts. Our method symmetrizes a given facade layout while minimally modifying the original layout. Based on the principles of symmetry in urban design, we formulate the problem of facade layout symmetrization as an optimization problem. Our system further enhances the regularity of the final layout by redistributing and aligning boxes in the layout. We demonstrate that the proposed solution can generate symmetric facade layouts efficiently. © 2015 IEEE.

  12. Psychotherapy as Stochastic Process: Fitting a Markov Chain Model to Interviews of Ellis and Rogers. University of Minnesota Office of Student Affairs Research Bulletin, Vol. 15, No. 18.

    Science.gov (United States)

    Lichtenberg, James W.; Hummel, Thomas J.

    This investigation tested the hypothesis that the probabilistic structure underlying psychotherapy interviews is Markovian. The "goodness of fit" of a first-order Markov chain model to actual therapy interviews was assessed using a x squared test of homogeneity, and by generating by Monte Carlo methods empirical sampling distributions of…

  13. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
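For context, endpoint-conditioned paths can be drawn by naive rejection: simulate unconditioned paths and keep those ending in the right state. The sketch below shows this baseline (which the bisection idea is designed to improve on when acceptance is rare); the generator matrix is a toy example of ours, not from the paper:

```python
import random

def sample_ctmc_path(Q, start, T, rng):
    """Simulate a continuous-time Markov chain with generator Q up to time T.
    Returns the path as a list of (jump_time, state) pairs."""
    n = len(Q)
    t, state, path = 0.0, start, [(0.0, start)]
    while True:
        rate = -Q[state][state]
        if rate <= 0:                      # absorbing state: no more jumps
            return path
        t += rng.expovariate(rate)         # exponential holding time
        if t >= T:
            return path
        weights = [Q[state][j] if j != state else 0.0 for j in range(n)]
        state = rng.choices(range(n), weights=weights)[0]
        path.append((t, state))

def sample_bridge(Q, a, b, T, rng, max_tries=100000):
    """Naive rejection sampler for a Markov bridge: keep the first
    unconditioned path from a that sits in state b at time T."""
    for _ in range(max_tries):
        path = sample_ctmc_path(Q, a, T, rng)
        if path[-1][1] == b:
            return path
    raise RuntimeError("no path accepted")
```

Rejection becomes hopeless when the endpoint is unlikely, which is one motivation for conditioned-path algorithms such as the bisection approach studied here.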

  14. Bisimulation and Simulation Relations for Markov Chains

    NARCIS (Netherlands)

    Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.

    2006-01-01

    Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.

  15. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful...

  16. Dynamic treatment selection and modification for personalised blood pressure therapy using a Markov decision process model: a cost-effectiveness analysis.

    Science.gov (United States)

    Choi, Sung Eun; Brandeau, Margaret L; Basu, Sanjay

    2017-11-15

    Personalised medicine seeks to select and modify treatments based on individual patient characteristics and preferences. We sought to develop an automated strategy to select and modify blood pressure treatments, incorporating the likelihood that patients with different characteristics would benefit from different types of medications and dosages and the potential severity and impact of different side effects among patients with different characteristics. We developed a Markov decision process (MDP) model to incorporate meta-analytic data and estimate the optimal treatment for maximising discounted lifetime quality-adjusted life-years (QALYs) based on individual patient characteristics, incorporating medication adjustment choices when a patient incurs side effects. We compared the MDP to current US blood pressure treatment guidelines (the Eighth Joint National Committee, JNC8) and a variant of current guidelines that incorporates results of a major recent trial of intensive treatment (Intensive JNC8). We used a microsimulation model of patient demographics, cardiovascular disease risk factors and side effect probabilities, sampling from the National Health and Nutrition Examination Survey (2003-2014), to compare the expected population outcomes from adopting the MDP versus guideline-based strategies. Costs and QALYs for the MDP-based treatment (MDPT), JNC8 and Intensive JNC8 strategies. Compared with the JNC8 guideline, the MDPT strategy would be cost-saving from a societal perspective with discounted savings of US$1187 per capita (95% CI 1178 to 1209) and an estimated discounted gain of 0.06 QALYs per capita (95% CI 0.04 to 0.08) among the US adult population. QALY gains would largely accrue from reductions in severe side effects associated with higher treatment doses later in life. The Intensive JNC8 strategy was dominated by the MDPT strategy. An MDP-based approach can aid decision-making by incorporating meta-analytic evidence to personalise blood pressure

  17. A Model of Fuel Combustion Process in The Marine Reciprocating Engine Work Space Taking Into Account Load and Wear of Crankshaft-Piston Assembly and The Theory of Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Girtler Jerzy

    2016-09-01

    Full Text Available The article analyses the operation of reciprocating internal combustion engines, with marine engines used as an example. The analysis takes into account types of energy conversion in the work spaces (cylinders) of these engines, loads of their crankshaft-piston assemblies, and types of fuel combustion which can take place in these spaces during engine operation. It is highlighted that the analysed time-dependent loads of marine internal combustion engine crankshaft-piston assemblies are random processes. It is also indicated that the wear of elements of those assemblies resulting from their load should also be considered a random process. A hypothesis is formulated which explains random nature of load and the absence of the theoretically expected detonation combustion in engines supplied with such fuels as Diesel Oil, Marine Diesel Oil, and Heavy Fuel Oil. A model is proposed for fuel combustion in an arbitrary work space of a marine Diesel engine, which has the form of a stochastic four-state process, discrete in states and continuous in time. The model is based on the theory of semi-Markov processes.

  18. On Symmetric Polynomials

    OpenAIRE

    Golden, Ryan; Cho, Ilwoo

    2015-01-01

    In this paper, we study structure theorems of algebras of symmetric functions. Based on a certain relation on elementary symmetric polynomials generating such algebras, we consider perturbation in the algebras. In particular, we understand generators of the algebras as perturbations. From such perturbations, we define injective maps on generators, which induce algebra-monomorphisms (or embeddings) on the algebras. They provide inductive structure theorems on algebras of symmetric polynomials. As...

  19. Specification test for Markov models with measurement errors.

    Science.gov (United States)

    Kim, Seonjin; Zhao, Zhibiao

    2014-09-01

    Most existing works on specification testing assume that we have direct observations from the model of interest. We study specification testing for Markov models based on contaminated observations. The evolving model dynamics of the unobservable Markov chain is implicitly coded into the conditional distribution of the observed process. To test whether the underlying Markov chain follows a parametric model, we propose measuring the deviation between nonparametric and parametric estimates of conditional regression functions of the observed process. Specifically, we construct a nonparametric simultaneous confidence band for conditional regression functions and check whether the parametric estimate is contained within the band.

  20. Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment

    Science.gov (United States)

    Piatnitski, A.; Zhizhina, E.

    2017-11-01

    The paper deals with the asymptotic properties of a symmetric random walk in a high contrast periodic medium in Z^d, d≥1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if in addition to the coordinate of the random walk in Z^d we introduce an extra variable that characterizes the position of the random walk inside the period then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove the convergence in the path space for the said random walk.

  1. Musical Markov Chains

    Science.gov (United States)

    Volchenkov, Dima; Dawin, Jean René

    A system for using dice to compose music randomly is known as the musical dice game. The discrete time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into the transition matrices and studied by Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on the blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and to characterize a composer.
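The two building blocks mentioned here, a transition matrix estimated from a note sequence and first passage times to a note, can be sketched as follows. The eight-note sequence is a toy example of ours, not the MIDI corpus used in the study:

```python
import numpy as np

def transition_matrix(notes, alphabet):
    """Row-stochastic transition matrix estimated from a note sequence."""
    idx = {n: i for i, n in enumerate(alphabet)}
    P = np.zeros((len(alphabet), len(alphabet)))
    for a, b in zip(notes[:-1], notes[1:]):
        P[idx[a], idx[b]] += 1.0
    return P / P.sum(axis=1, keepdims=True)

def mean_first_passage(P, target):
    """Expected number of steps to first reach `target` from each state:
    solve (I - P') h = 1, where P' drops the target row and column."""
    keep = [i for i in range(len(P)) if i != target]
    h = np.linalg.solve(np.eye(len(keep)) - P[np.ix_(keep, keep)],
                        np.ones(len(keep)))
    out = np.zeros(len(P))
    out[keep] = h
    return out

notes = ["C", "G", "C", "G", "G", "C", "C", "G"]
P = transition_matrix(notes, ["C", "G"])
fp = mean_first_passage(P, target=1)   # expected steps until note "G"
```

Comparing such first-passage profiles across corpora is one way tonality and composer signatures could be resolved, as the abstract suggests.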

  2. Quantum Bounded Symmetric Domains

    OpenAIRE

    Vaksman, L. L.

    2008-01-01

    This is Leonid Vaksman's monograph "Quantum bounded symmetric domains" (in Russian), preceded with an English translation of the table of contents and (a part) of the introduction. Quantum bounded symmetric domains are interesting from several points of view. In particular, they provide interesting examples for noncommutative complex analysis (i.e., the theory of subalgebras of C^*-algebras) initiated by W. Arveson.

  3. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC) - an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is ...

  4. Symmetric cryptographic protocols

    CERN Document Server

    Ramkumar, Mahalingam

    2014-01-01

    This book focuses on protocols and constructions that make good use of symmetric pseudo random functions (PRF) like block ciphers and hash functions - the building blocks for symmetric cryptography. Readers will benefit from detailed discussion of several strategies for utilizing symmetric PRFs. Coverage includes various key distribution strategies for unicast, broadcast and multicast security, and strategies for constructing efficient digests of dynamic databases using binary hash trees. • Provides detailed coverage of symmetric key protocols • Describes various applications of symmetric building blocks • Includes strategies for constructing compact and efficient digests of dynamic databases

  5. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    time Technical Consultant to Systat Software Asia-Pacific (P) Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes place. His research interests have been in statistical pattern recognition and biostatistics. Keywords: Markov chain, Monte Carlo sampling, Markov chain Monte...

  6. YMCA: Why Markov Chain Algebra?

    NARCIS (Netherlands)

    Bravetti, Mario; Hermanns, H.; Katoen, Joost P.; Aceto, L.; Gordon, A.

    2006-01-01

    Markov chains are widely used to determine system performance and reliability characteristics. The vast majority of applications considers continuous-time Markov chains (CTMCs). This note motivates how concurrency theory can be extended (as opposed to twisted) to CTMCs. We provide the core

  7. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo Methods. 2. The Markov Chain Case. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its application, and statistics. He enjoys writing for Resonance. His spare time is ...

  8. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    ter of the 20th century, due to rapid developments in computing technology ... early part of this development saw a host of Monte ... These iterative Monte Carlo procedures typically generate a random sequence with the Markov property such that the Markov chain is ergodic with a limiting distribution coinciding with the ...

  9. Markov Random Field Surface Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Bærentzen, Jakob Andreas; Larsen, Rasmus

    2010-01-01

    A method for implicit surface reconstruction is proposed. The novelty in this paper is the adaption of Markov Random Field regularization of a distance field. The Markov Random Field formulation allows us to integrate both knowledge about the type of surface we wish to reconstruct (the prior) and...

  10. Markov Karar Süreci İle Modellenen Stokastik ve Çok Amaçlı Üretim/Envanter Problemlerinin Hedef Programlama Yaklaşımı İle Çözülmesi (Solving Stochastic and Multi-Objective Production/Inventory Problems Modeled By MARKOV Decision Process with Goal Programming Approach

    Directory of Open Access Journals (Sweden)

    Aslı ÖZDEMİR

    2009-07-01

    Full Text Available To make decisions involving uncertainty while making future plans, Markov Decision Process (MDP), one of the stochastic approaches, may provide assistance to managers. Methods such as value iteration, policy iteration or linear programming can be used in the solution of MDPs when only one objective such as profit maximization or cost minimization is considered. However the decisions made by business while operating in a competition environment require considering multiple and usually conflicting objectives simultaneously. Goal programming (GP) can be used to solve such problems. The aim of this study is to provide an integrated perspective involving the utilization of MDP and GP approaches together for the solution of stochastic multi-objective decision problems. To this end the production/inventory system of a business operating in the automotive supplier industry is considered.

  11. Flux through a Markov chain

    International Nuclear Information System (INIS)

    Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel

    2016-01-01

    Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, that we will call mass, which moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modeling of open systems whose dynamics has a Markov property.
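How mass injected into the transient states ends up distributed over the absorbing states can be computed with the standard fundamental matrix of an absorbing chain. The four-state chain below is a toy example of ours, not one from the paper:

```python
import numpy as np

# Toy 4-state chain: states 0,1 transient; states 2,3 absorbing.
# Q: transient-to-transient block, R: transient-to-absorbing block
# (each full row of [Q | R] sums to 1).
Q = np.array([[0.0, 0.5],
              [0.2, 0.0]])
R = np.array([[0.3, 0.2],
              [0.4, 0.4]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits
B = N @ R                         # B[i, k]: mass from transient i absorbed in k
```

A unit of mass injected at transient state i is eventually split over the absorbing states in proportions B[i]; for a constant source, N also gives the accumulated occupancy of the transient states, in the spirit of the analytical expressions the abstract announces.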

  12. Markov properties of solar granulation

    Science.gov (United States)

    Asensio Ramos, A.

    2009-01-01

    Aims: We estimate the minimum length on which solar granulation can be considered to be a Markovian process. Methods: We measure the variation in the brightness difference between two pixels in images of the solar granulation for different distances between the pixels. This scale-dependent data is empirically analyzed to find the minimum scale on which the process can be considered Markovian. Results: The results suggest that the solar granulation can be considered to be a Markovian process on scales longer than r_M=300-500 km. On longer length scales, solar images can be considered to be a Markovian stochastic process that consists of structures of size r_M. Smaller structures exhibit correlations on many scales simultaneously yet cannot be described by a hierarchical cascade in scales. An analysis of the longitudinal magnetic-flux density indicates that it cannot be a Markov process on any scale. Conclusions: The results presented in this paper constitute a stringent test for the realism of numerical magneto-hydrodynamical simulations of solar magneto-convection. In future exhaustive analyses, the non-Markovian properties of the magnetic flux density on all analyzed scales might help us to understand the physical mechanism generating the field that we detect at the solar surface.

  13. Stochastic Dynamics through Hierarchically Embedded Markov Chains

    Science.gov (United States)

    Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.

    2017-02-01

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
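The baseline object being approximated here, the stationary distribution of a discrete Markov process with time-invariant transition probabilities, can be computed directly when the state space is small, which is exactly what becomes infeasible for the large systems the paper targets. A power-iteration sketch on a toy two-state matrix of our own:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=1_000_000):
    """Stationary distribution of an ergodic row-stochastic matrix P
    by power iteration (feasible only for modest state spaces)."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(max_iter):
        nxt = pi @ P          # one step of the chain, in distribution
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary_distribution(P)
```

Hierarchies of approximations of the kind described above become attractive precisely when such direct computation over the full configuration space is prohibitive.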

  14. Markov Modeling of Component Fault Growth Over A Derived Domain of Feasible Output Control Effort Modifications

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of...

  15. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    OpenAIRE

    Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...

  16. Markov Chains For Testing Redundant Software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  17. Quadratic Variation by Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume

    We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...

  18. A symmetrical rail accelerator

    International Nuclear Information System (INIS)

    Igenbergs, E.

    1991-01-01

    This paper reports on a symmetrical rail accelerator that has four rails, which are arranged symmetrically around the bore. The opposite rails have the same polarity and the adjacent rails the opposite polarity. In this configuration the radial force acting upon the individual rails is significantly smaller than in a conventional 2-rail configuration, and a plasma armature is focussed towards the axis of the barrel. Experimental results indicate a higher efficiency compared to a conventional rail accelerator.

  19. Symmetric eikonal expansion

    International Nuclear Information System (INIS)

    Matsuki, Takayuki

    1976-01-01

    Symmetric eikonal expansion for the scattering amplitude is formulated for nonrelativistic and relativistic potential scatterings and also for the quantum field theory. The first approximations coincide with those of Levy and Sucher. The obtained scattering amplitudes are time reversal invariant for all cases and are crossing symmetric for the quantum field theory in each order of approximation. The improved eikonal phase introduced by Levy and Sucher is also derived from the different approximation scheme from the above. (auth.)

  20. Markov random fields on triangle meshes

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas

    2010-01-01

    In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to an MRF smoothness prior, while an independent edge process labels mesh edges according to a feature-detecting prior. Since we should not smooth across a sharp feature, we use edge labels to control the vertex process. In a Bayesian framework, MRF priors are combined with the likelihood function related to the mesh formation method. The output of our algorithm...

  1. Constructing Dynamic Event Trees from Markov Models

    International Nuclear Information System (INIS)

    Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood

    2006-01-01

    In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: (a) partitioning the process variable state space into magnitude intervals (cells), (b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and (c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as a benchmark in the literature, with one process variable (liquid level in a tank) and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank.
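    The scenario-search step described above can be sketched as a depth-limited walk over a transition matrix. The 3-state system and its probabilities below are hypothetical illustrations, not the benchmark's values:

```python
import numpy as np

# Enumerate all paths from an initiating state to an absorbing failure
# state, up to a fixed depth, recording each branch probability: an
# event-tree-like expansion. Illustrative sketch, not the authors' code.
def failure_paths(P, start, failed, max_depth):
    paths = []
    def walk(state, path, prob, depth):
        if state == failed:                 # scenario reached failure
            paths.append((path, prob))
            return
        if depth == max_depth:              # tree depth limit
            return
        for nxt, p in enumerate(P[state]):
            if p > 0.0:
                walk(nxt, path + [nxt], prob * p, depth + 1)
    walk(start, [start], 1.0, 0)
    return paths

# Toy system: 0 = nominal, 1 = degraded, 2 = failed (absorbing)
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
paths = failure_paths(P, start=0, failed=2, max_depth=4)
```

    Because the failure state is absorbing, the branch probabilities of all recorded scenarios sum to the 4-step failure probability from the initiating state.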

  2. Bibliometric Application of Markov Chains.

    Science.gov (United States)

    Pao, Miranda Lee; McCreery, Laurie

    1986-01-01

    A rudimentary description of Markov Chains is presented in order to introduce its use to describe and to predict authors' movements among subareas of the discipline of ethnomusicology. Other possible applications are suggested. (Author)

  3. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    Directory of Open Access Journals (Sweden)

    A. Valor

    2013-01-01

    Full Text Available The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled after the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
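    The closed-form transition probabilities for a linear-growth (pure birth) process can be written down explicitly. A sketch under the standard assumption of birth rates λ_n = nλ (a Yule process), with an illustrative rate rather than the paper's fitted corrosion parameters:

```python
from math import comb, exp

# Transition probabilities P(X(t) = j | X(0) = i) of a Yule (linear
# pure-birth) process, a standard closed-form solution of Kolmogorov's
# forward equations; lam here is illustrative, not a fitted rate.
def yule_transition(i, j, t, lam):
    if j < i:
        return 0.0                 # pure birth: the state never decreases
    p = exp(-lam * t)
    return comb(j - 1, j - i) * p**i * (1.0 - p)**(j - i)

# Each row of the transition function is a probability distribution
row = [yule_transition(1, j, t=2.0, lam=0.3) for j in range(1, 200)]
```

    From state i the distribution over j ≥ i is negative binomial, with mean i·e^{λt}; a stochastic mean of this form is the kind of quantity that can be matched against an empirical deterministic mean.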

  4. Symmetric states: Their nonlocality and entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zizhu; Markham, Damian [CNRS LTCI, Département Informatique et Réseaux, Telecom ParisTech, 23 avenue d' Italie, CS 51327, 75214 Paris CEDEX 13 (France)

    2014-12-04

    The nonlocality of permutation symmetric states of qubits is shown via an extension of the Hardy paradox and the extension of the associated inequality. This is achieved by using the Majorana representation, which is also a powerful tool in the study of entanglement properties of symmetric states. Through the Majorana representation, different nonlocal properties can be linked to different entanglement properties of a state, which is useful in determining the usefulness of different states in different quantum information processing tasks.

  5. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, an NDVI time series forecasting model has been developed based on a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space; the process undergoes transitions from one state to another in the state space with some probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous-state Markov chain model is analytically described. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
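    A discrete-state analogue of such a forecast model (the paper itself uses a continuous state space) reduces to counting transitions between binned values. The short series below is invented for illustration:

```python
import numpy as np

# Fit a discrete Markov chain to a series: bin the values into states,
# count one-step transitions, row-normalize, then read off the one-step
# forecast distribution from the last observed state. Illustrative only.
def fit_transition_matrix(series, n_states):
    edges = np.linspace(min(series), max(series), n_states + 1)[1:-1]
    states = np.digitize(series, edges)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    # unseen states get a uniform row so every row remains a distribution
    P = np.divide(P, rows, out=np.full_like(P, 1.0 / n_states), where=rows > 0)
    return P, states

series = [0.31, 0.35, 0.42, 0.44, 0.40, 0.38, 0.45, 0.50, 0.47, 0.41]
P, states = fit_transition_matrix(series, n_states=3)
forecast = P[states[-1]]   # one-step-ahead distribution over states
```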

  6. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....

  7. Prognostics for Steam Generator Tube Rupture using Markov Chain model

    International Nuclear Information System (INIS)

    Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin

    2016-01-01

    This paper describes a prognostics method for evaluating and forecasting the ageing effect, and demonstrates the procedure of prognostics for the Steam Generator Tube Rupture (SGTR) accident. The authors propose the data-driven method called MCMC (Markov Chain Monte Carlo), which is preferred to the physical-model method in terms of flexibility and availability. Degradation data are represented as the growth of burst probability over time. A Markov chain model is based on the transition probabilities between states, and the states must be discrete variables. Therefore, the burst probability, which is a continuous variable, has to be discretized to apply the Markov chain model to the degradation data. The Markov chain model, which is one of the prognostics methods, is described, and a pilot demonstration for an SGTR accident is performed as a case study. The Markov chain model is strong since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there must be a loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be reflected sufficiently in the prediction. This should be noted as a limitation of discrete models. We are now studying other prognostics methods, such as the GPM (General Path Model), which is also a data-driven method, as well as the particle filter, which belongs to the physical-model methods, and conducting a comparison analysis.
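    The discretization step noted above as a limitation can be sketched as follows. The four burst-probability states and the transition matrix are invented for illustration, not SGTR data:

```python
import numpy as np

# A continuous degradation measure (burst probability) binned into finite
# Markov states; the chain is then propagated forward to forecast the
# probability of reaching the worst (absorbing) state. Illustrative values.
P = np.array([[0.85, 0.12, 0.03, 0.00],   # state 0: low burst probability
              [0.00, 0.80, 0.15, 0.05],   # state 1: moderate
              [0.00, 0.00, 0.75, 0.25],   # state 2: high
              [0.00, 0.00, 0.00, 1.00]])  # state 3: rupture (absorbing)

dist = np.array([1.0, 0.0, 0.0, 0.0])     # all mass in the lowest state
for _ in range(10):                        # ten inspection intervals
    dist = dist @ P
rupture_risk = dist[3]
```

    The loss of information mentioned in the abstract enters exactly here: everything between two bin edges is treated as one state.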

  8. Tornadoes and related damage costs: statistical modeling with a semi-Markov approach

    OpenAIRE

    Corini, Chiara; D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio; Manca, Raimondo

    2015-01-01

    We propose a statistical approach to tornadoes modeling for predicting and simulating occurrences of tornadoes and accumulated cost distributions over a time interval. This is achieved by modeling the tornadoes intensity, measured with the Fujita scale, as a stochastic process. Since the Fujita scale divides tornadoes intensity into six states, it is possible to model the tornadoes intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reprod...

  9. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    Directory of Open Access Journals (Sweden)

    Ivan B. Djordjevic

    2015-08-01

    Full Text Available Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we have derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and it is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the process of creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which ...

  10. Analysis of drought areas in northern Algeria using Markov chains

    Indian Academy of Sciences (India)

    'memoryless': loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process's full history (Gabriel and Neuman 1962; ...

  11. PT-symmetric strings

    International Nuclear Information System (INIS)

    Amore, Paolo; Fernández, Francisco M.; Garcia, Javier; Gutierrez, German

    2014-01-01

    We study both analytically and numerically the spectrum of inhomogeneous strings with PT-symmetric density. We discuss an exactly solvable model of a PT-symmetric string which is isospectral to the uniform string; for more general strings, we calculate exactly the sum rules Z(p) ≡ Σ_{n=1}^∞ 1/E_n^p, with p=1,2,…, and find explicit expressions which can be used to obtain bounds on the lowest eigenvalue. A detailed numerical calculation is carried out for two non-solvable models depending on a parameter, obtaining precise estimates of the critical values where pairs of real eigenvalues become complex. -- Highlights: •PT-symmetric Hamiltonians exhibit real eigenvalues when PT symmetry is unbroken. •We study PT-symmetric strings with complex density. •They exhibit regions of unbroken PT symmetry. •We calculate the critical parameters at the boundaries of those regions. •There are exact real sum rules for some particular complex densities

  12. On Markov Modulated Mean-Reverting Price-Difference Models

    Directory of Open Access Journals (Sweden)

    W. P. Malcom

    2008-06-01

    Full Text Available In this paper we develop a stochastic model incorporating a double-Markov modulated mean-reversion model. Unlike a price process, the basis process X can take positive or negative values. This model is based on an explicit discretisation of the corresponding continuous-time dynamics. The new feature in our model is that we suppose the mean-reverting level in our dynamics, as well as the noise coefficient, can change according to the states of some finite-state Markov processes, which could be the economy and some other unseen random phenomenon.

  13. Descriptive and predictive evaluation of high resolution Markov chain precipitation models

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten

    2012-01-01

    A time series of tipping bucket recordings of very high temporal and volumetric resolution precipitation is modelled using Markov chain models. Both first- and second-order Markov models as well as seasonal and diurnal models are investigated and evaluated using likelihood-based techniques. The first-order Markov model seems to capture most of the properties of precipitation, but inclusion of seasonal and diurnal variation improves the model. Including a second-order Markov chain component does improve the descriptive capabilities of the model, but is very expensive in its parameter use. Continuous modelling of the Markov process proved attractive because of a marked decrease in the number of parameters. Inclusion of seasonality into the continuous Markov chain model proved difficult. Monte Carlo simulations with the models show that it is very difficult for all the model formulations...
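    The likelihood-based comparison of first- and second-order chains that this record describes can be sketched on a toy wet/dry (1/0) sequence, where counting transitions gives the maximum-likelihood transition probabilities:

```python
from collections import Counter
from math import log

# In-sample log-likelihood of an order-k Markov chain fitted by counting:
# for each context of length k, transition probabilities are the observed
# relative frequencies. The binary sequence is invented for illustration.
def log_likelihood(seq, order):
    trans = Counter(tuple(seq[i:i + order + 1]) for i in range(len(seq) - order))
    ctx = Counter(tuple(seq[i:i + order]) for i in range(len(seq) - order))
    return sum(n * log(n / ctx[key[:-1]]) for key, n in trans.items())

seq = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
ll1 = log_likelihood(seq, order=1)
ll2 = log_likelihood(seq, order=2)
```

    Comparing ll1 and ll2 fairly must account for the second-order model's extra parameters (e.g. via AIC/BIC), which is the parameter cost the abstract refers to.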

  14. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs, but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov networks based EDAs are reviewed in the book. Hot current researc...

  15. The reliability of systems with non-Markov-type components

    International Nuclear Information System (INIS)

    Widmer, U.

    1985-03-01

    Analytical methods are presented for determining the reliability of systems constructed from independent non-Markov-type components. The calculations are based on the lifetime distribution of systems, the probabilities of the system states as a function of time, and the system availability. The special case of a system with only one repair channel, with arbitrary repair-time distribution and Markovian failure of the components, is calculated. The general case, with arbitrary failure- and repair-time distributions of the components and an arbitrary number of repair channels, leads to a stochastic process which represents an extension of the semi-Markov process. The necessary theoretical framework is developed for this. Finally, a FORTRAN program is presented which is written for the following purposes: (a) handling of arbitrary semi-Markov processes; (b) handling of systems with a repair channel and Markovian component failure. (A.N.K.)

  16. Baryon symmetric big bang cosmology

    International Nuclear Information System (INIS)

    Stecker, F.W.

    1978-01-01

    It is stated that the framework of baryon symmetric big bang (BSBB) cosmology offers our greatest potential for deducing the evolution of the Universe, because its physical laws and processes have the minimum number of arbitrary assumptions about initial conditions in the big bang. In addition, it offers the possibility of explaining the photon-baryon ratio in the Universe and how galaxies and galaxy clusters are formed. BSBB cosmology also provides the only acceptable explanation at present for the origin of the cosmic γ-ray background radiation. (author)

  17. Markov Models for Handwriting Recognition

    CERN Document Server

    Plotz, Thomas

    2011-01-01

    Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden

  18. Markov chains and mixing times

    CERN Document Server

    Levin, David A; Wilmer, Elizabeth L

    2009-01-01

    This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r
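    The central quantity of this theory, the total variation distance to stationarity as a function of the number of steps t, is easy to compute for a small chain. Here a lazy random walk on a 6-cycle, a standard textbook example chosen for illustration:

```python
import numpy as np

# Lazy random walk on a 6-cycle: stay with probability 1/2, otherwise
# step to a uniformly chosen neighbour. Uniform is stationary by symmetry.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

pi = np.full(n, 1.0 / n)

def tv_from_start(t, start=0):
    """Total variation distance between P^t(start, .) and pi."""
    dist = np.linalg.matrix_power(P, t)[start]
    return 0.5 * np.abs(dist - pi).sum()

distances = [tv_from_start(t) for t in (1, 5, 20, 50)]
```

    The distances decay geometrically at a rate governed by the spectral gap of P, which is the size-and-geometry dependence the book studies.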

  19. Dynamic system evolution and markov chain approximation

    Directory of Open Access Journals (Sweden)

    Roderick V. Nicholas Melnik

    1998-01-01

    Full Text Available In this paper, computational aspects of the mathematical modelling of dynamic system evolution have been considered as a problem in information theory. The construction of mathematical models is treated as a decision-making process with limited available information. The solution of the problem is associated with a computational model based on heuristics of a Markov chain in a discrete space–time of events. A stable approximation of the chain has been derived and the limiting cases are discussed. An intrinsic interconnection of constructive, sequential, and evolutionary approaches in related optimization problems provides new challenges for future work.

  20. Homogenous finitary symmetric groups

    Directory of Open Access Journals (Sweden)

    Otto‎. ‎H‎. Kegel

    2015-03-01

    Full Text Available We characterize strictly diagonal types of embeddings of finitary symmetric groups in terms of cardinality and the characteristic. Namely, we prove the following. Let κ be an infinite cardinal. If G = ∪_{i=1}^∞ G_i, where G_i = FSym(κn_i) (H = ∪_{i=1}^∞ H_i, where H_i = Alt(κn_i)), is a group of strictly diagonal type and ξ = (p_1, p_2, …) is an infinite sequence of primes, then G is isomorphic to the homogenous finitary symmetric group FSym(κ)(ξ) (H is isomorphic to the homogenous alternating group Alt(κ)(ξ)), where n_0 = 1, n_i = p_1 p_2 ⋯ p_i.

  1. Integration by Parts and Martingale Representation for a Markov Chain

    Directory of Open Access Journals (Sweden)

    Tak Kuen Siu

    2014-01-01

    Full Text Available Integration-by-parts formulas for functions of fundamental jump processes relating to a continuous-time, finite-state Markov chain are derived using Bismut's change of measures approach to Malliavin calculus. New expressions for the integrands in stochastic integrals corresponding to representations of martingales for the fundamental jump processes are derived using the integration-by-parts formulas. These results are then applied to hedge contingent claims in a Markov chain financial market, which provides a practical motivation for the developments of the integration-by-parts formulas and the martingale representations.

  2. Numerical analysis of Markov-perfect equilibria with multiple stable steady states : A duopoly application with innovative firms

    NARCIS (Netherlands)

    Dawid, H.; Keoula, M.Y.; Kort, Peter

    2017-01-01

    This paper presents a numerical method for the characterization of Markov-perfect equilibria of symmetric differential games exhibiting coexisting stable steady states. The method, relying on the calculation of ‘local value functions’ through collocation in overlapping parts of the state space, is

  3. Symmetric vectors and algebraic classification

    International Nuclear Information System (INIS)

    Leibowitz, E.

    1980-01-01

    The concept of symmetric vector field in Riemannian manifolds, which arises in the study of relativistic cosmological models, is analyzed. Symmetric vectors are tied up with the algebraic properties of the manifold curvature. A procedure for generating a congruence of symmetric fields out of a given pair is outlined. The case of a three-dimensional manifold of constant curvature (''isotropic universe'') is studied in detail, with all its symmetric vector fields being explicitly constructed

  4. Consistency and Refinement for Interval Markov Chains

    DEFF Research Database (Denmark)

    Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel

    2012-01-01

    Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...

  5. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    A continuous-time, non-homogenous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.
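    The Monte-Carlo side of such a study can be sketched by drawing sample paths of the linear pure-birth process with exponential holding times. The rate and horizon below are invented, not fitted soil parameters:

```python
import random

# Sample paths of a linear pure-birth process: in state n the holding time
# is exponential with rate n * lam, after which the state increases by one.
# For a Yule process started at 1, the mean state at time t is exp(lam * t).
def sample_pit_depth_state(start, lam, horizon, rng):
    t, n = 0.0, start
    while True:
        t += rng.expovariate(n * lam)   # exponential holding time in state n
        if t > horizon:
            return n
        n += 1

rng = random.Random(42)
finals = [sample_pit_depth_state(1, lam=0.3, horizon=2.0, rng=rng)
          for _ in range(2000)]
mean_state = sum(finals) / len(finals)   # theory: exp(0.3 * 2.0) ≈ 1.822
```

    Averaging many such paths reproduces the mean-depth evolution that the closed-form transition function gives directly.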

  6. A Dependent Hidden Markov Model of Credit Quality

    Directory of Open Access Journals (Sweden)

    Małgorzata Wiktoria Korolkiewicz

    2012-01-01

    Full Text Available We propose a dependent hidden Markov model of credit quality. We suppose that the "true" credit quality is not observed directly but only through noisy observations given by posted credit ratings. The model is formulated in discrete time with a Markov chain observed in martingale noise, where "noise" terms of the state and observation processes are possibly dependent. The model provides estimates for the state of the Markov chain governing the evolution of the credit rating process and the parameters of the model, where the latter are estimated using the EM algorithm. The dependent dynamics allow for the so-called "rating momentum" discussed in the credit literature and also provide a convenient test of independence between the state and observation dynamics.

  7. Representations of locally symmetric spaces

    International Nuclear Information System (INIS)

    Rahman, M.S.

    1995-09-01

    Locally symmetric spaces in reference to globally and Hermitian symmetric Riemannian spaces are studied. Some relations between locally and globally symmetric spaces are exhibited. A lucid account of results on relevant spaces, motivated by fundamental problems, are formulated as theorems and propositions. (author). 10 refs

  8. Adaptive Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1996-01-01

    Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding....

  9. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo Methods. 3. Statistical Concepts. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its applications, and statistics. He enjoys writing for Resonance.

  10. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    2. The Markov Chain Case. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its applications, and statistics. He enjoys writing for Resonance. His spare time is spent listening to Indian classical music.

  11. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Systat Software Asia-Pacific (P) Ltd., in Bangalore, where the technical work for the development of the ... Markov chain structure) with applications to integration including integration in a Bayesian context. In Part 2, ... The applications of MCMC to Bayesian inference will have to wait for the concluding part of this series.

  12. A semi-Markov model for the duration of stay in a non-homogenous ...

    African Journals Online (AJOL)

    The semi-Markov approach to a non-homogenous manpower system is considered. The mean duration of stay in a grade and the total duration of stay in the system are obtained. A renewal type equation is developed and used in deriving the limiting distribution of the semi-Markov process. Empirical estimators of the ...
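
    The "total duration of stay" quantity mentioned here has a standard first-step computation. The sketch below uses a hypothetical three-grade system with made-up transition probabilities and mean sojourn times (not the paper's data): the expected total stay satisfies t = m + Qt.

```python
import numpy as np

# Hypothetical 3-grade manpower system. Q[i, j] is the probability of moving
# from grade i to grade j at a transition; the remaining mass 1 - Q[i, :].sum()
# is the probability of leaving the system from grade i.
Q = np.array([[0.0, 0.6, 0.1],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
m = np.array([2.0, 3.0, 5.0])  # mean sojourn time (e.g. years) in each grade

# Mean total duration of stay starting from each grade: t = m + Q t,
# i.e. t = (I - Q)^{-1} m.
t = np.linalg.solve(np.eye(3) - Q, m)
```

    Starting from grade 1 this gives 2 + 0.6(5.5) + 0.1(5) = 5.8 years, illustrating how grade-level sojourn times aggregate into a system-level duration.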

  13. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models

    Science.gov (United States)

    Preda, Vasile; Dedu, Silvia; Sheraz, Muhammad

    2014-08-01

    In this paper we construct the minimal entropy martingale for semi-Markov regime switching interest rate models using some general entropy measures. We prove that, for the one-period model, the minimal entropy martingale for semi-Markov processes in the case of the Tsallis and Kaniadakis entropies are the same as in the case of Shannon entropy.

  14. Limits on the dipole moments of the $\\tau$-lepton via the process $e^{+}e^{-} \\to \\tau^{+}\\tau^{-}\\gamma$ in a left-right symmetric model

    CERN Document Server

    Gutiérrez-Rodríguez, A; Noriega, Luis; 10.1142/S0217732304014689

    2004-01-01

    Limits on the anomalous magnetic moment and the electric dipole moment of the tau lepton are calculated through the reaction e⁺e⁻ → τ⁺τ⁻γ at the Z₁-pole and in the framework of a left-right symmetric model. The results are based on the recent data reported by the L3 collaboration at CERN LEP. Due to the stringent limit on the model mixing angle φ, the effect of this angle on the dipole moments is quite small.

  15. Benchmarking of a Markov multizone model of contaminant transport.

    Science.gov (United States)

    Jones, Rachael M; Nicas, Mark

    2014-10-01

    A Markov chain model previously applied to the simulation of advection and diffusion of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with grid length aspect when turbulence is incorporated by way of the turbulent diffusion coefficient, and advection is present. However, the magnitude of prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
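
    The core of a Markov multizone model is a transition matrix over spatial compartments plus absorbing states for removal. The toy sketch below (three well-mixed zones in a row with made-up transfer rates, not the paper's geometry) propagates a unit release and tracks the deposited/exhausted fraction.

```python
import numpy as np

# P[i, j]: probability that a particle in zone i is found in zone j one time
# step later. The last state is an absorbing "deposited or exhausted" sink.
P = np.array([
    [0.80, 0.15, 0.00, 0.05],
    [0.15, 0.70, 0.10, 0.05],
    [0.00, 0.15, 0.80, 0.05],
    [0.00, 0.00, 0.00, 1.00],
])
state = np.array([1.0, 0.0, 0.0, 0.0])  # unit mass released in zone 1
for _ in range(100):
    state = state @ P  # one Markov step of advection/diffusion plus removal
```

    Mass is conserved at every step, and the airborne fraction decays geometrically into the sink, the discrete analogue of stirred settling.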

  16. Influence of credit scoring on the dynamics of Markov chain

    Science.gov (United States)

    Galina, Timofeeva

    2015-11-01

    Markov processes are widely used to model the dynamics of a credit portfolio and to forecast portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares be described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control acting on the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.

  17. Markov chain solution of photon multiple scattering through turbid slabs.

    Science.gov (United States)

    Lin, Ying; Northrop, William F; Li, Xuesong

    2016-11-14

    This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with the commonly used Monte Carlo simulation for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in areas such as medical diagnosis, spray analysis, and atmospheric sciences.
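
    The matrix form alluded to here is the standard absorbing-chain computation. In the toy sketch below (two interior "layers" with made-up scattering probabilities, not the paper's discretization), the fundamental matrix N = (I − Q)⁻¹ yields the eventual exit probabilities by a single matrix product.

```python
import numpy as np

# Q: transitions among the transient interior layers per scattering event.
# R: layer -> (reflected, transmitted) exit probabilities per event.
Q = np.array([[0.1, 0.5],
              [0.5, 0.1]])
R = np.array([[0.4, 0.0],
              [0.0, 0.4]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits
B = N @ R                         # B[i, k]: P(exit via outcome k | start in layer i)
```

    Each row of B sums to one (every photon eventually exits), and a photon starting in the top layer is more likely to be reflected than transmitted, as expected.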

  18. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework to analyze large trajectories of biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert among each other with certain rates. HMMs simplify long timescale trajectories for human comprehension, and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
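
    Decoding a trajectory into its most likely hidden-state path is the basic HMM operation such analyses rely on. The sketch below runs the Viterbi recursion in log space on a hypothetical two-state model with made-up probabilities (not the chapter's Met-enkephalin model).

```python
import numpy as np

# Hypothetical two-state HMM: log initial, transition and emission probabilities.
start = np.log([0.6, 0.4])
trans = np.log([[0.8, 0.2],
                [0.3, 0.7]])
emit = np.log([[0.9, 0.1],   # P(observation symbol | hidden state)
               [0.2, 0.8]])

def viterbi(obs):
    """Most likely hidden state sequence for a list of observation symbols."""
    score = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + trans      # cand[i, j]: best path ending i -> j
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + emit[:, o]
    path = [int(score.argmax())]
    for ptr in reversed(back):             # backtrack through stored pointers
        path.append(int(ptr[path[-1]]))
    return path[::-1]

path = viterbi([0, 0, 1, 1])
```

    With these numbers the decoder switches states exactly when the emitted symbol switches, which is the behaviour one wants when states correspond to metastable conformations.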

  19. Markov chain analysis of single spin flip Ising simulations

    International Nuclear Information System (INIS)

    Hennecke, M.

    1997-01-01

    The Markov processes defined by random and loop-based schemes for single spin flip attempts in Monte Carlo simulations of the 2D Ising model are investigated, by explicitly constructing their transition matrices. Their analysis reveals that loops over all lattice sites using a Metropolis-type single spin flip probability often do not define ergodic Markov chains, and have distorted dynamical properties even if they are ergodic. The transition matrices also enable a comparison of the dynamics of random versus loop spin selection and Glauber versus Metropolis probabilities
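
    The explicit-transition-matrix analysis described in this abstract can be reproduced on a tiny lattice. The sketch below (illustrative parameters, and random site selection rather than the loop schemes the paper contrasts) builds the full 8×8 Metropolis matrix for a 3-site periodic Ising chain and checks that the Boltzmann distribution is stationary.

```python
import itertools
import numpy as np

L, beta = 3, 0.5                       # illustrative lattice size and temperature
states = list(itertools.product([-1, 1], repeat=L))
index = {s: k for k, s in enumerate(states)}

def energy(s):
    # Nearest-neighbour Ising energy on a periodic 1D chain.
    return -sum(s[i] * s[(i + 1) % L] for i in range(L))

# Single spin flip Metropolis with a uniformly random site each step.
P = np.zeros((2 ** L, 2 ** L))
for s in states:
    for i in range(L):
        t = list(s); t[i] = -t[i]; t = tuple(t)
        accept = min(1.0, np.exp(-beta * (energy(t) - energy(s))))
        P[index[s], index[t]] += accept / L
    P[index[s], index[s]] += 1.0 - P[index[s]].sum()  # rejected moves stay put

boltz = np.array([np.exp(-beta * energy(s)) for s in states])
boltz /= boltz.sum()
```

    Random site selection satisfies detailed balance, so the Boltzmann vector is an exact left fixed point of P; the paper's point is that deterministic loop orderings need not share this property.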

  20. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of t and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).
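
    The monitoring idea can be sketched with the HMM forward (filtering) recursion. The two states, transition rates, and emission probabilities below are hypothetical stand-ins, not the reference's model of any real system.

```python
import numpy as np

trans = np.array([[0.99, 0.01],   # "normal" -> "faulty": faults are rare but persistent
                  [0.05, 0.95]])
emit = np.array([[0.9, 0.1],      # P(discretized sensor symbol | state)
                 [0.3, 0.7]])
belief = np.array([1.0, 0.0])     # start certain the system is normal

def filter_step(belief, obs):
    """One forward-recursion step: predict, weight by the emission, renormalise."""
    belief = (belief @ trans) * emit[:, obs]
    return belief / belief.sum()

for obs in [0, 0, 1, 1, 1]:       # two normal readings, then three anomalous ones
    belief = filter_step(belief, obs)
```

    A single anomalous symbol barely moves the posterior, but a run of them pushes the fault probability past one half; thresholding this posterior rather than raw readings is what suppresses false alarms.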

  1. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  2. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π₁, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π₁ is replaced by a Harris ergodic Markov chain with invariant density π₁, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π₁, …, πₖ, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
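
    The basic identity the paper builds on is easy to demonstrate with iid draws (the paper's contribution concerns replacing these with Harris ergodic Markov chain output). Here the mean of π = N(1, 1) is estimated from draws out of π₁ = N(0, 1) using self-normalised weights π/π₁; the densities are illustrative choices.

```python
import math
import random

rng = random.Random(7)
num = den = 0.0
for _ in range(200_000):
    x = rng.gauss(0.0, 1.0)        # draw from pi1 = N(0, 1)
    w = math.exp(x - 0.5)          # density ratio N(1,1)/N(0,1) at x
    num += w * x
    den += w
estimate = num / den               # self-normalised importance sampling estimate
```

    The estimate converges to 1, the mean under π, even though no sample was ever drawn from π; the weight variance is what the paper's regeneration-based standard errors must account for.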

  3. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  4. The Candy model revisited: Markov properties and inference

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); R.S. Stoica

    2001-01-01

    textabstractThis paper studies the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present some

  5. finite markov chain model in lithofacies analysis: an example from ...

    African Journals Online (AJOL)

    Admin

    The Markov Chain Stochastic Process has been used both to analyze the vertical lithofacies of the Bida Sandstone (Campanian – Maastrichtian) in the Bida area .... a particular lithofacies state overlies another. Fig. 2a: Lithofacies F1 – F6 in outcrop section of the Bida Sandstone at the Bida Cemetery behind the Government.

  6. Transportation and concentration inequalities for bifurcating Markov chains

    DEFF Research Database (Denmark)

    Penda, S. Valère Bitseki; Escobar-Bach, Mikael; Guillin, Arnaud

    2017-01-01

    concentration inequalities. We also study deviation inequalities for the empirical means under relaxed assumptions on the Wasserstein contraction for the Markov kernels. Applications to bifurcating nonlinear autoregressive processes are considered for point-wise estimates of the non-linear autoregressive...

  7. Estimation of the workload correlation in a Markov fluid queue

    NARCIS (Netherlands)

    Kaynar, B.; Mandjes, M.R.H.

    2013-01-01

    This paper considers a Markov fluid queue, focusing on the correlation function of the stationary workload process. A simulation-based computation technique is proposed, which relies on a coupling idea. Then an upper bound on the variance of the resulting estimator is given, which reveals how the

  8. Application of Markov chain and entropy analysis to lithologic ...

    Indian Academy of Sciences (India)

    A statistical approach by a modified Markov process model and entropy function is used to prove that the early Permian Barakar Formation of the Bellampalli coalfield developed distinct cyclicities during deposition. From results, the transition path of lithological states typical for the Bellampalli basin is as: coarse to ...
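
    The two ingredients of such an analysis, the upward-transition probability matrix and its row entropies, can be computed directly from a facies log. The symbol sequence below is illustrative, not the Barakar Formation data.

```python
import math
from collections import Counter

# Hypothetical vertical lithofacies succession, oldest unit first.
log = list("ABCABCABABCABC")
pairs = Counter(zip(log, log[1:]))          # counts of "state b overlies state a"
facies = sorted(set(log))

# Upward-transition probability matrix P[a][b] = P(next facies is b | current is a).
P = {a: {b: 0.0 for b in facies} for a in facies}
for a in facies:
    total = sum(pairs[(a, b)] for b in facies)
    for b in facies:
        P[a][b] = pairs[(a, b)] / total if total else 0.0

def row_entropy(a):
    """Shannon entropy (bits) of the transitions out of facies a."""
    return -sum(p * math.log2(p) for p in P[a].values() if p > 0)
```

    Low row entropy flags a facies with a near-deterministic successor (a strong cyclic signal); high entropy indicates a facies whose successor is close to random.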

  9. Efficient Approximation of Optimal Control for Markov Games

    DEFF Research Database (Denmark)

    Fearnley, John; Rabe, Markus; Schewe, Sven

    2011-01-01

    We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately. Curr...

  10. Portfolio Allocation under the Vendor Managed Inventory: A Markov ...

    African Journals Online (AJOL)

    ADOWIE PERE

    also studied. The main objective of the study is to apply Markov decision process to portfolio allocation problem under vendor managed inventory environment in order to obtain the expected reward for each decision and the optimal policy that maps an action to a given state. Inventory management is very important in most.

  11. Hidden Markov model-based approach for generation of Pitman ...

    Indian Academy of Sciences (India)

    In this paper, an approach for feature extraction using Mel frequency cepstral coefficients (MFCC) and classification using hidden Markov models (HMM) for generating strokes comprising consonants and vowels (CV) in the process of production of Pitman shorthand language from spoken English is proposed. The.

  12. Markov-modulated Brownian motion with two reflecting barriers

    NARCIS (Netherlands)

    Ivanovs, J.

    2010-01-01

    We consider a Markov-modulated Brownian motion reflected to stay in a strip [0, B]. The stationary distribution of this process is known to have a simple form under some assumptions. We provide a short probabilistic argument leading to this result and explain its simplicity. Moreover, this argument

  13. Markov Chain Ontology Analysis (MCOA

    Directory of Open Access Journals (Sweden)

    Frost H

    2012-02-01

    Background: Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. Results: In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. Conclusion: A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing

  14. Markov Chain Ontology Analysis (MCOA).

    Science.gov (United States)

    Frost, H Robert; McCray, Alexa T

    2012-02-03

    Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
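
    The eigenvector computation at the heart of this approach is the stationary distribution of an ergodic chain, used as a PageRank-style importance score. The sketch below runs it on a made-up four-node graph (not an actual ontology) via the leading left eigenvector of the transition matrix.

```python
import numpy as np

# Illustrative ergodic transition matrix over four "ontology" nodes.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.3, 0.0, 0.3, 0.4],
    [0.3, 0.3, 0.0, 0.4],
    [0.1, 0.4, 0.4, 0.1],
])

# The stationary distribution is the left eigenvector of P for eigenvalue 1,
# i.e. the right eigenvector of P^T; normalise it to sum to one.
vals, vecs = np.linalg.eig(P.T)
k = np.argmax(vals.real)
pi = np.abs(vecs[:, k].real)
pi /= pi.sum()
```

    Each entry of pi is the long-run fraction of time the chain spends at a node, which is the kind of relative-importance score the eigenvector values above provide.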

  15. Monitoring volcano activity through Hidden Markov Model

    Science.gov (United States)

    Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.

    2013-12-01

    During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval Etna volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since the RMS time series behavior is considered to be stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using hidden Markov models (HMMs). HMMs are a powerful tool for modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to infer volcano states by means of HMMs and SAX.
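
    The SAX step that produces the discrete emissions can be sketched in a few lines: z-normalise the series, average it over equal windows (PAA), then map each average to a letter using Gaussian breakpoints. The series, window count, and three-letter alphabet below are illustrative, not the Etna RMS data.

```python
import statistics

def sax(series, n_windows, breakpoints=(-0.43, 0.43), alphabet="abc"):
    """Map a numeric series to a SAX word over a 3-symbol alphabet."""
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series)
    z = [(x - mu) / sd for x in series]            # z-normalise
    size = len(z) // n_windows
    word = ""
    for w in range(n_windows):
        avg = statistics.fmean(z[w * size:(w + 1) * size])   # PAA average
        word += alphabet[sum(avg > b for b in breakpoints)]  # letter by breakpoint
    return word

word = sax([1, 1, 1, 1, 5, 5, 5, 5, 9, 9, 9, 9], 3)
```

    A steadily rising series yields an ascending word such as "abc"; these symbol streams are then fed to the HMM as observed emissions. The breakpoints ±0.43 split a standard normal into three equiprobable bins.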

  16. Epitope discovery with phylogenetic hidden Markov models.

    LENUS (Irish Health Repository)

    Lacerda, Miguel

    2010-05-01

    Existing methods for the prediction of immunologically active T-cell epitopes are based on the amino acid sequence or structure of pathogen proteins. Additional information regarding the locations of epitopes may be acquired by considering the evolution of viruses in hosts with different immune backgrounds. In particular, immune-dependent evolutionary patterns at sites within or near T-cell epitopes can be used to enhance epitope identification. We have developed a mutation-selection model of T-cell epitope evolution that allows the human leukocyte antigen (HLA) genotype of the host to influence the evolutionary process. This is one of the first examples of the incorporation of environmental parameters into a phylogenetic model and has many other potential applications where the selection pressures exerted on an organism can be related directly to environmental factors. We combine this novel evolutionary model with a hidden Markov model to identify contiguous amino acid positions that appear to evolve under immune pressure in the presence of specific host immune alleles and that therefore represent potential epitopes. This phylogenetic hidden Markov model provides a rigorous probabilistic framework that can be combined with sequence or structural information to improve epitope prediction. As a demonstration, we apply the model to a data set of HIV-1 protein-coding sequences and host HLA genotypes.

  17. Using social network analysis tools in ecology : Markov process transition models applied to the seasonal trophic network dynamics of the Chesapeake Bay

    NARCIS (Netherlands)

    Johnson, Jeffrey C.; Luczkovich, Joseph J.; Borgatti, Stephen P.; Snijders, Tom A. B.; Luczkovich, S.P.

    2009-01-01

    Ecosystem components interact in complex ways and change over time due to a variety of both internal and external influences (climate change, season cycles, human impacts). Such processes need to be modeled dynamically using appropriate statistical methods for assessing change in network structure.

  18. Lie Markov models with purine/pyrimidine symmetry.

    Science.gov (United States)

    Fernández-Sánchez, Jesús; Sumner, Jeremy G; Jarvis, Peter D; Woodhams, Michael D

    2015-03-01

    Continuous-time Markov chains are a standard tool in phylogenetic inference. If homogeneity is assumed, the chain is formulated by specifying time-independent rates of substitutions between states in the chain. In applications, there are usually extra constraints on the rates, depending on the situation. If a model is formulated in this way, it is possible to generalise it and allow for an inhomogeneous process, with time-dependent rates satisfying the same constraints. It is then useful to require that, under some time restrictions, there exists a homogeneous average of this inhomogeneous process within the same model. This leads to the definition of "Lie Markov models" which, as we will show, are precisely the class of models where such an average exists. These models form Lie algebras and hence concepts from Lie group theory are central to their derivation. In this paper, we concentrate on applications to phylogenetics and nucleotide evolution, and derive the complete hierarchy of Lie Markov models that respect the grouping of nucleotides into purines and pyrimidines-that is, models with purine/pyrimidine symmetry. We also discuss how to handle the subtleties of applying Lie group methods, most naturally defined over the complex field, to the stochastic case of a Markov process, where parameter values are restricted to be real and positive. In particular, we explore the geometric embedding of the cone of stochastic rate matrices within the ambient space of the associated complex Lie algebra.

  19. Symmetric instability of monsoon flows

    OpenAIRE

    Krishnakumar, V.; Lau, K.-M.

    2011-01-01

    Using a zonally symmetric multi-level moist linear model, we have examined the possibility of symmetric instability in the monsoon region. Stability analyses with a zonally symmetric model using monthly ECMWF (Jan – Dec) zonal basic flows revealed both unstable as well as neutral modes. In the absence of cumulus heating, the linear stability of the monsoon flow changes dramatically with the emergence of many unstable modes in the month of May and lasting through August; whereas with the inclu...

  20. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    CERN Document Server

    Abler, Daniel; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy ...

  1. Symmetric minimally entangled typical thermal states for canonical and grand-canonical ensembles

    Science.gov (United States)

    Binder, Moritz; Barthel, Thomas

    2017-05-01

    Based on the density matrix renormalization group (DMRG), strongly correlated quantum many-body systems at finite temperatures can be simulated by sampling over a certain class of pure matrix product states (MPS) called minimally entangled typical thermal states (METTS). When a system features symmetries, these can be utilized to substantially reduce MPS computation costs. It is conceptually straightforward to simulate canonical ensembles using symmetric METTS. In practice, it is important to alternate between different symmetric collapse bases to decrease autocorrelations in the Markov chain of METTS. To this purpose, we introduce symmetric Fourier and Haar-random block bases that are efficiently mixing. We also show how grand-canonical ensembles can be simulated efficiently with symmetric METTS. We demonstrate these approaches for spin-1/2 XXZ chains and discuss how the choice of the collapse bases influences autocorrelations as well as the distribution of measurement values and, hence, convergence speeds.

  2. Symmetric q-Bessel functions

    Directory of Open Access Journals (Sweden)

    Giuseppe Dattoli

    1996-05-01

    q-analogs of Bessel functions, symmetric under the interchange of q and q^−1, are introduced. The definition is based on the generating function realized as a product of symmetric q-exponential functions with appropriate arguments. Symmetric q-Bessel functions are shown to satisfy various identities as well as second-order q-differential equations, which in the limit q → 1 reproduce those obeyed by the usual cylindrical Bessel functions. A brief discussion on the possible algebraic setting for symmetric q-Bessel functions is also provided.

  3. Markov Chains on Orbits of Permutation Groups

    OpenAIRE

    Niepert, Mathias

    2014-01-01

    We present a novel approach to detecting and utilizing symmetries in probabilistic graphical models with two main contributions. First, we present a scalable approach to computing generating sets of permutation groups representing the symmetries of graphical models. Second, we introduce orbital Markov chains, a novel family of Markov chains leveraging model symmetries to reduce mixing times. We establish an insightful connection between model symmetries and rapid mixing of orbital Markov chai...

  4. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    2012-01-01

    Given a Markov chain ($C_n$) on a discrete state space $S$, a Markov chain ($C_n, M_n$) on the product space $S \times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both
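
    A toy simulation conveys the construction. The dynamics below are an assumed concrete instance (random walk on a cycle, with the mouse jumping to a random neighbour whenever the cat lands on it), chosen for illustration rather than taken from the paper.

```python
import random

def cat_and_mouse(n_sites=10, n_steps=10_000, seed=1):
    """Simulate the cat-and-mouse chain on a cycle; return the number of meetings."""
    rng = random.Random(seed)
    cat, mouse = 0, n_sites // 2
    meetings = 0
    for _ in range(n_steps):
        # The cat (first coordinate) moves like an ordinary random walk.
        cat = (cat + rng.choice((-1, 1))) % n_sites
        # The mouse (second coordinate) moves only when the cat reaches it.
        if cat == mouse:
            meetings += 1
            mouse = (mouse + rng.choice((-1, 1))) % n_sites
    return meetings

meetings = cat_and_mouse()
```

    The meeting count is the natural clock of the second coordinate: between meetings the mouse is frozen, which is exactly the asymmetry the scaling analysis studies.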

  5. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...
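
    The lumpability condition that licenses such micro-to-macro aggregation is easy to check numerically: a partition is strongly lumpable when, within each block, every state has the same total transition probability into each block. The matrix and partitions below are illustrative, not drawn from the book.

```python
import numpy as np

# A 4-state micro-chain and a candidate 2-block partition of its states.
P = np.array([
    [0.20, 0.30, 0.40, 0.10],
    [0.30, 0.20, 0.10, 0.40],
    [0.25, 0.25, 0.25, 0.25],
    [0.25, 0.25, 0.25, 0.25],
])
blocks = [[0, 1], [2, 3]]

def is_lumpable(P, blocks):
    """Strong lumpability: block-aggregated rows must agree within each block."""
    for block in blocks:
        agg = [[P[i, other].sum() for other in blocks] for i in block]
        if not all(np.allclose(agg[0], row) for row in agg[1:]):
            return False
    return True
```

    Here the partition {0,1}, {2,3} passes (each state sends probability 0.5 to each block), so the micro-chain projects exactly onto a 2-state macro-chain; a partition that splits state 0 off on its own fails.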

  6. Projection methods for the numerical solution of Markov chain models

    Science.gov (United States)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
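
    A minimal Arnoldi sketch of the Krylov idea, on a tiny illustrative chain: project P^T onto span(v, Av, ..., A^(m-1)v) via an orthonormal basis, then take the Ritz vector for the eigenvalue closest to 1 as the approximate stationary distribution. This is a didactic toy, not the paper's production algorithm.

```python
import numpy as np

def arnoldi_stationary(P, m=5, seed=0):
    """Approximate the stationary distribution of P via an m-step Arnoldi projection."""
    A = P.T
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m + 1))           # orthonormal Krylov basis
    H = np.zeros((m + 1, m))           # projected (Hessenberg) matrix
    v = rng.random(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):         # modified Gram-Schmidt orthogonalisation
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:        # Krylov subspace became invariant: stop early
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    vals, vecs = np.linalg.eig(H[:m, :m])
    k = np.argmin(np.abs(vals - 1.0))  # Ritz pair nearest the stationary eigenvalue
    pi = np.abs((V[:, :m] @ vecs[:, k]).real)
    return pi / pi.sum()

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
pi = arnoldi_stationary(P, m=3)
```

    With m equal to the chain size the projection is exact; the point of the method is that for large N a modest m already gives a good Ritz approximation to the stationary vector.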

  7. Resonance Energy Transfer-Based Molecular Switch Designed Using a Systematic Design Process Based on Monte Carlo Methods and Markov Chains

    Science.gov (United States)

    Rallapalli, Arjun

    A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called closed-diffusive exciton valve (C-DEV) in which the input to output transfer function is controlled by an external energy source, similar to a semiconductor transistor like the MOSFET. Due to their biocompatibility, molecular devices like the C-DEVs can be used to introduce computing power in biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics in RET devices are stochastic in nature, making them suitable for stochastic computing in which true random distribution generation is critical. In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications. We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the

  8. Schmidt games and Markov partitions

    Science.gov (United States)

    Tseng, Jimmy

    2009-03-01

    Let T be a C2-expanding self-map of a compact, connected, C∞, Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x0 ∈ M, the set of points whose forward orbit closures miss x0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions.

  9. Numerical research of the optimal control problem in the semi-Markov inventory model

    International Nuclear Information System (INIS)

    Gorshenin, Andrey K.; Belousov, Vasily V.; Shnourkoff, Peter V.; Ivanov, Alexey V.

    2015-01-01

    This paper is devoted to the numerical simulation of a stochastic inventory-management system using a controlled semi-Markov process. The results of special software for studying the system and finding the optimal control are presented

  10. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
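
    The reversible maximum likelihood estimate can be computed with a simple self-consistent fixed-point iteration; the sketch below (with an illustrative count matrix) conveys the idea but is far less refined than the PyEMMA implementation:

```python
import numpy as np

def reversible_mle(C, n_iter=1000):
    """Reversible MLE transition matrix from a count matrix C via the
    fixed-point update X_ij = (C_ij + C_ji) / (c_i/x_i + c_j/x_j)."""
    C = np.asarray(C, dtype=float)
    c = C.sum(axis=1)                    # observed row counts
    X = C + C.T                          # symmetric initial guess
    for _ in range(n_iter):
        x = X.sum(axis=1)
        X = (C + C.T) / (c[:, None] / x[:, None] + c[None, :] / x[None, :])
    x = X.sum(axis=1)
    return X / x[:, None], x / x.sum()   # transition matrix, stationary pi

C = np.array([[5.0, 3.0, 0.0],
              [3.0, 10.0, 4.0],
              [0.0, 4.0, 8.0]])
T, pi = reversible_mle(C)
```

    Because X stays symmetric throughout the iteration, the returned T satisfies detailed balance, pi_i T_ij = pi_j T_ji, by construction.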

  11. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
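
    In its simplest analog form, before forced transitions or failure biasing are applied, the random walk amounts to sampling exponential holding times of a continuous-time Markov chain. A sketch for a single repairable component, checked against the analytic two-state solution (rates and times are illustrative):

```python
import numpy as np

def unavailability(lam, mu, t_end, n_paths, rng):
    """Analog Monte Carlo estimate of P(component is down at t_end)
    for failure rate lam and repair rate mu."""
    down = 0
    for _ in range(n_paths):
        t, state = 0.0, 0                      # 0 = up, 1 = down
        while True:
            rate = lam if state == 0 else mu
            t += rng.exponential(1.0 / rate)   # holding time in state
            if t > t_end:
                break
            state = 1 - state                  # fail or repair
        down += state
    return down / n_paths

rng = np.random.default_rng(1)
est = unavailability(0.2, 1.0, 3.0, 20000, rng)
# analytic Markov solution: lam/(lam+mu) * (1 - exp(-(lam+mu)*t))
exact = (0.2 / 1.2) * (1 - np.exp(-1.2 * 3.0))
```

    Variance reduction schemes such as forced transitions modify the sampling of the holding times and reweight each walk accordingly; they are not shown here.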

  12. Markov-modulated and feedback fluid queues

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.

    1998-01-01

    In the last twenty years the field of Markov-modulated fluid queues has received considerable attention. In these models a fluid reservoir receives and/or releases fluid at rates which depend on the actual state of a background Markov chain. In the first chapter of this thesis we give a short

  13. Model Checking Algorithms for Markov Reward Models

    NARCIS (Netherlands)

    Cloth, Lucia; Cloth, L.

    2006-01-01

    Model checking Markov reward models unites two different approaches of model-based system validation. On the one hand, Markov reward models have a long tradition in model-based performance and dependability evaluation. On the other hand, a formal method like model checking allows for the precise

  14. Symmetric Decomposition of Asymmetric Games.

    Science.gov (United States)

    Tuyls, Karl; Pérolat, Julien; Lanctot, Marc; Ostrovski, Georg; Savani, Rahul; Leibo, Joel Z; Ord, Toby; Graepel, Thore; Legg, Shane

    2018-01-17

    We introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (A and B) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the symmetric counterpart game determined by payoff table A, and x is a Nash equilibrium of the symmetric counterpart game determined by payoff table B. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples.
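
    The main finding is easy to check numerically on a small example. Below, an illustrative Battle-of-the-Sexes-style bimatrix game (not from the paper) with its mixed Nash equilibrium; the test for a single-population symmetric game is that no pure strategy earns more against the population strategy than the strategy itself:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # row player's payoff table
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])        # column player's payoff table

# mixed Nash equilibrium (x, y) of (A, B): each makes the other indifferent
x = np.array([2 / 3, 1 / 3])      # row player's strategy
y = np.array([1 / 3, 2 / 3])      # column player's strategy

def is_symmetric_ne(M, s, tol=1e-12):
    """s is a symmetric Nash equilibrium of the single-population game
    with payoff table M iff no pure strategy earns more against s."""
    payoffs = M @ s
    return bool(np.all(payoffs <= s @ payoffs + tol))

# decomposition claim: y is an equilibrium of table A's symmetric game,
# and x is an equilibrium of table B's symmetric game
ok = is_symmetric_ne(A, y) and is_symmetric_ne(B, x)
```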

  15. The embedding problem for markov models of nucleotide substitution.

    Directory of Open Access Journals (Sweden)

    Klara L Verbyla

    Full Text Available Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it cannot be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could affect the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption and consequently the impact is likely to be minor.

  16. Learning Markov Decision Processes for Model Checking

    DEFF Research Database (Denmark)

    Mao, Hua; Chen, Yingke; Jaeger, Manfred

    2012-01-01

    Constructing an accurate system model for formal model verification can be both resource demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend the algorithm...... is performed by analyzing the probabilistic linear temporal logic properties of the system as well as by analyzing the schedulers, in particular the optimal schedulers, induced by the learned models....

  17. Markov operators, positive semigroups and approximation processes

    CERN Document Server

    Altomare, Francesco; Leonessa, Vita; Rasa, Ioan

    2015-01-01

    In recent years several investigations have been devoted to the study of large classes of (mainly degenerate) initial-boundary value evolution problems in connection with the possibility to obtain a constructive approximation of the associated positive C_0-semigroups. In this research monograph we present the main lines of a theory which finds its root in the above-mentioned research field.

  18. Long time behavior of Markov processes

    Directory of Open Access Journals (Sweden)

    Cattiaux Patrick

    2014-01-01

    Full Text Available These notes correspond to a three hours lecture given during the workshop “Metastability and Stochastic Processes”held in Marne-la-Vallée in September 21st-23rd 2011. I would like to warmly thank the organizers Tony Lelièvre and Arnaud Guillin for a very nice organization and for obliging me first to give the lecture, second to write these notes. I also want to acknowledge all the people who attended the lecture.

  19. Markov Decision Processes Discrete Stochastic Dynamic Programming

    CERN Document Server

    Puterman, Martin L

    2005-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet

  20. Classification Using Markov Blanket for Feature Selection

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Luo, Jian

    2009-01-01

    Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method of Markov blanket induction algorithm...... for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using a Markov blanket...... induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance....

  1. State space orderings for Gauss-Seidel in Markov chains revisited

    Energy Technology Data Exchange (ETDEWEB)

    Dayar, T. [Bilkent Univ., Ankara (Turkey)

    1996-12-31

    Symmetric state space orderings of a Markov chain may be used to reduce the magnitude of the subdominant eigenvalue of the (Gauss-Seidel) iteration matrix. Orderings that maximize the elemental mass or the number of nonzero elements in the dominant term of the Gauss-Seidel splitting (that is, the term approximating the coefficient matrix) do not necessarily converge faster. An ordering of a Markov chain that satisfies Property-R is semi-convergent. On the other hand, there are semi-convergent symmetric state space orderings that do not satisfy Property-R. For a given ordering, a simple approach for checking Property-R is shown. An algorithm that orders the states of a Markov chain so as to increase the likelihood of satisfying Property-R is presented. The computational complexity of the ordering algorithm is less than that of a single Gauss-Seidel iteration (for sparse matrices). In doing all this, the aim is to gain an insight for faster converging orderings. Results from a variety of applications improve the confidence in the algorithm.
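
    To make the role of the splitting concrete, here is a toy Gauss-Seidel sweep for the stationary vector, parameterized by a state ordering (an illustration of the setting only; the paper's Property-R ordering algorithm is not reproduced):

```python
import numpy as np

def gauss_seidel_stationary(P, order, n_sweeps=500):
    """Gauss-Seidel iteration for pi = pi P, sweeping the states in the
    given order; the ordering determines the Gauss-Seidel splitting."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(n_sweeps):
        for i in order:
            # x_i <- (sum_{j != i} P_ji x_j) / (1 - P_ii)
            x[i] = (P[:, i] @ x - P[i, i] * x[i]) / (1.0 - P[i, i])
        x /= x.sum()                     # renormalize each sweep
    return x

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.4, 0.4]])
pi1 = gauss_seidel_stationary(P, [0, 1, 2])
pi2 = gauss_seidel_stationary(P, [2, 1, 0])   # reversed ordering
```

    Both orderings converge to the same stationary vector here; the point of the paper is that the choice of ordering changes the subdominant eigenvalue of the iteration matrix and hence the convergence rate.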

  2. Estimation of Time-Varying Autoregressive Symmetric Alpha Stable

    Data.gov (United States)

    National Aeronautics and Space Administration — In the last decade alpha-stable distributions have become a standard model for impulsive data. Especially the linear symmetric alpha-stable processes have found...

  3. Threshold partitioning of sparse matrices and applications to Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hwajeong; Szyld, D.B. [Temple Univ., Philadelphia, PA (United States)

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) Classical block methods, such as Block Gauss Seidel. (2) Preconditioned GMRES, where a block diagonal preconditioner is used. (3) Iterative aggregation method (also called aggregation/disaggregation) where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and thus adding little computational effort to the overall solution.

  4. Symmetric $q$-deformed KP hierarchy

    OpenAIRE

    Tian, Kelei; He, Jingsong; Su, Yucai

    2014-01-01

    Based on the analytic property of the symmetric $q$-exponent $e_q(x)$, a new symmetric $q$-deformed Kadomtsev-Petviashvili ($q$-KP) hierarchy associated with the symmetric $q$-derivative operator $\partial_q$ is constructed. Furthermore, the symmetric $q$-CKP hierarchy and symmetric $q$-BKP hierarchy are defined. Here we also investigate the additional symmetries of the symmetric $q$-KP hierarchy.

  5. Schmidt games and Markov partitions

    International Nuclear Information System (INIS)

    Tseng, Jimmy

    2009-01-01

    Let T be a C^2-expanding self-map of a compact, connected, C^∞ Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x_0 in M, the set of points whose forward orbit closures miss x_0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions

  6. Master equation for She-Leveque scaling and its classification in terms of other Markov models of developed turbulence

    OpenAIRE

    Nickelsen, Daniel

    2017-01-01

    We derive the Markov process equivalent to She-Leveque scaling in homogeneous and isotropic turbulence. The Markov process is a jump process for velocity increments $u(r)$ in scale $r$ in which the jumps occur randomly but with deterministic width in $u$. From its master equation we establish a prescription to simulate the She-Leveque process and compare it with Kolmogorov scaling. To put the She-Leveque process into the context of other established turbulence models on the Markov level, we d...

  7. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies.A detailed and rigorous presentation of Markov chains with discrete time and state space.An appendix presenting probabilistic notions that are nec

  8. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  9. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general....... The PHMM structure and the conditions of the convergence proof allows for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme is given. The results indicate that the PHMM can adapt...

  10. Extracting Markov Models of Peptide Conformational Dynamics from Simulation Data.

    Science.gov (United States)

    Schultheis, Verena; Hirschberger, Thomas; Carstens, Heiko; Tavan, Paul

    2005-07-01

    A high-dimensional time series obtained by simulating a complex and stochastic dynamical system (like a peptide in solution) may code an underlying multiple-state Markov process. We present a computational approach to most plausibly identify and reconstruct this process from the simulated trajectory. Using a mixture of normal distributions we first construct a maximum likelihood estimate of the point density associated with this time series and thus obtain a density-oriented partition of the data space. This discretization allows us to estimate the transfer operator as a matrix of moderate dimension at sufficient statistics. A nonlinear dynamics involving that matrix and, alternatively, a deterministic coarse-graining procedure are employed to construct respective hierarchies of Markov models, from which the model most plausibly mapping the generating stochastic process is selected by consideration of certain observables. Within both procedures the data are classified in terms of prototypical points, the conformations, marking the various Markov states. As a typical example, the approach is applied to analyze the conformational dynamics of a tripeptide in solution. The corresponding high-dimensional time series has been obtained from an extended molecular dynamics simulation.
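
    The backbone of the pipeline — discretize the trajectory around prototype points, count transitions, row-normalize to get the transfer operator — can be sketched on a toy 1-D trajectory (the two-state chain with Gaussian noise below is illustrative, far simpler than the mixture-density partition of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "simulation data": a 1-D trajectory driven by a hidden 2-state chain
T_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
centers = np.array([0.0, 3.0])           # prototype points (conformations)
states = [0]
for _ in range(5000):
    states.append(rng.choice(2, p=T_true[states[-1]]))
traj = centers[states] + rng.normal(0.0, 0.3, size=len(states))

# density-oriented partition, reduced here to nearest-prototype assignment
labels = np.argmin(np.abs(traj[:, None] - centers[None, :]), axis=1)

# estimate the transfer operator: count transitions, row-normalize
C = np.zeros((2, 2))
for a, b in zip(labels[:-1], labels[1:]):
    C[a, b] += 1
T_est = C / C.sum(axis=1, keepdims=True)
```

    With well-separated prototypes, the estimated matrix recovers the generating transition probabilities up to sampling noise.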

  11. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space ${\cal S}$, a Markov chain $(C_n,M_n)$ on the product space ${\cal S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain

  12. Ricin and the assassination of Georgi Markov.

    Science.gov (United States)

    Papaloucas, M; Papaloucas, C; Stergioulas, A

    2008-10-01

    The purpose of this study was to investigate the cause of death of Georgi Markov. Georgi Markov, a well-known Bulgarian novelist and playwright and a dissident of the communist regime in his country, escaped to England, where he dedicated himself to broadcasting on the BBC World Service, Radio Free Europe and the German Deutsche Welle against the communist party and especially against its leader Todor Zhivkov, who at a party meeting said that he wanted Markov silenced forever. On 7 September 1978 Markov received a deadly dose of the poison ricin, injected into his thigh with a specially modified umbrella. He died a few days later without a final diagnosis. The autopsy revealed the poisoning. Despite the efforts of the police, Interpol and diplomacy, the murderer still remains unknown.

  13. Recursive utility in a Markov environment with stochastic growth.

    Science.gov (United States)

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.

  14. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  15. Robust Dynamics and Control of a Partially Observed Markov Chain

    International Nuclear Information System (INIS)

    Elliott, R. J.; Malcolm, W. P.; Moore, J. P.

    2007-01-01

    In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits, see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established

  16. A Markov chain representation of the multiple testing problem.

    Science.gov (United States)

    Cabras, Stefano

    2018-02-01

    The problem of multiple hypothesis testing can be represented as a Markov process where a new alternative hypothesis is accepted in accordance with its relative evidence to the currently accepted one. This virtual and not formally observed process provides the most probable set of non null hypotheses given the data; it plays the same role as Markov Chain Monte Carlo in approximating a posterior distribution. To apply this representation and obtain the posterior probabilities over all alternative hypotheses, it is enough to have, for each test, barely defined Bayes Factors, e.g. Bayes Factors obtained up to an unknown constant. Such Bayes Factors may either arise from using default and improper priors or from calibrating p-values with respect to their corresponding Bayes Factor lower bound. Both sources of evidence are used to form a Markov transition kernel on the space of hypotheses. The approach leads to easy interpretable results and involves very simple formulas suitable to analyze large datasets as those arising from gene expression data (microarray or RNA-seq experiments).
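
    Because the acceptance step uses only relative evidence, Bayes factors known up to a common unknown constant suffice: the constant cancels in the ratio, and the chain's stationary distribution is the normalized posterior over hypotheses. A minimal sketch with illustrative unnormalized weights:

```python
import numpy as np

rng = np.random.default_rng(0)

w = np.array([1.0, 2.0, 3.0, 4.0])  # evidence, known only up to a constant
n_hyp, n_steps = len(w), 50000

current, visits = 0, np.zeros(n_hyp)
for _ in range(n_steps):
    proposal = rng.integers(n_hyp)
    # only the ratio enters, so the unknown constant cancels
    if rng.random() < min(1.0, w[proposal] / w[current]):
        current = proposal
    visits[current] += 1

posterior = visits / n_steps        # converges to w / w.sum()
```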

  17. Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics

    International Nuclear Information System (INIS)

    Pate, E.B.

    1986-01-01

    This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and quantization which is applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst case Markov-chain process, upon which a robust filter/predictor design is based

  18. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer thicknesses are not as well estimated as the layer electrical conductivities, because layer thickness in the model exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to the field measurements in a drip irrigation system demonstrates that the parameters of the model can be well estimated for the saline soil as compared to the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
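
    A minimal 1-D adaptive Metropolis sketch, with Haario-style scale adaptation on a toy Gaussian target (the EMI forward model of the study is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(x):
    """Toy log-posterior: a Gaussian target N(2, 1.5^2)."""
    return -0.5 * ((x - 2.0) / 1.5) ** 2

n_steps, burn = 30000, 5000
chain = np.empty(n_steps)
x, scale = 0.0, 1.0
for i in range(n_steps):
    y = x + scale * rng.normal()                   # random-walk proposal
    if np.log(rng.random()) < log_post(y) - log_post(x):
        x = y                                      # Metropolis accept
    chain[i] = x
    if i >= 100:                                   # adapt proposal scale
        scale = 2.38 * chain[max(0, i - 2000):i + 1].std() + 1e-6

samples = chain[burn:]
```

    The adaptation tunes the proposal to the running spread of the chain, which is the same idea, in one dimension, as adapting the proposal covariance in the multi-parameter EMI inversion.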

  19. Differential geometry and symmetric spaces

    CERN Document Server

    Helgason, Sigurdur

    2001-01-01

    Sigurdur Helgason's Differential Geometry and Symmetric Spaces was quickly recognized as a remarkable and important book. For many years, it was the standard text both for Riemannian geometry and for the analysis and geometry of symmetric spaces. Several generations of mathematicians relied on it for its clarity and careful attention to detail. Although much has happened in the field since the publication of this book, as demonstrated by Helgason's own three-volume expansion of the original work, this single volume is still an excellent overview of the subjects. For instance, even though there

  20. Looking for symmetric Bell inequalities

    International Nuclear Information System (INIS)

    Bancal, Jean-Daniel; Gisin, Nicolas; Pironio, Stefano

    2010-01-01

    Finding all Bell inequalities for a given number of parties, measurement settings and measurement outcomes is in general a computationally hard task. We show that all Bell inequalities which are symmetric under the exchange of parties can be found by examining a symmetrized polytope which is simpler than the full Bell polytope. As an illustration of our method, we generate 238 885 new Bell inequalities and 1085 new Svetlichny inequalities. We find, in particular, facet inequalities for Bell experiments involving two parties and two measurement settings that are not of the Collins-Gisin-Linden-Massar-Popescu type.

  1. Symmetric autocompensating quantum key distribution

    Science.gov (United States)

    Walton, Zachary D.; Sergienko, Alexander V.; Levitin, Lev B.; Saleh, Bahaa E. A.; Teich, Malvin C.

    2004-08-01

    We present quantum key distribution schemes which are autocompensating (require no alignment) and symmetric (Alice and Bob receive photons from a central source) for both polarization and time-bin qubits. The primary benefit of the symmetric configuration is that both Alice and Bob may have passive setups (neither Alice nor Bob is required to make active changes for each run of the protocol). We show that both the polarization and the time-bin schemes may be implemented with existing technology. The new schemes are related to previously described schemes by the concept of advanced waves.

  2. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    Science.gov (United States)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model combines a grey prediction model with a Markov chain and shows clear advantages for non-stationary and volatile data sequences. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real-valued boundaries, which directly affects the accuracy of the forecasts. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate, in an objective way, the possibility that an observed value belongs to each state, reflecting the degree of preference for the different states. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with the GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
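    The grey half of a Grey-Markov model is typically the GM(1,1) model. A minimal sketch of plain GM(1,1) follows (without the background value optimization or the Markov state correction discussed above; the geometric test series is made up):

```python
import math

def gm11(x0):
    """Fit GM(1,1) to a positive series; return a forecast function."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # 1-AGO series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # least squares for a, b in x0(k) = -a*z(k) + b (2x2 normal equations)
    m = n - 1
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    sy = sum(y)
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def forecast(k):
        """Restored value x0_hat at index k (k = 0 is the first datum)."""
        if k == 0:
            return x0[0]
        x1_hi = (x0[0] - b / a) * math.exp(-a * k) + b / a
        x1_lo = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        return x1_hi - x1_lo

    return forecast

data = [100.0, 110.0, 121.0, 133.1, 146.41]   # made-up 10%-growth series
f = gm11(data)
next_value = f(5)                             # one step beyond the sample
```

    For a near-exponential series like this one the fit is almost exact; the Markov chain correction in the full Grey-Markov model is what handles the volatile residuals that GM(1,1) alone cannot.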

  3. Finding metastabilities in reversible Markov chains based on incomplete sampling

    Directory of Open Access Journals (Sweden)

    Fackeldey Konstantin

    2017-01-01

    Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains one needs to provide the corresponding transition matrix P. In many applications such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it differs from matrix completion approaches, which try to approximate the transition matrix using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.
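    The first step described above, estimating the stationary distribution of a transition matrix, can be sketched for the fully sampled case with simple power iteration; the 4-state matrix below, with two weakly coupled blocks, is a hypothetical example of the metastable structure that PCCA+ would detect.

```python
def stationary(P, n_iter=2000):
    """Stationary distribution of a row-stochastic matrix via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Two metastable blocks {0, 1} and {2, 3}, weakly coupled via the 0.01 entries
P = [
    [0.89, 0.10, 0.01, 0.00],
    [0.10, 0.89, 0.00, 0.01],
    [0.01, 0.00, 0.89, 0.10],
    [0.00, 0.01, 0.10, 0.89],
]
pi = stationary(P)
```

    Since this P is symmetric (hence doubly stochastic), the stationary distribution is uniform; the slow mixing between the two blocks is exactly the metastability that PCCA+ extracts from the leading invariant subspace.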

  4. Hidden Markov model to predict the amino acid profile

    Science.gov (United States)

    Handamari, Endang Wahyu

    2017-12-01

    Sequence alignment is the basic method in sequence analysis: the process of arranging two or more primary sequences so that their similarity becomes apparent. One use of this method is to predict the structure or function of an unknown protein from the known structure or function of a protein with a similar sequence in the database. Proteins are macromolecules that make up more than half of the cell. A protein is a chain built from combinations of 20 amino acids, and each type of protein has a unique number and sequence of amino acids. One method that can be applied to sequence alignment is the genetic algorithm; another is the Hidden Markov Model (HMM). The HMM is an extension of the Markov chain that can be applied when the states cannot be observed directly. For sequence alignment, the observed state (O) is the sequence of amino acids in three categories: deletion, insertion and match, while the hidden state is the amino acid residue, which determines the protein family corresponding to observation O.
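    Decoding the hidden states of an HMM given an observation sequence is commonly done with the Viterbi algorithm. Below is a minimal log-space sketch on a made-up two-state match/insert model; the probabilities are illustrative only, not a real profile HMM.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state path, computed in log space."""
    V = [{s: (math.log(start_p[s] * emit_p[s][obs[0]]), [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] + math.log(trans_p[p][s]))
            score = (V[t - 1][prev][0] + math.log(trans_p[prev][s])
                     + math.log(emit_p[s][obs[t]]))
            V[t][s] = (score, V[t - 1][prev][1] + [s])
    return max(V[-1].values())[1]

# Toy model: 'M' (match) strongly emits the conserved residue 'A',
# while 'I' (insert) prefers 'G'.  All numbers are invented.
states = ('M', 'I')
start_p = {'M': 0.8, 'I': 0.2}
trans_p = {'M': {'M': 0.9, 'I': 0.1}, 'I': {'M': 0.5, 'I': 0.5}}
emit_p = {'M': {'A': 0.9, 'G': 0.1}, 'I': {'A': 0.2, 'G': 0.8}}

path = viterbi(['A', 'A', 'G', 'A'], states, start_p, trans_p, emit_p)
```

    Here the sticky match state absorbs the single 'G' rather than switching, so the decoded path stays in 'M' throughout.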

  5. Spatial Markov Kernels for Image Categorization and Annotation.

    Science.gov (United States)

    Zhiwu Lu; Ip, H H S

    2011-08-01

    This paper presents a novel discriminative stochastic method for image categorization and annotation. We first divide the images into blocks on a regular grid and then generate visual keywords by quantizing the features of the image blocks. The traditional Markov chain model is generalized to capture 2-D spatial dependence between visual keywords by defining the notion of "past" as what we have observed in a row-wise raster scan. The proposed spatial Markov chain model can be trained via maximum-likelihood estimation and then used directly for image categorization. Since this is a purely generative method, we can further improve it by developing new discriminative learning. Hence, spatial dependence between visual keywords is incorporated into kernels in two different ways, for use with a support vector machine in a discriminative approach to the image categorization problem. Moreover, a kernel combination is used to handle rotation and multiscale issues. Experiments on several image databases demonstrate that our spatial Markov kernel method for image categorization achieves promising results. When applied to image annotation, which can be considered a multilabel image categorization process, our method also outperforms state-of-the-art techniques.

  6. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric, which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400-500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden.
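    The Bayesian updating step at the heart of such a methodology, turning a prior class probability into a posterior as new evidence arrives, reduces to Bayes' rule over discrete classes. A sketch with hypothetical lithology classes and likelihoods (not values from the paper):

```python
def bayes_update(prior, likelihood):
    """Posterior over discrete classes from a prior and P(observation | class)."""
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    z = sum(unnorm.values())  # normalizing constant P(observation)
    return {c: p / z for c, p in unnorm.items()}

# Hypothetical prior over three lithology classes at a candidate site
prior = {'granite': 0.5, 'diorite': 0.3, 'greenstone': 0.2}
# Hypothetical likelihood of observing a high RQD value under each class
likelihood = {'granite': 0.8, 'diorite': 0.4, 'greenstone': 0.1}
posterior = bayes_update(prior, likelihood)
```

    The posterior becomes the prior for the next observation, which is the sequential updating of professional judgment that the methodology relies on.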

  7. Hidden Markov models: the best models for forager movements?

    Science.gov (United States)

    Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan

    2013-01-01

    One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through Hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models which cast the inference of behavioural modes as a classification problem, and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that, with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.
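    The structural difference exploited here is that an HMM's self-transitions force geometrically distributed sojourn times in each behavioural mode, whereas an HSMM models durations explicitly. The geometric dwell-time implication can be checked by simulation (the self-transition probability below is a toy value):

```python
import random

def hmm_sojourns(p_stay, n, seed=1):
    """Sojourn lengths implied by an HMM self-transition: geometric."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n):
        d = 1
        while rng.random() < p_stay:   # stay in the mode one more step
            d += 1
        lengths.append(d)
    return lengths

durations = hmm_sojourns(p_stay=0.8, n=50000)
mean_duration = sum(durations) / len(durations)  # theory: 1 / (1 - 0.8) = 5
```

    If the observed dwell times of a behavioural mode are far from geometric (e.g., fishing bouts with a typical minimum duration), the HSMM's explicit duration distribution is the more faithful model, which is consistent with the accuracy gap reported above.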

  8. Hidden Markov models: the best models for forager movements?

    Directory of Open Access Journals (Sweden)

    Rocio Joo

    Full Text Available One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through Hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models which cast the inference of behavioural modes as a classification problem, and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that, with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  9. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    Science.gov (United States)

    Abler, Daniel; Kanellopoulos, Vassiliki; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. PMID:23824126

  10. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    International Nuclear Information System (INIS)

    Abler, Daniel; Kanellopoulos, Vassiliki; Dosanjh, Manjit; Davies, Jim; Peach, Ken; Jena, Raj; Kirkby, Norman

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of 'general Markov models', providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. (author)

  11. Tornadoes and related damage costs: statistical modelling with a semi-Markov approach

    Directory of Open Access Journals (Sweden)

    Guglielmo D’Amico

    2016-09-01

    Full Text Available We propose a statistical modelling approach for predicting and simulating tornado occurrences and accumulated cost distributions over a time interval. This is achieved by modelling the tornado intensity, measured on the Fujita scale, as a stochastic process. Since the Fujita scale divides tornado intensity into six states, the tornado intensity can be modelled using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reproduce the duration effect that is detected in tornado occurrence. The superiority of the semi-Markov model over the Markov chain model is also confirmed by means of a statistical hypothesis test. As an application, we compute the expected value and the variance of the costs generated by tornadoes over a given time interval in a given area. The paper contributes to the literature by demonstrating that semi-Markov models represent an effective tool for the physical analysis of tornadoes as well as for the estimation of the economic damage they cause.
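    The expected accumulated cost over a time interval can also be estimated by Monte Carlo simulation of a semi-Markov process. The sketch below uses a hypothetical two-state alternating kernel with Weibull sojourn times (a non-exponential sojourn law is what distinguishes the semi-Markov from the Markov model) and made-up per-event costs:

```python
import random

def mean_accumulated_cost(horizon, n_runs=2000, seed=7):
    """Monte Carlo mean of accumulated cost for a 2-state semi-Markov process."""
    rng = random.Random(seed)
    scale = {'weak': 3.0, 'strong': 1.0}        # Weibull scale per state
    shape = 1.5                                 # shape != 1: non-exponential sojourns
    cost = {'weak': 1.0, 'strong': 10.0}        # hypothetical cost per event
    nxt = {'weak': 'strong', 'strong': 'weak'}  # alternating embedded chain
    totals = []
    for _ in range(n_runs):
        t, state, total = 0.0, 'weak', 0.0
        while True:
            t += rng.weibullvariate(scale[state], shape)
            if t > horizon:
                break
            total += cost[state]                # cost charged when the state ends
            state = nxt[state]
        totals.append(total)
    return sum(totals) / n_runs

expected_cost = mean_accumulated_cost(horizon=100.0)
```

    With mean sojourns of about 2.7 and 0.9 time units per state, one expects roughly 100 / 3.6 ≈ 28 cycles at a cost of 11 per cycle, so the Monte Carlo estimate should land near 300.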

  12. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness of fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimony and computational complexity. © The Author(s) 2015.

  13. Symmetric Key Authentication Services Revisited

    NARCIS (Netherlands)

    Crispo, B.; Popescu, B.C.; Tanenbaum, A.S.

    2004-01-01

    Most of the symmetric key authentication schemes deployed today are based on principles introduced by Needham and Schroeder [15] more than twenty years ago. However, since then, the computing environment has evolved from a LAN-based client-server world to include new paradigms, including wide area

  14. A charged spherically symmetric solution

    Indian Academy of Sciences (India)

    A charged spherically symmetric solution. K MOODLEY, S D MAHARAJ and K S GOVINDER. School of Mathematical and Statistical Sciences, University of Natal, Durban 4041, South Africa. Email: maharaj@nu.ac.za. MS received 8 April 2002; revised 7 April 2003; accepted 23 June 2003. Abstract. We find a solution of the ...

  15. The symmetric longest queue system

    NARCIS (Netherlands)

    van Houtum, Geert-Jan; Adan, Ivo; van der Wal, Jan

    1997-01-01

    We derive the performance of the exponential symmetric longest queue system from two variants: a longest queue system with Threshold Rejection of jobs and one with Threshold Addition of jobs. It is shown that these two systems provide lower and upper bounds for the performance of the longest queue system.

  16. Symmetric group representations and Z

    OpenAIRE

    Adve, Anshul; Yong, Alexander

    2017-01-01

    We discuss implications of the following statement about the representation theory of symmetric groups: every integer appears infinitely often as an irreducible character evaluation, and every nonnegative integer appears infinitely often as a Littlewood-Richardson coefficient and as a Kronecker coefficient.

  17. A characterization of symmetric domains

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2006-01-01

    Roč. 46, č. 1 (2006), s. 123-146 ISSN 0023-608X R&D Projects: GA AV ČR(CZ) IAA1019304 Institutional research plan: CEZ:AV0Z10190503 Keywords : Kaehler manifold * symmetric space * Berezin transform Subject RIV: BA - General Mathematics Impact factor: 0.270, year: 2006

  18. Vassiliev Invariants from Symmetric Spaces

    DEFF Research Database (Denmark)

    Biswas, Indranil; Gammelgaard, Niels Leth

    We construct a natural framed weight system on chord diagrams from the curvature tensor of any pseudo-Riemannian symmetric space. These weight systems are of Lie algebra type and realized by the action of the holonomy Lie algebra on a tangent space. Among the Lie algebra weight systems, they are ...

  19. Different methods to estimate the Einstein-Markov coherence length in turbulence

    Science.gov (United States)

    Stresing, R.; Kleinhans, D.; Friedrich, R.; Peinke, J.

    2011-04-01

    We study the Markov property of experimental velocity data of different homogeneous isotropic turbulent flows. In particular, we examine the stochastic “cascade” process of nested velocity increments ξ(r):=u(x+r)-u(x) as a function of scale r for different nesting structures. It was found in previous work that, for a certain nesting structure, the stochastic process of ξ(r) has the Markov property for step sizes larger than the so-called Einstein-Markov coherence length lEM, which is of the order of magnitude of the Taylor microscale λ [Phys. Lett. A 359, 335 (2006)]. We now show that, if a reasonable definition of the effective step size of the process is applied, this result holds independently of the nesting structure. Furthermore, we analyze the stochastic process of the velocity u as a function of the spatial position x. Although this process does not have the exact Markov property, a characteristic length scale lu(x)≈lEM can be identified on the basis of a statistical test for the Markov property. Using a method based on the matrix of transition probabilities, we examine the significance of the non-Markovian character of the velocity u(x) for the statistical properties of turbulence.
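    A basic necessary condition behind any statistical test for the Markov property is the Chapman-Kolmogorov equation: the empirical two-step transition matrix of a Markov process must equal the square of its one-step matrix. A numeric sketch with an arbitrary 3-state matrix:

```python
def mat_mul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Arbitrary 3-state one-step transition matrix
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
# Chapman-Kolmogorov: for a Markov process the two-step matrix is P @ P;
# a significant deviation of the *empirical* two-step matrix from P2
# indicates non-Markovian behaviour on that scale.
P2 = mat_mul(P, P)
```

    In practice, as in the transition-matrix method mentioned above, one estimates both matrices from data and tests whether their difference is statistically significant.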

  20. Symmetric imaging findings in neuroradiology

    International Nuclear Information System (INIS)

    Zlatareva, D.

    2015-01-01

    Full text: Learning objectives: to make a list of diseases and syndromes which manifest as bilateral symmetric findings on computed tomography and magnetic resonance imaging; to discuss the clinical and radiological differential diagnosis for these diseases; to explain which of these conditions necessitate urgent therapy and when additional studies and laboratory tests can refine the diagnosis. There is symmetry in the human body, and quite often we compare the affected side to the normal one, but in neuroradiology we might have bilateral findings which affect paired structures or corresponding anatomic areas. Only rarely do clinical data alone prompt the diagnosis. Usually clinicians suspect such an involvement, but CT and MRI can reveal symmetric changes and are among the leading diagnostic tools. The most common location of bilateral findings is the basal ganglia and thalamus. There are a number of diseases affecting these structures symmetrically: metabolic and systemic diseases, intoxication, neurodegeneration and vascular conditions, toxoplasmosis, tumors and some infections. Malformations of cortical development, and especially bilateral perisylvian polymicrogyria, require not only an exact report on the most affected parts but in some cases genetic tests or correlation with other clinical symptoms. In the case of herpes simplex encephalitis, bilateral temporal involvement is common, and this finding very often prompts therapy even before laboratory results. Posterior reversible encephalopathy syndrome (PRES) and some forms of hypoxic-ischemic encephalopathy can lead to symmetric changes. In these acute conditions MR plays a crucial role not only in diagnosis but also in monitoring the therapeutic effect. Patients with neurofibromatosis type 1 or type 2 can demonstrate bilateral optic glioma combined with spinal neurofibroma and bilateral acoustic schwannoma, respectively. Mirror-image aneurysm affecting both internal carotid or middle cerebral arteries is an example of symmetry in

  1. Understanding symmetrical components for power system modeling

    CERN Document Server

    Das, J C

    2017-01-01

    This book utilizes symmetrical components for analyzing unbalanced three-phase electrical systems, by applying single-phase analysis tools. The author covers two approaches for studying symmetrical components; the physical approach, avoiding many mathematical matrix algebra equations, and a mathematical approach, using matrix theory. Divided into seven sections, topics include: symmetrical components using matrix methods, fundamental concepts of symmetrical components, symmetrical components – transmission lines and cables, sequence components of rotating equipment and static load, three-phase models of transformers and conductors, unsymmetrical fault calculations, and some limitations of symmetrical components.

  2. Baryon symmetric big-bang cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Stecker, F.W.

    1978-04-01

    The framework of baryon-symmetric big-bang cosmology offers the greatest potential for deducing the evolution of the universe as a consequence of physical laws and processes with the minimum number of arbitrary assumptions as to initial conditions in the big-bang. In addition, it offers the possibility of explaining the photon-baryon ratio in the universe and how galaxies and galaxy clusters are formed, and also provides the only acceptable explanation at present for the origin of the cosmic gamma ray background radiation.

  3. Baryon symmetric big-bang cosmology

    International Nuclear Information System (INIS)

    Stecker, F.W.

    1978-04-01

    The framework of baryon-symmetric big-bang cosmology offers the greatest potential for deducing the evolution of the universe as a consequence of physical laws and processes with the minimum number of arbitrary assumptions as to initial conditions in the big-bang. In addition, it offers the possibility of explaining the photon-baryon ratio in the universe and how galaxies and galaxy clusters are formed, and also provides the only acceptable explanation at present for the origin of the cosmic gamma ray background radiation

  4. Homotheties of cylindrically symmetric static spacetimes

    International Nuclear Information System (INIS)

    Qadir, A.; Ziad, M.; Sharif, M.

    1998-08-01

    In this note we consider the homotheties of cylindrically symmetric static spacetimes. We find that we can provide a complete list of all metrics that admit non-trivial homothetic motions and are cylindrically symmetric static. (author)

  5. On convergence completeness in symmetric spaces | Moshokoa ...

    African Journals Online (AJOL)

    convergence complete symmetric space. As applications of convergence completeness, we present some fixed point results for self-maps defined on a symmetric space. Keywords: completeness; convergence completeness; fixed points; metric ...

  6. Estimation with Right-Censored Observations Under A Semi-Markov Model.

    Science.gov (United States)

    Zhao, Lihui; Hu, X Joan

    2013-06-01

    The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study.

  7. [Symmetrical lividity of the fingers].

    Science.gov (United States)

    Kocsard, E; Kossard, S

    1988-07-01

    Symmetric lividity of the soles of the feet was first reported in two children in 1925 by Pernet. The characteristic manifestation of this dermatosis consisted of hyperkeratosis and hyperhidrosis with livid discoloration of the pressure areas of the soles. Later the same name was applied to a similar dermatosis in which the hyperkeratotic and hyperhidrotic patches of skin on the soles had a whitish-grey discoloration and the livid color, if present at all, was seen only over the marginal areas not affected by the keratosis. Similar livid keratoses affecting the palmar sides of the fingers have been seen only occasionally. The 17-year-old girl presented in this paper had an 11-year history of emotional hyperhidrosis and is a rare illustration of symmetrical lividity in its original form, localized to the fingers only.

  8. Parity-Time Symmetric Photonics

    KAUST Repository

    Zhao, Han

    2018-01-17

    The establishment of non-Hermitian quantum mechanics (such as parity-time (PT) symmetry) stimulates a paradigmatic shift for studying symmetries of complex potentials. Owing to the convenient manipulation of optical gain and loss in analogy to complex quantum potentials, photonics provides an ideal platform for visualization of many conceptually striking predictions from non-Hermitian quantum theory. A rapidly developing field has emerged, namely PT symmetric photonics, demonstrating intriguing optical phenomena including eigenstate coalescence and spontaneous PT symmetry breaking. In turn, advances in quantum physics provide photonics with brand-new paradigms to explore the entire complex permittivity plane for novel optical functionalities. Here, we review recent exciting breakthroughs in PT symmetric photonics while systematically presenting their underlying principles guided by non-Hermitian symmetries. Potential device applications in optical communication and computing, bio-chemical sensing, and healthcare are also discussed.

  9. Detecting Structural Breaks using Hidden Markov Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another....... The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), that permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...
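The break-detection idea above can be sketched concretely: fit (or, here, fix) a two-state Gaussian HMM and read off breaks as the points where the Viterbi state path switches. This is an illustrative toy, not the ICE-GEMI estimation procedure of the paper; the means, variances, and sticky transition matrix are assumed.

```python
import numpy as np

def viterbi(obs, means, var, trans, init):
    """Most likely hidden-state path for 1-D Gaussian emissions."""
    n, k = len(obs), len(means)
    # Log emission densities, one column per state.
    logB = -0.5 * ((obs[:, None] - means) ** 2 / var + np.log(2 * np.pi * var))
    delta = np.log(init) + logB[0]
    psi = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + np.log(trans)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[t]
    path = np.empty(n, dtype=int)
    path[-1] = delta.argmax()
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

rng = np.random.default_rng(0)
# Synthetic series with one structural break: mean shifts from 0 to 3 at t = 100.
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
path = viterbi(y, means=np.array([0.0, 3.0]), var=1.0,
               trans=np.array([[0.99, 0.01], [0.01, 0.99]]),
               init=np.array([0.5, 0.5]))
# Breaks = indices where the decoded regime changes.
breaks = np.flatnonzero(np.diff(path) != 0) + 1
```

With well-separated regimes the decoded switch point lands at or very near the true break.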

  10. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

    . Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines....
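A minimal sketch of the time-inhomogeneity idea (not the authors' fitted model, and without the B-spline reduction): a two-state chain, parked/driving, whose trip-start probability varies with the hour of day while the trip-end probability is constant. The diurnal profile below is invented for illustration.

```python
import numpy as np

def p_start(hour):
    # Hypothetical diurnal profile: trips start mostly around 8h and 17h.
    return 0.05 + 0.25 * (np.exp(-(hour - 8) ** 2 / 2)
                          + np.exp(-(hour - 17) ** 2 / 2))

P_END = 0.5  # assumed per-hour probability of ending an ongoing trip

def simulate_day(rng):
    state, states = 0, []  # 0 = parked, 1 = driving
    for hour in range(24):
        p = p_start(hour) if state == 0 else P_END
        if rng.random() < p:
            state = 1 - state  # time-varying transition
        states.append(state)
    return states

rng = np.random.default_rng(1)
days = [simulate_day(rng) for _ in range(1000)]
# Fraction of days with the vehicle in use at 8h vs at 3h:
use_8h = np.mean([d[8] for d in days])
use_3h = np.mean([d[3] for d in days])
```

The simulated usage is, as intended, much higher during the morning peak than at night.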

  11. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
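The general scheme can be sketched as follows: train one first-order Markov model per structure class on labeled amino-acid segments, then classify an unknown segment by the highest transition log-likelihood. The tiny training sets below are invented stand-ins, not the paper's data, and the combinatorial post-processing is omitted.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"          # 20-letter amino-acid alphabet
IDX = {a: i for i, a in enumerate(AA)}

def train(seqs):
    """Transition counts with add-one smoothing -> row-stochastic matrix."""
    T = np.ones((20, 20))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            T[IDX[a], IDX[b]] += 1
    return T / T.sum(axis=1, keepdims=True)

def loglik(seq, T):
    return sum(np.log(T[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

# Toy class models (invented): "helix" segments rich in A/L, "sheet" in V/I.
models = {"H": train(["ALALKALA", "LAALAKLA"]),
          "E": train(["VIVTVIVV", "IVIVTIVI"])}
query = "ALAKLALA"
pred = max(models, key=lambda c: loglik(query, models[c]))
```

The A/L-rich query is assigned to the helix-like model, since its transitions dominate that class's counts.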

  12. Bilateral Symmetrical Parietal Extradural Hematoma

    OpenAIRE

    Agrawal, Amit

    2011-01-01

    The occurrence of bilateral extradural hematomas (EDH) is an uncommon consequence of craniocerebral trauma, and acute symmetrical bilateral epidural hematomas are extremely rare. We discuss the technique adopted by us for the management of this rare entity. A 55-year-old patient presented with history of fall of branch of tree on her head. She had loss of consciousness since then and had multiple episodes of vomiting. Examination of the scalp was suggestive of diffuse subgaleal hematoma. Her ...

  13. Symmetric two-coordinate photodiode

    Directory of Open Access Journals (Sweden)

    Dobrovolskiy Yu. G.

    2008-12-01

    Full Text Available A two-coordinate photodiode based on the longitudinal photoeffect is developed and investigated; it yields coordinate characteristics that are symmetric in steepness and longitudinal resistance with high accuracy. It is shown that the best shape of the coordinate characteristic is obtained when the optical probe scans the central part of the photosensitive element. Ways of improving the steepness and linearity of the coordinate characteristic are analyzed.

  14. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  15. Assessing type I error and power of multistate Markov models for panel data-A simulation study.

    Science.gov (United States)

    Cassarly, Christy; Martin, Renee' H; Chimowitz, Marc; Peña, Edsel A; Ramakrishnan, Viswanathan; Palesch, Yuko Y

    2017-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized, amounting to a loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that the multistate Markov models preserve the type I error and that adequate power is achieved with modest sample sizes for panel data with limited non-adjacent state transitions.
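The kind of panel data studied above can be simulated with a few lines: subjects move between four ordinal states under a transition matrix that permits only adjacent moves. The matrix and cohort size below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

# Per-visit transition probabilities over 4 ordinal states (adjacent moves only).
P = np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.1, 0.7, 0.2, 0.0],
    [0.0, 0.1, 0.7, 0.2],
    [0.0, 0.0, 0.1, 0.9],
])

def simulate_panel(n_subjects, n_visits, rng):
    panel = np.zeros((n_subjects, n_visits), dtype=int)
    for s in range(n_subjects):
        state = 0  # everyone starts in the best state
        for v in range(n_visits):
            state = rng.choice(4, p=P[state])
            panel[s, v] = state
    return panel

rng = np.random.default_rng(7)
panel = simulate_panel(n_subjects=200, n_visits=5, rng=rng)
# By construction, no transition between consecutive visits skips a state.
jumps = np.abs(np.diff(panel, axis=1))
```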

  16. Hidden-Markov-Model Analysis Of Telemanipulator Data

    Science.gov (United States)

    Hannaford, Blake; Lee, Paul

    1991-01-01

    Mathematical model and procedure based on hidden-Markov-model concept undergoing development for use in analysis and prediction of outputs of force and torque sensors of telerobotic manipulators. In model, overall task broken down into subgoals, and transition probabilities encode ease with which operator completes each subgoal. Process portion of model encodes task-sequence/subgoal structure, and probability-density functions for forces and torques associated with each state of manipulation encode sensor signals that one expects to observe at subgoal. Parameters of model constructed from engineering knowledge of task.

  17. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    In this article, we give an introduction to Monte Carlo techniques with special emphasis on Markov Chain Monte Carlo (MCMC). Since the latter needs Markov chains with state space ℝ or ℝᵈ, and most textbooks on Markov chains do not discuss such chains, we have included a short appendix that gives basic ...
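The MCMC idea in miniature: a random-walk Metropolis sampler whose chain has the target density as its stationary distribution. Here the target is a standard normal; the step size and chain length are arbitrary illustrative choices.

```python
import numpy as np

def metropolis(logpi, x0, n_steps, step, rng):
    """Random-walk Metropolis sampler for an unnormalized log-density."""
    x, chain = x0, []
    lp = logpi(x)
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = logpi(prop)
        # Accept with probability min(1, pi(prop) / pi(x)).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

rng = np.random.default_rng(42)
# Target: standard normal, log pi(x) = -x^2/2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0,
                   n_steps=20000, step=1.0, rng=rng)
sample = chain[5000:]  # discard burn-in
mean_est, var_est = sample.mean(), sample.var()
```

After burn-in, the sample moments approximate those of N(0, 1).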

  18. Uncovering and testing the fuzzy clusters based on lumped Markov chain in complex network.

    Science.gov (United States)

    Jing, Fan; Jianbin, Xie; Jinlong, Wang; Jinshuai, Qu

    2013-01-01

    Identifying clusters, namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. By means of a lumped Markov chain model of a random walker, we propose two novel ways of inferring the lumped Markov transition matrix. Furthermore, some useful results are proposed based on the analysis of the properties of the lumped Markov process. To find the best partition of complex networks, a novel framework including two algorithms for network partition based on the optimal lumped Markovian dynamics is derived to solve this problem. The algorithms are constructed to minimize the objective function under this framework. It is demonstrated by the simulation experiments that our algorithms can efficiently determine the probabilities with which a node belongs to different clusters during the learning process and naturally support fuzzy partition. Moreover, they are successfully applied to a real-world network, the social interactions between members of a karate club.
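The lumping construction can be illustrated on a toy graph (this is a hard-partition sketch of the standard random-walk lumping, not the paper's fuzzy algorithms): build the random-walk matrix P = D⁻¹A on two cliques joined by one edge, then lump P over the two-block partition, weighting each node by its stationary probability. A strongly diagonal lumped matrix signals well-separated clusters.

```python
import numpy as np

# Adjacency: two 4-node cliques joined by a single edge (nodes 3-4).
A = np.zeros((8, 8))
for grp in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1

P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix
pi = A.sum(axis=1) / A.sum()           # stationary distribution (degree / 2m)

# Lump over the partition {0..3}, {4..7}: U[c, i] = pi_i / pi_C for i in C.
H = np.zeros((8, 2)); H[:4, 0] = 1; H[4:, 1] = 1   # membership indicator
piC = H.T @ pi
U = (H * pi[:, None]).T / piC[:, None]
P_lumped = U @ P @ H                   # 2x2 lumped transition matrix
```

Each lumped row sums to one, and the diagonal entries are close to one because the walker rarely crosses the single bridging edge.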

  19. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

    In the past, probabilistic model checking has mostly been restricted to finite-state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special

  20. Hidden Markov Models for Human Genes

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren; Chauvin, Yves

    1997-01-01

    We analyse the sequential structure of human genomic DNA by hidden Markov models. We apply models of widely different design: conventional left-right constructs and models with a built-in periodic architecture. The models are trained on segments of DNA sequences extracted such that they cover com...