WorldWideScience

Sample records for markov branching processes

  1. Critical Age-Dependent Branching Markov Processes and their ...

    Indian Academy of Sciences (India)

    This paper studies: (i) the long-time behaviour of the empirical distribution of age and normalized position of an age-dependent critical branching Markov process conditioned on non-extinction; and (ii) the super-process limit of a sequence of age-dependent critical branching Brownian motions.

  2. Generalized Markov branching models

    OpenAIRE

    Li, Junping

    2005-01-01

In this thesis, we first considered a modified Markov branching process incorporating both state-independent immigration and resurrection. After establishing the criteria for regularity and uniqueness, explicit expressions for the extinction probability and mean extinction time are presented. The criteria for recurrence and ergodicity are also established. In addition, an explicit expression for the equilibrium distribution is presented. We then moved on to investigate the basic proper...

  3. Markov processes

    CERN Document Server

    Kirkwood, James R

    2015-01-01

Review of Probability: Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises. Discrete-Time, Finite-State Markov Chains: Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler's Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises. Discrete-Time, Infinite-State Markov Chains: Renewal Processes; Delayed Renewal Processes; Equilibrium State f...
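
    The "central problem" covered in these chapters, finding the equilibrium state of a finite Markov chain, can be sketched in a few lines. The 3-state transition matrix below is made up purely for illustration:

    ```python
    import numpy as np

    # Made-up 3-state transition matrix (each row sums to 1).
    P = np.array([
        [0.5, 0.3, 0.2],
        [0.2, 0.6, 0.2],
        [0.1, 0.4, 0.5],
    ])

    # For an irreducible, aperiodic chain every row of P^n converges to the
    # equilibrium distribution pi satisfying pi @ P = pi, so a high matrix
    # power yields pi directly.
    pi = np.linalg.matrix_power(P, 100)[0]

    assert np.allclose(pi @ P, pi)      # pi is invariant under one more step
    assert np.isclose(pi.sum(), 1.0)    # pi is still a probability vector
    ```

    Equivalently, pi is the left eigenvector of P for eigenvalue 1, normalized to sum to one.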

  4. Markov Chains and Markov Processes

    OpenAIRE

    Ogunbayo, Segun

    2016-01-01

A Markov chain, named after Andrey Markov, is a mathematical system that transitions from one state to another. Many real-world systems contain uncertainty. This study helps us to understand the basic idea of a Markov chain and how it is useful in our daily lives. Predictions about future outcomes, for instance in games of chance, are inherently uncertain, which is why we need Markov chains to predict o...

  5. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  6. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and

  7. Branching processes in biology

    CERN Document Server

    Kimmel, Marek

    2015-01-01

This book provides a theoretical background of branching processes and discusses their biological applications. Branching processes are a well-developed and powerful set of tools in the field of applied probability. The range of applications considered includes molecular biology, cellular biology, human evolution and medicine. The branching processes discussed include Galton-Watson, Markov, Bellman-Harris, multitype, and general processes. As an aid to understanding specific examples, two introductory chapters and two glossaries are included that provide background material in mathematics and in biology. The book will be of interest to scientists who work in quantitative modeling of biological systems, particularly probabilists, mathematical biologists, biostatisticians, cell biologists, molecular biologists, and bioinformaticians. The authors, a mathematician and a cell biologist, have collaborated for more than a decade in the field of branching processes in biology and bring that experience to this new edition. This second ex...

  8. Markov branching in the vertex splitting model

    International Nuclear Information System (INIS)

    Stefánsson, Sigurdur Örn

    2012-01-01

We study a special case of the vertex splitting model which is a recent model of randomly growing trees. For any finite maximum vertex degree D, we find a one-parameter model, with parameter α ∈ [0,1], which has a so-called Markov branching property. When D=∞ we find a two-parameter model with an additional parameter γ ∈ [0,1] which also has this feature. In the case D = 3, the model bears resemblance to Ford's α-model of phylogenetic trees and when D=∞ it is similar to its generalization, the αγ-model. For α = 0, the model reduces to the well-known model of preferential attachment. In the case α > 0, we prove convergence of the finite volume probability measures, generated by the growth rules, to a measure on infinite trees which is concentrated on the set of trees with a single spine. We show that the annealed Hausdorff dimension with respect to the infinite volume measure is 1/α. When γ = 0 the model reduces to a model of growing caterpillar graphs in which case we prove that the Hausdorff dimension is almost surely 1/α and that the spectral dimension is almost surely 2/(1 + α). We comment briefly on the distribution of vertex degrees and correlations between degrees of neighbouring vertices
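
    The α = 0 preferential-attachment case mentioned in this abstract can be sketched as a toy simulation (this is not the paper's vertex splitting model itself; the tree size and seed below are made up):

    ```python
    import random

    def preferential_attachment_tree(n, seed=0):
        """Grow a random tree on n vertices: each new vertex attaches to an
        existing vertex chosen with probability proportional to its degree
        (a toy sketch of the alpha = 0 preferential-attachment case)."""
        rng = random.Random(seed)
        degree = [1, 1]          # start from a single edge 0--1
        edges = [(0, 1)]
        for v in range(2, n):
            # random.choices picks an attachment target weighted by degree
            target = rng.choices(range(v), weights=degree)[0]
            edges.append((target, v))
            degree[target] += 1
            degree.append(1)
        return edges, degree

    edges, degree = preferential_attachment_tree(1000)
    assert len(edges) == 999          # a tree on n vertices has n - 1 edges
    assert sum(degree) == 2 * 999     # handshake lemma
    ```

    Degree-weighted attachment is what produces the heavy-tailed degree distribution characteristic of this model family.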

  9. Process Algebra and Markov Chains

    NARCIS (Netherlands)

Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  10. Process algebra and Markov chains

    NARCIS (Netherlands)

Brinksma, E.; Hermanns, H.; Katoen, J.P.

    2001-01-01

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  11. Processing Branches

    DEFF Research Database (Denmark)

    Schindler, Christoph; Tamke, Martin; Tabatabai, Ali

    2014-01-01

Angled and forked wood, a desired material until the 19th century, was swept away by industrialization and its standardization of processes and materials. Contemporary information technology has the potential for the capturing and recognition of individual geometries through laser scanning...

  12. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution

  13. Reviving Markov processes and applications

    International Nuclear Information System (INIS)

    Cai, H.

    1988-01-01

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of the processes like piecewise-deterministic Markov process, virtual waiting time process and the first entrance decomposition (taboo probability)

  14. A relation between non-Markov and Markov processes

    International Nuclear Information System (INIS)

    Hara, H.

    1980-01-01

With the aid of a transformation technique, it is shown that some memory effects in non-Markov processes can be eliminated. In other words, some non-Markov processes can be rewritten in the form of a random walk process, i.e. a Markov process. To this end, two model processes which have some memory or correlation in the random walk process are introduced. An explanation of the memory in the processes is given. (orig.)

  15. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

Digital images are used as an information carrier in different sciences and technologies. There is a trend toward increasing the number of bits per image pixel for the purpose of obtaining more information. In the paper, some methods of compression and contour detection on the basis of a two-dimensional Markov chain are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods do not concede in efficiency to well-known analogues, but surpass them in processing speed. An image is separated into binary images that are processed in parallel, so that processing speed does not degrade as the number of bits per pixel increases. One more advantage of the methods is low consumption of energy resources: only logical procedures are used, with no arithmetic operations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.

  16. Markov Decision Processes in Practice

    NARCIS (Netherlands)

    Boucherie, Richardus J.; van Dijk, N.M.

    2017-01-01

It is over 30 years since D.J. White started his series of surveys on practical applications of Markov decision processes (MDPs), over 20 years since the phenomenal book by Martin Puterman on the theory of MDPs, and over 10 years since Eugene A. Feinberg and Adam Shwartz published their Handbook

  17. Markov processes characterization and convergence

    CERN Document Server

    Ethier, Stewart N

    2009-01-01

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference." -American Scientist. "There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings." -Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts. "Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook." -Journal of Statistical Physics. Markov Proce...

  18. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule-Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
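
    The Yule-Furry limit mentioned in this abstract is a pure-birth jump process in which each individual splits at a constant exponential rate. A minimal simulation sketch (the rate, horizon and seed are chosen arbitrarily for illustration):

    ```python
    import random

    def yule_furry(t_max, n0=1, rate=1.0, seed=0):
        """Pure-birth (Yule-Furry) process: each of the n current individuals
        independently splits at exponential rate `rate`, so the total birth
        rate is rate * n. Returns the population size at time t_max."""
        rng = random.Random(seed)
        t, n = 0.0, n0
        while True:
            t += rng.expovariate(rate * n)   # waiting time to the next split
            if t >= t_max:
                return n
            n += 1

    n = yule_furry(t_max=2.0)
    assert n >= 1   # a pure-birth process never shrinks
    ```

    Averaged over many runs, the population at time t grows like n0 * exp(rate * t).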

  19. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.

  20. Markov process of muscle motors

    International Nuclear Information System (INIS)

    Kondratiev, Yu; Pechersky, E; Pirogov, S

    2008-01-01

    We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors

  1. Inhomogeneous Markov point processes by transformation

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Nielsen, Linda Stougaard

    2000-01-01

We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail...

  2. Timed Comparisons of Semi-Markov Processes

    DEFF Research Database (Denmark)

    Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio

    2018-01-01

...semi-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being “faster than” between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...

  3. Generated dynamics of Markov and quantum processes

    CERN Document Server

    Janßen, Martin

    2016-01-01

This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic features of quantum processes are oscillations, interference, many stationary states in bounded systems and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes is relaxation to a single stationary state. Quantum processes apply to systems where all variables that control reversibility are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant for the dynamic description. Their absence renders the dynamics irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put in...

  4. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  5. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...

  6. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  7. Finite Markov processes and their applications

    CERN Document Server

    Iosifescu, Marius

    2007-01-01

    A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models.The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic ch
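
    The absorbing-chain machinery this text covers can be sketched on the classic gambler's-ruin example: for transient block Q, the fundamental matrix N = (I - Q)^{-1} gives expected visit counts, and its row sums give mean times to absorption. The fair game on states 0..4 below is a standard illustration, not taken from the book:

    ```python
    import numpy as np

    # Gambler's ruin on {0,...,4} with p = 1/2; states 0 and 4 are absorbing.
    # Q is the transient-to-transient block over states 1, 2, 3.
    Q = np.array([
        [0.0, 0.5, 0.0],
        [0.5, 0.0, 0.5],
        [0.0, 0.5, 0.0],
    ])

    # Fundamental matrix N = (I - Q)^{-1}; row sums are the expected numbers
    # of steps before absorption from each transient starting state.
    N = np.linalg.inv(np.eye(3) - Q)
    expected_steps = N.sum(axis=1)

    # For a fair gambler's ruin on {0,...,m} starting at k, E[steps] = k*(m-k).
    assert np.allclose(expected_steps, [3.0, 4.0, 3.0])
    ```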

  8. Strong, Weak and Branching Bisimulation for Transition Systems and Markov Reward Chains: A Unifying Matrix Approach

    Directory of Open Access Journals (Sweden)

    Nikola Trčka

    2009-12-01

We first study labeled transition systems with explicit successful termination. We establish the notions of strong, weak, and branching bisimulation in terms of boolean matrix theory, introducing thus a novel and powerful algebraic apparatus. Next we consider Markov reward chains which are standardly presented in real matrix theory. By interpreting the obtained matrix conditions for bisimulations in this setting, we automatically obtain the definitions of strong, weak, and branching bisimulation for Markov reward chains. The obtained strong and weak bisimulations are shown to coincide with some existing notions, while the obtained branching bisimulation is new, but its usefulness is questionable.

  9. Converging from branching to linear metrics on Markov chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim G.

    2017-01-01

    -approximant is computable in polynomial time in the size of the MC. The upper-approximants are bisimilarity-like pseudometrics (hence, branching-time distances) that converge point-wise to the linear-time metrics. This convergence is interesting in itself, because it reveals a nontrivial relation between branching...

  10. Converging from Branching to Linear Metrics on Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2015-01-01

    time in the size of the MC. The upper-approximants are Kantorovich-like pseudometrics, i.e. branching-time distances, that converge point-wise to the linear-time metrics. This convergence is interesting in itself, since it reveals a nontrivial relation between branching and linear-time metric...

  11. Markov processes an introduction for physical scientists

    CERN Document Server

    Gillespie, Daniel T

    1991-01-01

Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level. Key features: a self-contained, pragmatic exposition of the needed elements of random variable theory; logically integrated derivations of the Chapman-Kolmogorov e...
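
    Jump Markov processes of the kind this book develops are commonly simulated exactly with Gillespie's direct method: draw an exponential waiting time at the total rate, then pick which jump fires in proportion to its rate. A minimal sketch for a birth-death process (all rates below are made up):

    ```python
    import random

    def gillespie_birth_death(birth, death, x0, t_max, seed=0):
        """Exact simulation (Gillespie's direct method) of a birth-death jump
        Markov process with constant birth rate and death rate death * x."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        path = [(t, x)]
        while t < t_max:
            rates = [birth, death * x]
            total = sum(rates)
            t += rng.expovariate(total)          # exponential waiting time
            if rng.random() < rates[0] / total:  # choose which jump fires
                x += 1
            else:
                x -= 1
            path.append((t, x))
        return path

    path = gillespie_birth_death(birth=2.0, death=0.1, x0=0, t_max=50.0)
    assert all(x >= 0 for _, x in path)   # deaths need x > 0, so x stays >= 0
    assert path[0] == (0.0, 0)
    ```

    In the long run this chain fluctuates around its stationary mean birth/death.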

  12. Continuity Properties of Distances for Markov Processes

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand

    2014-01-01

    In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...

  13. A Metrized Duality Theorem for Markov Processes

    DEFF Research Database (Denmark)

    Kozen, Dexter; Mardare, Radu Iulian; Panangaden, Prakash

    2014-01-01

    We extend our previous duality theorem for Markov processes by equipping the processes with a pseudometric and the algebras with a notion of metric diameter. We are able to show that the isomorphisms of our previous duality theorem become isometries in this quantitative setting. This opens the wa...

  14. Markov decision processes in artificial intelligence

    CERN Document Server

    Sigaud, Olivier

    2013-01-01

    Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr
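
    A standard entry point to planning in MDPs, as surveyed in this book, is value iteration on the Bellman optimality equation. A toy sketch with a made-up 2-state, 2-action MDP (transition tensor P, rewards R and discount gamma are all invented for the example):

    ```python
    import numpy as np

    # Hypothetical MDP: P[a][s][s'] = transition probability under action a,
    # R[a][s] = expected immediate reward for taking a in s.
    P = np.array([[[0.9, 0.1], [0.4, 0.6]],
                  [[0.2, 0.8], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    gamma = 0.9

    # Value iteration: V <- max_a (R_a + gamma * P_a V), a gamma-contraction,
    # so repeated application converges to the optimal value function.
    V = np.zeros(2)
    for _ in range(1000):
        V = np.max(R + gamma * (P @ V), axis=0)

    # The converged V satisfies the Bellman optimality equation.
    assert np.allclose(V, np.max(R + gamma * (P @ V), axis=0), atol=1e-6)
    ```

    The greedy policy with respect to the converged V is an optimal policy for this MDP.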

  15. Renewal characterization of Markov modulated Poisson processes

    Directory of Open Access Journals (Sweden)

    Marcel F. Neuts

    1989-01-01

A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λ_i whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P = {i : λ_i > 0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
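
    The MMPP construction can be sketched by simulating the competing exponential clocks directly: in state i, the next happening is either a Poisson event (rate λ_i) or a jump of the modulating chain. The two-state chain and rates below are made up for illustration:

    ```python
    import random

    def simulate_mmpp(q, lam, t_max, seed=0):
        """Simulate a 2-state Markov Modulated Poisson Process: while the
        modulating chain J(t) is in state i, events occur at Poisson rate
        lam[i]; q[i] is the rate of leaving state i."""
        rng = random.Random(seed)
        t, state = 0.0, 0
        events = []
        while t < t_max:
            total = q[state] + lam[state]        # competing exponential clocks
            t += rng.expovariate(total)
            if t >= t_max:
                break
            if rng.random() < lam[state] / total:
                events.append(t)                 # Poisson event in this state
            else:
                state = 1 - state                # modulating chain jumps
        return events

    events = simulate_mmpp(q=[0.5, 0.5], lam=[5.0, 0.5], t_max=100.0)
    assert events == sorted(events)              # event times are increasing
    assert all(0 < t < 100.0 for t in events)
    ```

    With very different λ_i per state, the event stream shows the characteristic bursty, non-renewal behaviour the paper's SR/WR classification is about.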

  16. Nonlinearly perturbed semi-Markov processes

    CERN Document Server

    Silvestrov, Dmitrii

    2017-01-01

    The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...

  17. Strong, weak and branching bisimulation for transition systems and Markov reward chains: A unifying matrix approach

    NARCIS (Netherlands)

    Trcka, N.; Andova, S.; McIver, A.; D'Argenio, P.; Cuijpers, P.J.L.; Markovski, J.; Morgan, C.; Núñez, M.

    2009-01-01

    We first study labeled transition systems with explicit successful termination. We establish the notions of strong, weak, and branching bisimulation in terms of boolean matrix theory, introducing thus a novel and powerful algebraic apparatus. Next we consider Markov reward chains which are

  18. Operational Markov Condition for Quantum Processes

    Science.gov (United States)

    Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan

    2018-01-01

    We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process or the experimental falsifiability of a Markovian hypothesis.

  19. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.

  20. Exact solution of the hidden Markov processes

    Science.gov (United States)

    Saakian, David B.

    2017-11-01

We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: we should be able to solve a system of L functional equations in the space of dimension M - 1.
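
    The L = M = 2 case the paper solves corresponds to a two-state hidden chain with binary observables. A sketch of sampling such a hidden Markov process (the stay and emission probabilities are made up, not the paper's matrices):

    ```python
    import random

    def sample_hmp(T, p_stay=0.9, p_correct=0.8, seed=0):
        """Sample T observations from a hidden Markov process: a 2-state
        hidden chain that stays put with probability p_stay and emits its
        current state correctly with probability p_correct."""
        rng = random.Random(seed)
        state, obs = 0, []
        for _ in range(T):
            obs.append(state if rng.random() < p_correct else 1 - state)
            if rng.random() >= p_stay:
                state = 1 - state
        return obs

    obs = sample_hmp(10000)
    assert set(obs) <= {0, 1}
    assert len(obs) == 10000
    ```

    The entropy rate of the observation sequence `obs` is exactly the HMP entropy the paper computes in closed form for this two-matrix case.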

  1. Dynamical fluctuations for semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel; Wynants, B.

    2009-01-01

    Roč. 42, č. 36 (2009), 365002/1-365002/21 ISSN 1751-8113 R&D Projects: GA ČR GC202/07/J051 Institutional research plan: CEZ:AV0Z10100520 Keywords : nonequilibrium fluctuations * semi-Markov processes Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.577, year: 2009 http://www.iop.org/EJ/abstract/1751-8121/42/36/365002

  2. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    Energy Technology Data Exchange (ETDEWEB)

    Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)

    2008-07-18

We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes using a winner-takes-all model for social conformity. (fast track communication)
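The defining feature of a nonlinear Markov chain, as discussed in the abstract above, is that the transition matrix depends on the current state distribution. A minimal sketch of a winner-takes-all dynamic in this spirit (the opinion states, the gain parameter beta, and the softmax form are all made up for illustration, not taken from the paper):

```python
import numpy as np

def transition_matrix(p, beta=6.0):
    """Transition matrix that depends on the current distribution p:
    every agent jumps to option j with probability ∝ exp(beta * p[j]),
    biasing the population toward already-popular options."""
    w = np.exp(beta * p)
    row = w / w.sum()
    return np.tile(row, (len(p), 1))   # identical rows, state-independent jump

p = np.array([0.40, 0.35, 0.25])       # initial popularity of three options
for _ in range(50):
    p = p @ transition_matrix(p)       # nonlinear update: P is re-built from p
```

Because the transition matrix is re-evaluated from the current distribution at every step, this is a nonlinear update; with a sufficiently large beta the initially most popular option absorbs almost all probability mass.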

  3. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    International Nuclear Information System (INIS)

    Frank, T D

    2008-01-01

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  4. Neyman, Markov processes and survival analysis.

    Science.gov (United States)

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
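The Kaplan-Meier formulation that the abstract compares with the Fix-Neyman model can be sketched in a few lines; the event times and censoring flags below are made-up data, not from the clinical trials discussed.

```python
def kaplan_meier(times, observed):
    """Kaplan-Meier product-limit estimator.
    Returns [(t, S(t))] at event times; observed[i] is False if the i-th
    subject is right-censored rather than observed to fail."""
    # At tied times, process events before censorings (the usual convention).
    pairs = sorted(zip(times, observed), key=lambda x: (x[0], not x[1]))
    n_at_risk, s, curve = len(pairs), 1.0, []
    for t, is_event in pairs:
        if is_event:
            s *= (n_at_risk - 1) / n_at_risk   # survive past this event time
            curve.append((t, s))
        n_at_risk -= 1                         # censored subjects leave risk set
    return curve

curve = kaplan_meier([2, 3, 3, 5, 8], [True, True, False, True, False])
# curve ≈ [(2, 0.8), (3, 0.6), (5, 0.3)]
```

The step-function estimate only decreases at observed events; handling recovery and relapse, as in the extended Fix-Neyman model, requires the multistate Markov machinery the abstract describes rather than this single-decrement estimator.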

  5. A Markov Process Inspired Cellular Automata Model of Road Traffic

    OpenAIRE

    Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng

    2008-01-01

To provide a more accurate description of the driving behaviors in vehicle queues, a model termed the Markov-Gap cellular automata model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process gives the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize i...

  6. Learning Markov Decision Processes for Model Checking

    DEFF Research Database (Denmark)

    Mao, Hua; Chen, Yingke; Jaeger, Manfred

    2012-01-01

Constructing an accurate system model for formal model verification can be both resource demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend the algorithm on learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation…

  7. Recombination Processes and Nonlinear Markov Chains.

    Science.gov (United States)

    Pirogov, Sergey; Rybko, Alexander; Kalinina, Anastasia; Gelfand, Mikhail

    2016-09-01

    Bacteria are known to exchange genetic information by horizontal gene transfer. Since the frequency of homologous recombination depends on the similarity between the recombining segments, several studies examined whether this could lead to the emergence of subspecies. Most of them simulated fixed-size Wright-Fisher populations, in which the genetic drift should be taken into account. Here, we use nonlinear Markov processes to describe a bacterial population evolving under mutation and recombination. We consider a population structure as a probability measure on the space of genomes. This approach implies the infinite population size limit, and thus, the genetic drift is not assumed. We prove that under these conditions, the emergence of subspecies is impossible.

  8. Derivation of Markov processes that violate detailed balance

    Science.gov (United States)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
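The detailed balance condition central to the abstract above is directly checkable for a finite chain: compute the stationary distribution π and test whether the probability flux π_i P_ij is symmetric. A cyclic three-state chain with a directional bias, as below, is the textbook example of a violator; the matrix is illustrative, not taken from the paper.

```python
import numpy as np

# A cyclic 3-state chain with a strong clockwise bias.
P = np.array([[0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9],
              [0.9, 0.1, 0.0]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

def satisfies_detailed_balance(P, pi, tol=1e-10):
    """Detailed balance holds iff the flux matrix pi_i * P_ij is symmetric."""
    flux = pi[:, None] * P
    return np.allclose(flux, flux.T, atol=tol)
```

Here the chain is doubly stochastic, so π is uniform, yet the flux matrix is asymmetric: probability circulates around the cycle, which is exactly the kind of driven steady state the paper embeds into a larger detailed-balanced model.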

  9. Branching processes and neutral evolution

    CERN Document Server

    Taïb, Ziad

    1992-01-01

    The Galton-Watson branching process has its roots in the problem of extinction of family names which was given a precise formulation by F. Galton as problem 4001 in the Educational Times (17, 1873). In 1875, an attempt to solve this problem was made by H. W. Watson but as it turned out, his conclusion was incorrect. Half a century later, R. A. Fisher made use of the Galton-Watson process to determine the extinction probability of the progeny of a mutant gene. However, it was J. B. S. Haldane who finally gave the first sketch of the correct conclusion. J. B. S. Haldane also predicted that mathematical genetics might some day develop into a "respectable branch of applied mathematics" (quoted in M. Kimura & T. Ohta, Theoretical Aspects of Population Genetics. Princeton, 1971). Since the time of Fisher and Haldane, the two fields of branching processes and mathematical genetics have attained a high degree of sophistication but in different directions. This monograph is a first attempt to apply the current sta...

  10. Pathwise duals of monotone and additive Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    -, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf

  11. Markov branching diffusions: martingales, Girsanov type theorems and applications to the long term behaviour

    NARCIS (Netherlands)

    Engländer, J.; Kyprianou, A.E.

    2001-01-01

Consider a spatial branching particle process where the underlying motion is a conservative diffusion on D ⊆ R^d corresponding to the elliptic operator L on D, and the branching is strictly binary (dyadic), with spatially varying rate β(x) ≥ 0 (and β ≢ 0) which is assumed to be bounded

  12. Markov processes from K. Ito's perspective (AM-155)

    CERN Document Server

    Stroock, Daniel W

    2003-01-01

Kiyosi Itô's greatest contribution to probability theory may be his introduction of stochastic differential equations to explain the Kolmogorov-Feller theory of Markov processes. Starting with the geometric ideas that guided him, this book gives an account of Itô's program. The modern theory of Markov processes was initiated by A. N. Kolmogorov. However, Kolmogorov's approach was too analytic to reveal the probabilistic foundations on which it rests. In particular, it hides the central role played by the simplest Markov processes: those with independent, identically distributed increments

  13. Branching process models of cancer

    CERN Document Server

    Durrett, Richard

    2015-01-01

    This volume develops results on continuous time branching processes and applies them to study rate of tumor growth, extending classic work on the Luria-Delbruck distribution. As a consequence, the authors calculate the probability that mutations that confer resistance to treatment are present at detection and quantify the extent of tumor heterogeneity. As applications, the authors evaluate ovarian cancer screening strategies and give rigorous proofs for results of Heano and Michor concerning tumor metastasis. These notes should be accessible to students who are familiar with Poisson processes and continuous time. Richard Durrett is mathematics professor at Duke University, USA. He is the author of 8 books, over 200 journal articles, and has supervised more than 40 Ph.D. students. Most of his current research concerns the applications of probability to biology: ecology, genetics, and most recently cancer.

  14. NonMarkov Ito Processes with 1-state memory

    Science.gov (United States)

    McCauley, Joseph L.

    2010-08-01

A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.

  15. Filtering of a Markov Jump Process with Counting Observations

    International Nuclear Information System (INIS)

    Ceci, C.; Gerardi, A.

    2000-01-01

This paper concerns the filtering of an R^d-valued Markov pure jump process when only the total number of jumps is observed. Strong and weak uniqueness for the solutions of the filtering equations are discussed

  16. Continuous-time Markov decision processes theory and applications

    CERN Document Server

    Guo, Xianping

    2009-01-01

    This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.

  17. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  18. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    Science.gov (United States)

    English, Thomas

    2005-01-01

A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
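A Monte Carlo simulation of a semi-Markov process, as used in the study above, combines an embedded Markov chain for the state sequence with arbitrary (non-exponential) sojourn-time distributions. The three-state reliability model, the transition probabilities, and the Weibull sojourn parameters below are hypothetical placeholders, not the four-element NASA model.

```python
import random

# Embedded-chain transition probabilities; "failed" is absorbing.
P = {"up": {"degraded": 0.7, "failed": 0.3},
     "degraded": {"up": 0.4, "failed": 0.6}}

SOJOURN_SHAPE = {"up": 1.5, "degraded": 0.8}   # Weibull shape per state

def time_to_failure(rng):
    """Simulate one semi-Markov trajectory and return the time to failure."""
    state, t = "up", 0.0
    while state != "failed":
        # Non-exponential sojourn time: what makes this semi-Markov.
        t += rng.weibullvariate(1.0, SOJOURN_SHAPE[state])
        state = rng.choices(list(P[state]),
                            weights=list(P[state].values()))[0]
    return t

rng = random.Random(42)
samples = [time_to_failure(rng) for _ in range(10_000)]
mttf = sum(samples) / len(samples)
```

Because sojourn times are Weibull rather than exponential, the process is not Markov in continuous time; only the embedded state sequence is, which is exactly the extra modelling freedom the abstract exploits.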

  19. Quantum Markov processes and applications in many-body systems

    International Nuclear Information System (INIS)

    Temme, P. K.

    2010-01-01

    This thesis is concerned with the investigation of quantum as well as classical Markov processes and their application in the field of strongly correlated many-body systems. A Markov process is a special kind of stochastic process, which is determined by an evolution that is independent of its history and only depends on the current state of the system. The application of Markov processes has a long history in the field of statistical mechanics and classical many-body theory. Not only are Markov processes used to describe the dynamics of stochastic systems, but they predominantly also serve as a practical method that allows for the computation of fundamental properties of complex many-body systems by means of probabilistic algorithms. The aim of this thesis is to investigate the properties of quantum Markov processes, i.e. Markov processes taking place in a quantum mechanical state space, and to gain a better insight into complex many-body systems by means thereof. Moreover, we formulate a novel quantum algorithm which allows for the computation of the thermal and ground states of quantum many-body systems. After a brief introduction to quantum Markov processes we turn to an investigation of their convergence properties. We find bounds on the convergence rate of the quantum process by generalizing geometric bounds found for classical processes. We generalize a distance measure that serves as the basis for our investigations, the chi-square divergence, to non-commuting probability spaces. This divergence allows for a convenient generalization of the detailed balance condition to quantum processes. We then devise the quantum algorithm that can be seen as the natural generalization of the ubiquitous Metropolis algorithm to simulate quantum many-body Hamiltonians. 
By this we intend to provide further evidence that a quantum computer can serve as a fully-fledged quantum simulator, which is not only capable of describing the dynamical evolution of quantum systems, but
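The classical Metropolis algorithm that the thesis generalizes to quantum Hamiltonians can be sketched for a toy system; the two-level energies and the temperature below are made up for illustration.

```python
import math
import random

def metropolis(energies, beta, steps, rng):
    """Sample states with probability ∝ exp(-beta * E) via Metropolis moves."""
    state, counts = 0, [0] * len(energies)
    for _ in range(steps):
        proposal = rng.randrange(len(energies))        # symmetric proposal
        dE = energies[proposal] - energies[state]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            state = proposal                           # accept the move
        counts[state] += 1
    return [c / steps for c in counts]

rng = random.Random(0)
freqs = metropolis([0.0, 1.0], beta=1.0, steps=100_000, rng=rng)
# freqs should approximate the Boltzmann weights (≈ [0.731, 0.269])
```

The acceptance rule enforces detailed balance with respect to the Boltzmann distribution; the quantum difficulty addressed in the thesis is that energy eigenvalues and eigenstates of a many-body Hamiltonian are not directly accessible the way the `energies` list is here.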

  20. Bisimulation on Markov Processes over Arbitrary Measurable Spaces

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2014-01-01

We introduce a notion of bisimulation on labelled Markov Processes over generic measurable spaces in terms of arbitrary binary relations. Our notion of bisimulation is proven to coincide with the coalgebraic definition of Aczel and Mendler in terms of the Giry functor, which associates with a mea…

  1. Embedding a State Space Model Into a Markov Decision Process

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren

    2011-01-01

    In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models...

  2. Lectures from Markov processes to Brownian motion

    CERN Document Server

    Chung, Kai Lai

    1982-01-01

This book evolved from several stacks of lecture notes written over a decade and given in classes at slightly varying levels. In transforming the overlapping material into a book, I aimed at presenting some of the best features of the subject with a minimum of prerequisites and technicalities. (Needless to say, one man's technicality is another's professionalism.) But a text frozen in print does not allow for the latitude of the classroom; and the tendency to expand becomes harder to curb without the constraints of time and audience. The result is that this volume contains more topics and details than I had intended, but I hope the forest is still visible with the trees. The book begins at the beginning with the Markov property, followed quickly by the introduction of optional times and martingales. These three topics in the discrete parameter setting are fully discussed in my book A Course in Probability Theory (second edition, Academic Press, 1974). The latter will be referred to throughout this book ...

  3. Hidden Markov processes theory and applications to biology

    CERN Document Server

    Vidyasagar, M

    2014-01-01

    This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. The book starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are t

  4. Berman-Konsowa principle for reversible Markov jump processes

    NARCIS (Netherlands)

    Hollander, den W.Th.F.; Jansen, S.

    2013-01-01

    In this paper we prove a version of the Berman-Konsowa principle for reversible Markov jump processes on Polish spaces. The Berman-Konsowa principle provides a variational formula for the capacity of a pair of disjoint measurable sets. There are two versions, one involving a class of probability

  5. Testing the Adequacy of a Semi-Markov Process

    Science.gov (United States)

    2015-09-17

    classical Brownian motion are two common examples of martingales. Submartingales and supermartingales are two extended classes of martingales. They... movements using Semi-Markov processes,” Tourism Management, Vol. 32, No. 4, 2011, pp. 844–851. [4] Titman, A. C. and Sharples, L. D., “Model

  6. Elements of the theory of Markov processes and their applications

    CERN Document Server

    Bharucha-Reid, A T

    2010-01-01

This graduate-level text and reference in probability, with numerous applications to several fields of science, presents a nonmeasure-theoretic introduction to the theory of Markov processes. The work also covers mathematical models based on the theory, employed in various applied fields. Prerequisites are a knowledge of elementary probability theory, mathematical statistics, and analysis. Appendixes. Bibliographies. 1960 edition.

  7. Envelopes of Sets of Measures, Tightness, and Markov Control Processes

    International Nuclear Information System (INIS)

    Gonzalez-Hernandez, J.; Hernandez-Lerma, O.

    1999-01-01

    We introduce upper and lower envelopes for sets of measures on an arbitrary topological space, which are then used to give a tightness criterion. These concepts are applied to show the existence of optimal policies for a class of Markov control processes

  8. Cascade probabilistic function and the Markov's processes. Chapter 1

    International Nuclear Information System (INIS)

    2002-01-01

In Chapter 1 the physical and mathematical descriptions of radiation processes are carried out. The relation of the cascade probabilistic functions (CPF) for electrons, protons, alpha-particles and ions to Markov chains is shown. Algorithms for CPF calculation that take energy losses into account are given

  9. On Characterisation of Markov Processes Via Martingale Problems

    Indian Academy of Sciences (India)

    This extension is used to improve on a criterion for a probability measure to be invariant for the semigroup associated with the Markov process. We also give examples of martingale problems that are well-posed in the class of solutions which are continuous in probability but for which no r.c.l.l. solution exists.

  10. Markov LIMID processes for representing and solving renewal problems

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Kristensen, Anders Ringgaard; Nilsson, Dennis

    2014-01-01

…to model a Markov LIMID process, where each TemLimid represents a macro action. Algorithms are presented to find optimal plans for a sequence of such macro actions. Use of the algorithms is illustrated based on an extended version of an example from pig production originally used to introduce the LIMID concept…

  11. Active Learning of Markov Decision Processes for System Verification

    DEFF Research Database (Denmark)

    Chen, Yingke; Nielsen, Thomas Dyhre

    2012-01-01

…demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently we seek to minimize the amount of data required. In this paper we propose an algorithm for learning deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and it is demonstrated that the proposed active learning procedure can significantly reduce the amount of data required…

  12. Reggeon field theory and Markov processes

    International Nuclear Information System (INIS)

    Grassberger, P.; Sundermeyer, K.

    1978-01-01

    Reggeon field theory with a quartic coupling in addition to the standard cubic one is shown to be mathematically equivalent to a chemical process where a radical can undergo diffusion, absorption, recombination, and autocatalytic production. Physically, these 'radicals' are wee partons. (Auth.)

  13. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    46 2010, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  14. Critical age-dependent branching Markov processes and their ...

    Indian Academy of Sciences (India)

    1Department of Mathematics and Statistics, Iowa State University, Ames, Iowa 50011, ..... its offspring (at the time of their birth) are not the same, the system is said to be non- ...... chains I, Calculations of rates of approach to homozygosity, Proc.

  15. A Partially Observed Markov Decision Process for Dynamic Pricing

    OpenAIRE

    Yossi Aviv; Amit Pazgal

    2005-01-01

    In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items during a finite sales season. The objective of the retailer is to dynamically price the product in a way that maximizes expected revenues. Our model brings together various types of uncertainties about the demand, some of which are resolvable through sales ob...

  16. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  17. On the entropy of a hidden Markov process.

    Science.gov (United States)

    Jacquet, Philippe; Seroussi, Gadiel; Szpankowski, Wojciech

    2008-05-01

    We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order.
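The product-of-random-matrices view in the abstract above translates directly into a numerical estimate: run the forward recursion of the HMP and average the log of the normalizing constant, which converges to the top Lyapunov exponent and hence to (minus) the entropy rate. The flip probability and crossover probability below are illustrative values, not parameters studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, eps, n = 0.3, 0.1, 50_000          # Markov flip prob., BSC crossover, length
P = np.array([[1 - p, p], [p, 1 - p]])

def observation_matrix(y):
    """M_y[i, j] = P(x_t = j | x_{t-1} = i) * P(y_t = y | x_t = j)."""
    emit = np.array([1 - eps, eps]) if y == 0 else np.array([eps, 1 - eps])
    return P * emit

x, v, log_prob = 0, np.array([0.5, 0.5]), 0.0
for _ in range(n):
    x = 1 - x if rng.random() < p else x        # hidden Markov step
    y = 1 - x if rng.random() < eps else x      # noisy observation (BSC)
    v = v @ observation_matrix(y)               # forward recursion
    s = v.sum()
    log_prob += np.log(s)                       # log P(y_t | y_1..t-1)
    v /= s                                      # renormalize to avoid underflow

entropy_rate = -log_prob / n / np.log(2.0)      # bits per symbol
```

The elusiveness the abstract mentions is visible here: the estimate is a long-run average over a random matrix product, with no closed form to compare against in general.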

  18. Semi adiabatic theory of seasonal Markov processes

    Energy Technology Data Exchange (ETDEWEB)

    Talkner, P [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

The dynamics of many natural and technical systems are essentially influenced by a periodic forcing. Analytic solutions of the equations of motion for periodically driven systems are generally not known. Simulations, numerical solutions or in some limiting cases approximate analytic solutions represent the known approaches to study the dynamics of such systems. Besides the regime of weak periodic forces where linear response theory works, the limit of a slow driving force can often be treated analytically using an adiabatic approximation. For this approximation to hold all intrinsic processes must be fast on the time-scale of a period of the external driving force. We developed a perturbation theory for periodically driven Markovian systems that covers the adiabatic regime but also works if the system has a single slow mode that may even be slower than the driving force. We call it the semi-adiabatic approximation. Some results of this approximation for a system exhibiting stochastic resonance, which usually takes place within the semi-adiabatic regime, are indicated. (author) 1 fig., 8 refs.

  19. The explicit form of the rate function for semi-Markov processes and its contractions

    Science.gov (United States)

Sughiyama, Yuki; Kobayashi, Tetsuya J.

    2018-03-01

    We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.

  20. The semi-Markov process. Generalizations and calculation rules for application in the analysis of systems

    International Nuclear Information System (INIS)

    Hirschmann, H.

    1983-06-01

    The consequences of the basic assumptions of the semi-Markov process as defined from a homogeneous renewal process with a stationary Markov condition are reviewed. The notion of the semi-Markov process is generalized by its redefinition from a nonstationary Markov renewal process. For both the nongeneralized and the generalized case a representation of the first order conditional state probabilities is derived in terms of the transition probabilities of the Markov renewal process. Some useful calculation rules (regeneration rules) are derived for the conditional state probabilities of the semi-Markov process. Compared to the semi-Markov process in its usual definition the generalized process allows the analysis of a larger class of systems. For instance systems with arbitrarily distributed lifetimes of their components can be described. There is also a chance to describe systems which are modified during time by forces or manipulations from outside. (Auth.)

  1. The cascade probabilistic functions and the Markov's processes. Chapter 1

    International Nuclear Information System (INIS)

    2003-01-01

    In Chapter 1 physical and mathematical descriptions of radiation processes are given. The relation of the cascade probabilistic functions (CPFs) to Markov chains is shown. The calculation of CPFs for electrons, taking energy losses into account, is presented, and the CPFs were computed numerically. The contribution of energy losses to the CPFs and to the radiation-defect concentration is estimated. In addition, calculations of primary knock-on atoms and radiation defects under electron irradiation, using CPFs that take energy losses into account, are carried out.

  2. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to calculate r analytically, so it is reasonable to estimate r by simulation. A consistent estimator r(n) of r is obtained for a chain with a countable state space. By suitably modifying the estimator r(n), one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of a finite state space.
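One standard way to reduce the variance of such a simulation estimator is a control variate. The sketch below (the chain, the functional f and the control variate are hypothetical, not from the thesis) estimates r = E_π[f(X)] for a reflecting random walk whose stationary law is uniform, so the control variate's stationary mean is known exactly:

```python
import random
import statistics

# Control-variate variance reduction for estimating r = E_pi[f(X)] of a
# Markov chain by simulation (illustrative sketch). The chain is a random
# walk on {0,...,m} that stays put when a step would leave the interval;
# its stationary law is uniform by detailed balance.
m = 10
f = lambda x: x * x                    # functional of interest
g = lambda x: float(x)                 # control variate with known mean
mu_g = m / 2.0                         # E_pi[g] under the uniform law
r_true = sum(f(x) for x in range(m + 1)) / (m + 1)   # = 35.0
beta = 10.0                            # Cov_pi(f,g)/Var_pi(g), analytic here

def time_averages(n, rng):
    x, sf, sg = 0, 0.0, 0.0
    for _ in range(n):
        y = x + (1 if rng.random() < 0.5 else -1)
        if 0 <= y <= m:
            x = y
        sf += f(x)
        sg += g(x)
    return sf / n, sg / n

rng = random.Random(7)
crude, cv = [], []
for _ in range(200):
    mf, mg = time_averages(4000, rng)
    crude.append(mf)                          # plain estimator r(n)
    cv.append(mf - beta * (mg - mu_g))        # variance-reduced estimator
```

Across replications, the control-variate estimator's variance drops substantially (roughly an order of magnitude for this chain) while remaining consistent for r.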

  3. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
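A minimal MMPP without covariates can be simulated directly (rates below are hypothetical, not the Bracknell fit): a hidden two-state chain alternates regimes, and each regime carries its own Poisson arrival rate.

```python
import random

# A plain two-state MMPP (no covariates, unlike the paper's model): a
# hidden environment chain alternates between a "dry" and a "wet" regime,
# each with its own Poisson arrival rate. All rates are hypothetical.
def simulate_mmpp(t_end, switch=(0.1, 0.5), lam=(0.2, 3.0), seed=0):
    rng = random.Random(seed)
    t, state, arrivals = 0.0, 0, 0
    while t < t_end:
        sojourn = rng.expovariate(switch[state])   # time to regime switch
        dt = min(sojourn, t_end - t)
        # within the regime, arrivals form a Poisson process of rate lam[state]
        u = t + rng.expovariate(lam[state])
        while u <= t + dt:
            arrivals += 1
            u += rng.expovariate(lam[state])
        t += dt
        state = 1 - state
    return arrivals

count = simulate_mmpp(10000.0)
# long-run arrival rate = (10 * 0.2 + 2 * 3.0) / (10 + 2) = 2/3 per unit time
```

Because the rate is modulated, the arrival counts are overdispersed relative to a plain Poisson process with the same mean, which is exactly why MMPPs suit bursty rainfall records.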

  4. The application of Markov decision process in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the aisles and customers coming and going, so traditional path-planning algorithms perform poorly. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path-planning algorithm, based on the traditional Markov decision process. First, MDR is used to plan a global path, and the robot navigates along it. When the sensors detect no obstruction in the state ahead, its immediate state reward value is increased; when the sensors detect an obstacle ahead, a new global path that avoids the obstacle is planned with the current position as the starting point, and that state's immediate reward value is reduced. This continues until the target is reached. After the robot has learned for a period of time, it can avoid those places where obstacles are often present when planning a path. Analysis of simulation experiments shows that the algorithm achieves good results for global path planning in a dynamic environment.
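The MDR algorithm itself is not reproduced here, but the underlying mechanism of planning on a reward-shaped grid can be sketched with plain value iteration (grid size, rewards and obstacle layout are hypothetical): obstacle cells carry a strongly negative immediate reward, so greedy paths avoid them.

```python
# Generic grid-world value-iteration sketch in the spirit of MDP-based
# path planning (not the paper's MDR algorithm; grid, rewards and
# obstacles are hypothetical). Obstacle cells get a large negative
# immediate reward, so planned paths avoid them.
GRID_W, GRID_H = 5, 5
GOAL = (4, 4)
OBSTACLES = {(2, 2), (2, 3)}
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
GAMMA = 0.9

def reward(cell):
    if cell == GOAL:
        return 10.0
    if cell in OBSTACLES:
        return -10.0
    return -1.0                  # step cost encourages short paths

def value_iteration(n_sweeps=100):
    V = {(x, y): 0.0 for x in range(GRID_W) for y in range(GRID_H)}
    for _ in range(n_sweeps):
        for s in V:
            if s == GOAL:
                continue
            best = -float("inf")
            for dx, dy in ACTIONS:
                nxt = (s[0] + dx, s[1] + dy)
                if nxt not in V:
                    nxt = s      # bumping the wall: stay put
                best = max(best, reward(nxt) + GAMMA * V[nxt])
            V[s] = best
    return V

def greedy_path(V, start=(0, 0), max_len=50):
    path, s = [start], start
    while s != GOAL and len(path) < max_len:
        s = max(((s[0] + dx, s[1] + dy) for dx, dy in ACTIONS
                 if (s[0] + dx, s[1] + dy) in V),
                key=lambda n: reward(n) + GAMMA * V[n])
        path.append(s)
    return path

V = value_iteration()
path = greedy_path(V)
```

Lowering the reward of a cell where an obstacle was detected and replanning, as the paper describes, amounts to rerunning this backup with the updated reward map.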

  5. Stochastic model of milk homogenization process using Markov's chain

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2016-01-01

    Full Text Available The development of a mathematical model of the homogenization of dairy products is considered in this work. The model is based on the theory of Markov chains: a Markov chain with discrete states and a continuous parameter, for which the homogenization pressure is taken, forms the model structure. A machine realization of the model is implemented in the structural modeling environment MathWorks Simulink™. Identification of the model parameters was carried out by minimizing the standard deviation from the experimental data for each fraction of the fat phase of the dairy products. The experimental data set consisted of processed micrographic images of the fat-globule size distributions of whole milk samples homogenized at different pressures. The Pattern Search method with the Latin Hypercube search algorithm from the Global Optimization Toolbox library was used for optimization. The accuracy of the calculations averaged 0.88% over all fractions (relative share of units); the maximum relative error was 3.7% at a homogenization pressure of 30 MPa, which may be due to the very abrupt change in the particle size distribution relative to the original milk at the beginning of the homogenization process and the lack of experimental data at homogenization pressures below this value. The proposed mathematical model makes it possible to calculate the volume and mass distribution of the fat phase (fat globules in the product as a function of the homogenization pressure, and it can be used in laboratory research on dairy product composition, as well as in the calculation, design and modeling of process equipment for dairy industry enterprises.

  6. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov decision processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model, and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
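The average-cost formulation can be illustrated on a toy maintenance MDP. The sketch below uses relative value iteration rather than the paper's dual linear programming, and all condition states, transitions and costs are hypothetical.

```python
import numpy as np

# A small average-cost MDP in the spirit of pavement maintenance models
# (states = condition ratings good/fair/poor, actions = do-nothing /
# repair; all numbers are hypothetical). Relative value iteration solves
# the average-cost optimality equation
#   g + h(s) = min_a [ c(s,a) + sum_s' P(s'|s,a) h(s') ].
P = np.array([
    [[0.8, 0.2, 0.0],        # action 0: do nothing, condition worsens
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0],        # action 1: repair, condition resets to good
     [0.9, 0.1, 0.0],
     [0.8, 0.2, 0.0]],
])
cost = np.array([[0.0, 2.0, 10.0],   # cost[a, s]: user costs grow as the
                 [5.0, 6.0, 7.0]])   # road degrades; repairing costs extra

h = np.zeros(3)
for _ in range(5000):
    q = (cost + P @ h).min(axis=0)   # Bellman backup over actions
    g, h_new = q[0], q - q[0]        # normalize at reference state 0
    if np.max(np.abs(h_new - h)) < 1e-12:
        h = h_new
        break
    h = h_new
policy = (cost + P @ h).argmin(axis=0)   # minimize long-run average cost
```

For these numbers the optimal policy repairs in the fair and poor states and does nothing while the road is good, with average cost g = 12/11 per period; budget or performance constraints of the kind the paper discusses would be added on top of this basic formulation.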

  7. The exit-time problem for a Markov jump process

    Science.gov (United States)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
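The exit-time moments characterized by the backward Kolmogorov equation can also be estimated directly by Monte Carlo. A minimal sketch (jump rate and jump law are hypothetical): a process with bounded jumps leaves the interval (-1, 1), and we average the exit times.

```python
import random

# Monte Carlo estimate of the mean exit time of a finite-range jump
# process from (-1, 1): jumps occur at unit rate and are uniform on
# [-h, h], so the jump length is bounded independent of location
# (parameters are hypothetical, chosen only for illustration).
def exit_time(h, rng):
    t, x = 0.0, 0.0
    while -1.0 < x < 1.0:
        t += rng.expovariate(1.0)     # exponential waiting time, unit rate
        x += rng.uniform(-h, h)       # bounded (finite-range) jump
    return t

rng = random.Random(42)
samples = [exit_time(0.5, rng) for _ in range(2000)]
mean_exit = sum(samples) / len(samples)
```

With jump variance h²/3 per unit time, the diffusive estimate of the mean exit time from (-1, 1) starting at 0 is roughly 3/h², plus an overshoot correction that the nonlocal (volume-constrained) formulation accounts for exactly.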

  8. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  9. Perturbation approach to scaled type Markov renewal processes with infinite mean

    OpenAIRE

    Pajor-Gyulai, Zsolt; Szász, Domokos

    2010-01-01

    Scaled type Markov renewal processes generalize classical renewal processes: renewal times come from a one parameter family of probability laws and the sequence of the parameters is the trajectory of an ergodic Markov chain. Our primary interest here is the asymptotic distribution of the Markovian parameter at time t \\to \\infty. The limit, of course, depends on the stationary distribution of the Markov chain. The results, however, are essentially different depending on whether the expectation...

  10. Control Design for Untimed Petri Nets Using Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Cherki Daoui

    2017-01-01

    Full Text Available The design of control sequences for discrete event systems (DESs) modelled by untimed Petri nets (PNs) is presented. PNs are well-known mathematical and graphical models that are widely used to describe distributed DESs, including choices, synchronizations and parallelisms. The domains of application include, but are not restricted to, manufacturing systems, computer science and transportation networks. We are motivated by the observation that such systems need to plan their production or services. The paper is particularly concerned with control issues in uncertain environments, when unexpected events occur or when control errors disturb the behaviour of the system. To deal with such uncertainties, a new approach based on discrete-time Markov decision processes (MDPs) is proposed that combines the modelling power of PNs with the planning power of MDPs. Finally, simulation results illustrate the benefit of our method from the computational point of view. (original abstract)

  11. Accelerated decomposition techniques for large discounted Markov decision processes

    Science.gov (United States)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-12-01

    Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on a partition of the state space into strongly connected components (SCCs), which can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions of discounted MDPs using a value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
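The SCC decomposition step can be sketched with the classical recursive form of Tarjan's algorithm (the paper's variant additionally tracks levels; this sketch does not):

```python
# Tarjan's algorithm finds strongly connected components (SCCs) in one
# depth-first pass; hierarchical MDP solvers use the SCCs, ordered into
# levels, to split a large model into restricted sub-MDPs. This is the
# classical recursive version, not the paper's level-tracking variant.
def tarjan_scc(graph):
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Two cycles joined by a one-way edge: {0,1,2} feeds into {3,4}
g = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: [3]}
components = tarjan_scc(g)
```

Tarjan emits the SCCs in reverse topological order, which is precisely the order in which a level-based hierarchical solver can process the restricted MDPs.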

  12. Simulation-based algorithms for Markov decision processes

    CERN Document Server

    Chang, Hyeong Soo; Fu, Michael C; Marcus, Steven I

    2013-01-01

    Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences.  Many real-world problems modeled by MDPs have huge state and/or action spaces, giving an opening to the curse of dimensionality and so making practical solution of the resulting models intractable.  In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function.  Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel ...

  13. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    Science.gov (United States)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.

  14. Bayesian inference for Markov jump processes with informative observations.

    Science.gov (United States)

    Golightly, Andrew; Wilkinson, Darren J

    2015-04-01

    In this paper we consider the problem of parameter inference for Markov jump process (MJP) representations of stochastic kinetic models. Since transition probabilities are intractable for most processes of interest yet forward simulation is straightforward, Bayesian inference typically proceeds through computationally intensive methods such as (particle) MCMC. Such methods ostensibly require the ability to simulate trajectories from the conditioned jump process. When observations are highly informative, use of the forward simulator is likely to be inefficient and may even preclude an exact (simulation based) analysis. We therefore propose three methods for improving the efficiency of simulating conditioned jump processes. A conditioned hazard is derived based on an approximation to the jump process, and used to generate end-point conditioned trajectories for use inside an importance sampling algorithm. We also adapt a recently proposed sequential Monte Carlo scheme to our problem. Essentially, trajectories are reweighted at a set of intermediate time points, with more weight assigned to trajectories that are consistent with the next observation. We consider two implementations of this approach, based on two continuous approximations of the MJP. We compare these constructs for a simple tractable jump process before using them to perform inference for a Lotka-Volterra system. The best performing construct is used to infer the parameters governing a simple model of motility regulation in Bacillus subtilis.

  15. Asymptotic behaviour near extinction of continuous-state branching processes

    OpenAIRE

    Berzunza, Gabriel; Pardo, Juan Carlos

    2016-01-01

    In this note, we study the asymptotic behaviour near extinction of (sub-)critical continuous-state branching processes. In particular, we establish an analogue of Khintchine's law of the iterated logarithm near the extinction time for a continuous-state branching process whose branching mechanism satisfies a given condition, and for its process reflected at its infimum.

  16. 3rd Workshop on Branching Processes and their Applications

    CERN Document Server

    González, Miguel; Gutiérrez, Cristina; Martínez, Rodrigo; Minuesa, Carmen; Molina, Manuel; Mota, Manuel; Ramos, Alfonso; WBPA15

    2016-01-01

    This volume gathers papers originally presented at the 3rd Workshop on Branching Processes and their Applications (WBPA15), which was held from 7 to 10 April 2015 in Badajoz, Spain (http://branching.unex.es/wbpa15/index.htm). The papers address a broad range of theoretical and practical aspects of branching process theory. Further, they amply demonstrate that the theoretical research in this area remains vital and topical, as well as the relevance of branching concepts in the development of theoretical approaches to solving new problems in applied fields such as Epidemiology, Biology, Genetics, and, of course, Population Dynamics. The topics covered can broadly be classified into the following areas: 1. Coalescent Branching Processes 2. Branching Random Walks 3. Population Growth Models in Varying and Random Environments 4. Size/Density/Resource-Dependent Branching Models 5. Age-Dependent Branching Models 6. Special Branching Models 7. Applications in Epidemiology 8. Applications in Biology and Genetics Offer...

  17. Markov Jump Processes Approximating a Non-Symmetric Generalized Diffusion

    International Nuclear Information System (INIS)

    Limić, Nedžad

    2011-01-01

    Consider a non-symmetric generalized diffusion X(⋅) in ℝ^d determined by the differential operator A(x) = -Σ_{ij} ∂_i a_{ij}(x) ∂_j + Σ_i b_i(x) ∂_i. In this paper the diffusion process is approximated by Markov jump processes X_n(⋅), in homogeneous and isotropic grids G_n ⊂ ℝ^d, which converge in distribution in the Skorokhod space D([0,∞), ℝ^d) to the diffusion X(⋅). The generators of X_n(⋅) are constructed explicitly. Due to the homogeneity and isotropy of the grids, the proposed method for d ≥ 3 can be applied to processes for which the diffusion tensor {a_{ij}(x)}_{i,j=1}^d fulfills an additional condition. The proposed construction offers a simple method for the simulation of sample paths of a non-symmetric generalized diffusion. Simulations are carried out in terms of the jump processes X_n(⋅). For piecewise constant functions a_{ij} on ℝ^d and piecewise continuous functions a_{ij} on ℝ^2 the construction and principal algorithm are described, enabling an easy implementation into a computer code.

  18. On the record process of time-reversible spectrally-negative Markov additive processes

    NARCIS (Netherlands)

    J. Ivanovs; M.R.H. Mandjes (Michel)

    2009-01-01

    We study the record process of a spectrally-negative Markov additive process (MAP). Assuming time-reversibility, a number of key quantities can be given explicitly. It is shown how these key quantities can be used when analyzing the distribution of the all-time maximum attained by MAPs.

  19. Discounted semi-Markov decision processes : linear programming and policy iteration

    NARCIS (Netherlands)

    Wessels, J.; van Nunen, J.A.E.E.

    1975-01-01

    For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
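Policy iteration, one of the two standard algorithms treated, can be sketched for an ordinary discounted MDP (the semi-Markov extension changes the discounting, not the structure of the algorithm; the example MDP below is hypothetical): policy evaluation solves a linear system exactly, and improvement is greedy with respect to the resulting values.

```python
import numpy as np

# Policy iteration for a small discounted MDP (a hypothetical two-state,
# two-action example, not from the paper). Policy evaluation solves
# (I - gamma * P_pi) v = r_pi exactly; improvement is greedy on q-values.
P = np.array([                   # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],    # action 0
    [[0.4, 0.6], [0.5, 0.5]],    # action 1
])
R = np.array([[1.0, 0.0],        # R[a, s] expected immediate reward
              [2.0, 3.0]])
gamma = 0.9

policy = np.zeros(2, dtype=int)  # start with the all-zeros policy
while True:
    P_pi = P[policy, np.arange(2)]           # rows for the chosen actions
    r_pi = R[policy, np.arange(2)]
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)   # evaluation
    q = R + gamma * P @ v                    # q[a, s]
    improved = q.argmax(axis=0)              # greedy improvement
    if np.array_equal(improved, policy):
        break                                # stationary: policy is optimal
    policy = improved
```

The resulting nonrandomized, stationary Markov policy is exactly the structure the theory guarantees to be optimal for discounted (semi-)MDPs.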

  20. Discounted semi-Markov decision processes : linear programming and policy iteration

    NARCIS (Netherlands)

    Wessels, J.; van Nunen, J.A.E.E.

    1974-01-01

    For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal

  1. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
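As a toy illustration of the piecewise-deterministic idea (not the keratin model itself; all rates are hypothetical): a soluble pool evolves deterministically between jumps, while assembly events occur at state-dependent random times and consume precursor.

```python
import random

# A toy piecewise-deterministic Markov process (PDMP) in the spirit of
# the keratin model: a soluble precursor pool c(t) grows deterministically
# by synthesis, while assembly events fire at random times with intensity
# proportional to c(t), each consuming a fixed amount of precursor.
# All rates are hypothetical; the jump times are drawn by thinning on a
# fixed time grid for simplicity.
def simulate_pdmp(t_end, synth=1.0, rate_per_c=0.05, cost=5.0, seed=9):
    rng = random.Random(seed)
    t, c, events = 0.0, 0.0, 0
    dt = 0.01
    while t < t_end:
        c += synth * dt                        # deterministic flow
        if rng.random() < rate_per_c * c * dt: # state-dependent jump
            c = max(0.0, c - cost)             # assembly consumes precursor
            events += 1
        t += dt
    return c, events

c_final, n_events = simulate_pdmp(1000.0)
```

The deterministic component plays the role of the diffusion-driven precursor pool in the paper, and the stochastic point process on the time axis triggers the discrete network-formation events.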

  2. Finite-size scaling of survival probability in branching processes

    OpenAIRE

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Alvaro

    2014-01-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We reveal the finite-size scaling law of the survival probability for a given branching process ruled by a probability distribution of the number of offspring per element whose standard deviation is finite, obtaining the exact scaling function as well as the critical exponents. Our findings prove the universal behavi...

  3. Students' Progress throughout Examination Process as a Markov Chain

    Science.gov (United States)

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical methods in economics at the Czech university of life sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…
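Such a progress model can be sketched as an absorbing Markov chain: with transient course stages and absorbing outcomes, the fundamental matrix gives expected stage visits and absorption probabilities. The stage names and transition numbers below are hypothetical, not the CULS data.

```python
import numpy as np

# Absorbing Markov chain sketch of students moving through course stages
# (all transition numbers are hypothetical). Transient states: enrolled,
# passed-midterm; absorbing states: completed, failed. The fundamental
# matrix N = (I - Q)^{-1} gives expected visits to transient states, and
# B = N @ R gives the absorption probabilities.
Q = np.array([[0.1, 0.6],      # transient -> transient
              [0.0, 0.2]])
R = np.array([[0.05, 0.25],    # transient -> absorbing (completed, failed)
              [0.70, 0.10]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
B = N @ R                          # absorption probabilities
steps = N.sum(axis=1)              # expected number of stages before absorption
```

Each row of Q and R together sums to one, and each row of B sums to one: every student is eventually absorbed in some outcome.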

  4. Computer simulation of the cascade probabilistic functions and their relation with Markov processes

    International Nuclear Information System (INIS)

    Kupchishin, A.A.; Kupchishin, A.I.; Shmygaleva, T.A.

    2002-01-01

    Within the framework of the cascade-probabilistic (CP) method, radiation and physical processes are studied and their relation to Markov processes is established. It is concluded that the CP functions for electrons, protons, alpha particles and ions are described by an inhomogeneous Markov chain. Algorithms are developed, and calculations of the CP functions for charged particles and of the radiation-defect concentration in solids under ion irradiation are carried out. Tables of the various CPF parameters and of the radiation-defect concentrations for charged-particle interaction with solids are given. The book consists of an introduction and two chapters: (1) Cascade probabilistic functions and Markov processes; (2) Radiation-defect formation in solids as part of Markov processes. The book is intended for specialists in the mathematical simulation of radiation defects, solid-state physics, elementary-particle physics and applied mathematics.

  5. A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes

    OpenAIRE

    Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong

    2013-01-01

    We present a technique for speeding up the convergence of value iteration for partially observable Markov decisions processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithms. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...

  6. On Markov processes in the hadron-nuclear and nuclear-nuclear collisions at superhigh energies

    International Nuclear Information System (INIS)

    Lebedeva, A.A.; Rus'kin, V.I.

    2001-01-01

    The article discusses the possibility of using Markov processes as a method for simulating the mean characteristics of hadron-nucleus and nucleus-nucleus collisions at superhigh energies. The simple (hadron-nucleus collisions) and non-simple (nucleus-nucleus collisions) non-uniform Markov processes, with constant output spectrum and absorption of a nucleon in the target nucleus with rapidity y, are considered. Expressions allowing the different collision modes to be simulated were obtained.

  7. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...

  8. Life spans of a Bellman-Harris branching process with immigration

    International Nuclear Information System (INIS)

    Badalbaev, I.S.; Mashrabbaev, A.

    1987-01-01

    One considers two schemes of the Bellman-Harris process with immigration, in which a) the lifetime of the particles is an integer-valued random variable and the immigration is defined by a sequence of independent random variables; b) the distribution of the lifetime of the particles is nonlattice and the immigration is a process with continuous time. The properties of the life spans of such processes are investigated. The results obtained here generalize to the case of Bellman-Harris processes the results of A.M. Zubkov obtained for Markov branching processes. The proof makes essential use of the known inequalities of Goldstein, which estimate the generating function of the Bellman-Harris process in terms of the generating functions of the embedded Galton-Watson process.

  9. Dy163-Ho163 branching: an s-process barometer

    International Nuclear Information System (INIS)

    Beer, H.; Walter, G.; Macklin, R.L.

    1984-01-01

    The neutron capture cross sections of Dy163 and Er164 have been measured to analyze the s-process branching at Dy163-Ho163. The reproduction of the s-process abundance of Er164 via this branching is sensitive to the temperature kT, the neutron density n_n, and the electron density n_e. The calculations, using information on kT and the neutron density n_n from other branchings, give constraints on n_e at the site of the s-process.

  10. Continuous state branching processes in random environment: The Brownian case

    OpenAIRE

    Palau, Sandra; Pardo, Juan Carlos

    2015-01-01

    We consider continuous state branching processes that are perturbed by a Brownian motion. These processes are constructed as the unique strong solution of a stochastic differential equation. The long-term extinction and explosion behaviours are studied. In the stable case, the extinction and explosion probabilities are given explicitly. We find three regimes for the asymptotic behaviour of the explosion probability and, as in the case of branching processes in random environment, we find five...

  11. Properly quantized history-dependent Parrondo games, Markov processes, and multiplexing circuits

    Energy Technology Data Exchange (ETDEWEB)

    Bleiler, Steven A. [Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, PO Box 751, Portland, OR 97207 (United States); Khan, Faisal Shah, E-mail: faisal.khan@kustar.ac.a [Khalifa University of Science, Technology and Research, PO Box 127788, Abu Dhabi (United Arab Emirates)

    2011-05-09

    Highlights: • History-dependent Parrondo games are viewed as Markov processes. • Quantum mechanical analogues of these Markov processes are constructed. • These quantum analogues restrict to the original process on measurement. • The relationship between these analogues and quantum circuits is exhibited. Abstract: In the context of quantum information theory, 'quantization' of various mathematical and computational constructions is said to occur upon the replacement, at various points in the construction, of the classical randomization notion of probability distribution with higher-order randomization notions from quantum mechanics, such as quantum superposition with measurement. For this to be done 'properly', a faithful copy of the original construction is required to exist within the new quantum one, just as is required when a function is extended to a larger domain. Here, procedures for extending history-dependent Parrondo games, Markov processes and multiplexing circuits to their quantum versions are analyzed from a game-theoretic viewpoint, and from this viewpoint proper quantizations are developed.

  12. Road maintenance optimization through a discrete-time semi-Markov decision process

    International Nuclear Information System (INIS)

    Zhang Xueqing; Gao Hui

    2012-01-01

    Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on the renewal theory. However, the discrete-time Markov process cannot capture the real time at which the state transits while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies to minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that they are convenient to incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and the randomness in selecting actions occurs only for a few states.

  13. Finite-size scaling of survival probability in branching processes.

    Science.gov (United States)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y/(e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
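The finite-generation survival probability is easy to probe by Monte Carlo. A sketch with Poisson offspring (an illustrative choice; the paper's result covers any offspring law with finite variance):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def survival_probability(mean_offspring, generations, trials=4000, cap=1000, seed=2):
    """Monte Carlo estimate of P(Z_T > 0) for a Galton-Watson process with
    Poisson offspring. Populations above `cap` are counted as surviving,
    which is a runtime shortcut, not part of the model."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        z = 1
        for _ in range(generations):
            z = sum(poisson(mean_offspring, rng) for _ in range(z))
            if z == 0 or z > cap:
                break
        if z > 0:
            survived += 1
    return survived / trials
```

At criticality (mean 1, offspring variance sigma^2) the classical Kolmogorov asymptotics give P(Z_T > 0) ~ 2/(sigma^2 T), consistent with the scaling picture in the abstract; for Poisson offspring sigma^2 = 1.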

  14. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
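MDPs of the kind the tutorial describes are solved by standard dynamic programming, e.g. value iteration. A minimal sketch on a made-up two-state "healthy/sick" toy problem (states, rewards and transition probabilities are invented for illustration; this is not the liver-transplantation model from the paper):

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Value iteration: V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
    P maps (s, a) -> dict of next-state probabilities, R maps (s, a) -> reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s, a] + gamma * sum(p * V[s2] for s2, p in P[s, a].items())
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions,
                     key=lambda a: R[s, a] + gamma * sum(p * V[s2] for s2, p in P[s, a].items()))
              for s in states}
    return V, policy

# Hypothetical toy problem: in state "sick", "treat" is costly but likely
# restores "healthy"; "wait" is free but leaves the patient sick.
states, actions = ["healthy", "sick"], ["wait", "treat"]
P = {("healthy", "wait"): {"healthy": 0.9, "sick": 0.1},
     ("healthy", "treat"): {"healthy": 0.9, "sick": 0.1},
     ("sick", "wait"): {"sick": 1.0},
     ("sick", "treat"): {"healthy": 0.8, "sick": 0.2}}
R = {("healthy", "wait"): 1.0, ("healthy", "treat"): 0.5,
     ("sick", "wait"): 0.0, ("sick", "treat"): -0.5}
V, policy = value_iteration(states, actions, P, R)
```

The optimal policy trades the immediate treatment cost against the discounted value of returning to the rewarding "healthy" state.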

  15. The application of Markov decision process with penalty function in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going. The path planned by the traditional Markov decision process algorithm is not safe, as it brings the robot very close to tables and chairs. To solve this problem, this paper proposes the Markov decision process with a penalty term, called the MDPPT path planning algorithm, as an extension of the traditional Markov decision process (MDP). In the MDP, if the restaurant delivery robot bumps into an obstacle, the reward it receives is part of the current state reward. In the MDPPT, the reward it receives comprises not only the current state reward but also a negative constant term. Simulation results show that the MDPPT algorithm can plan a more secure path.

  16. Continuous strong Markov processes in dimension one a stochastic calculus approach

    CERN Document Server

    Assing, Sigurd

    1998-01-01

    The book presents an in-depth study of arbitrary one-dimensional continuous strong Markov processes using methods of stochastic calculus. Departing from the classical approaches, a unified investigation of regular as well as arbitrary non-regular diffusions is provided. A general construction method for such processes, based on a generalization of the concept of a perfect additive functional, is developed. The intrinsic decomposition of a continuous strong Markov semimartingale is discovered. The book also investigates relations to stochastic differential equations and fundamental examples of irregular diffusions.

  17. An integrated Markov decision process and nested logit consumer response model of air ticket pricing

    NARCIS (Netherlands)

    Lu, J.; Feng, T.; Timmermans, H.P.J.; Yang, Z.

    2017-01-01

    The paper proposes an optimal air ticket pricing model over the booking horizon that takes into account passengers' purchasing behavior. A Markov decision process incorporating a nested logit consumer response model is established to model the dynamic pricing process.

  18. Efficient tests for equivalence of hidden Markov processes and quantum random walks

    NARCIS (Netherlands)

    U. Faigle; A. Schönhuth (Alexander)

    2011-01-01

    While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine equivalence of two HMP parametrizations
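The polynomial-time algorithm itself is not reproduced here, but equivalence of two HMP parametrizations can be sanity-checked by brute force: compare the probabilities they assign to every output string up to some length (exponential cost, illustration only; matching finite-length marginals is a necessary condition). A sketch:

```python
from itertools import product

def string_prob(pi, T, E, obs):
    """Forward-algorithm probability of an observation string for an HMP with
    initial distribution pi, transition matrix T and emission matrix E."""
    n = len(pi)
    alpha = [pi[i] * E[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * T[i][j] for i in range(n)) * E[j][o]
                 for j in range(n)]
    return sum(alpha)

def agree_up_to(pi1, T1, E1, pi2, T2, E2, n_symbols, max_len, tol=1e-12):
    """True if both parametrizations assign the same probability to every
    output string of length <= max_len."""
    for L in range(1, max_len + 1):
        for obs in product(range(n_symbols), repeat=L):
            if abs(string_prob(pi1, T1, E1, obs)
                   - string_prob(pi2, T2, E2, obs)) > tol:
                return False
    return True
```

As a check, relabelling the hidden states of an HMP yields a different parametrization of the same stochastic process, so the brute-force comparison must report agreement.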

  19. Markov Processes: Exploring the Use of Dynamic Visualizations to Enhance Student Understanding

    Science.gov (United States)

    Pfannkuch, Maxine; Budgett, Stephanie

    2016-01-01

    Finding ways to enhance introductory students' understanding of probability ideas and theory is a goal of many first-year probability courses. In this article, we explore the potential of a prototype tool for Markov processes using dynamic visualizations to develop in students a deeper understanding of the equilibrium and hitting times…

  20. Data-based inference of generators for Markov jump processes using convex optimization

    NARCIS (Netherlands)

    D.T. Crommelin (Daan); E. Vanden-Eijnden (Eric)

    2009-01-01

    A variational approach to the estimation of generators for Markov jump processes from discretely sampled data is discussed and generalized. In this approach, one first calculates the spectrum of the discrete maximum likelihood estimator for the transition matrix consistent with

  1. Strategy Complexity of Finite-Horizon Markov Decision Processes and Simple Stochastic Games

    DEFF Research Database (Denmark)

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu

    2012-01-01

    Markov decision processes (MDPs) and simple stochastic games (SSGs) provide a rich mathematical framework to study many important problems related to probabilistic systems. MDPs and SSGs with finite-horizon objectives, where the goal is to maximize the probability to reach a target state in a given...

  2. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Science.gov (United States)

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  3. An investigation of cognitive 'branching' processes in major depression.

    Science.gov (United States)

    Walsh, Nicholas D; Seal, Marc L; Williams, Steven C R; Mehta, Mitul A

    2009-11-10

    Patients with depression demonstrate cognitive impairment on a wide range of cognitive tasks, particularly putative tasks of frontal lobe function. Recent models of frontal lobe function have argued that the frontal pole region is involved in cognitive branching, a process requiring holding in mind one goal while performing sub-goal processes. Evidence for this model comes from functional neuroimaging and frontal-pole lesion patients. We have utilised these new concepts to investigate the possibility that patients with depression are impaired at cognitive 'branching'. 11 non-medicated patients with major depression were compared to 11 matched controls in a behavioural study on a task of cognitive 'branching'. In the version employed here, we recorded participants' performance as they learnt to perform the task. This involved participants completing a control condition, followed by a working memory condition, a dual-task condition and finally the branching condition, which integrates processes in the working memory and dual-task conditions. We also measured participants on a number of other cognitive tasks as well as mood-state before and after the branching experiment. Patients took longer to learn the first condition, but performed comparably to controls after six runs of the task. Overall, reaction times decreased with repeated exposure on the task conditions in controls, with this effect attenuated in patients. Importantly, no differences were found between patients and controls on the branching condition. There was, however, a significant change in mood-state, with patients increasing in positive affect and decreasing in negative affect after the experiment. We found no clear evidence of a fundamental impairment in anterior prefrontal 'branching processes' in patients with depression. Rather, our data argue for a contextual learning impairment underlying cognitive dysfunction in this disorder. Our data suggest that MDD patients are able to perform high

  4. Neutron fluctuations a treatise on the physics of branching processes

    CERN Document Server

    Pázsit, Imre

    2007-01-01

    The transport of neutrons in a multiplying system is an area of branching processes with a clear formalism. This book presents an account of the mathematical tools used in describing branching processes, which are then used to derive a large number of properties of the neutron distribution in multiplying systems with or without an external source. In the second part of the book, the theory is applied to the description of the neutron fluctuations in nuclear reactor cores as well as in small samples of fissile material. The question of how to extract information about the system under study is discussed. In particular the measurement of the reactivity of subcritical cores, driven with various Poisson and non-Poisson (pulsed) sources, and the identification of fissile material samples, is illustrated. The book gives pragmatic information for those planning and executing and evaluating experiments on such systems. - Gives a complete treatise of the mathematics of branching particle processes, and in particular n...

  5. Simple model of inhibition of chain-branching combustion processes

    Science.gov (United States)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical for hydrocarbon systems. The model is based on the generalised model of the combustion process with chain-branching reaction combined with the one-stage reaction describing the thermal mode of flame propagation with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in flame and leads to the change of reaction mode from the chain-branching reaction to a thermal mode of flame propagation. With the increase of inhibitor the transition of chain-branching mode of reaction to the reaction with straight-chains (non-branching chain reaction) is observed. The inhibition part of the model includes a block of three reactions to describe the influence of the inhibitor. The heat losses are incorporated into the model via Newton cooling. The flame extinction is the result of the decreased heat release of inhibited reaction processes and the suppression of radical overshoot with the further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot, inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of chemical influence of inhibitor, and (3) transition to thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with the addition of the inhibitor observed using detailed kinetic models.

  6. Spectral analysis of multi-dimensional self-similar Markov processes

    International Nuclear Information System (INIS)

    Modarresi, N; Rezakhah, S

    2010-01-01

    In this paper we consider a discrete scale invariant (DSI) process {X(t), t in R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k in W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete-time scale invariant (DT-SI) process X(.) with the parameter space {α^k, k in W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t in R+} is also Markov in the wide sense and provide a discrete-time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T - 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.

  7. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that usage of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.

  8. An introduction to branching measure-valued processes

    CERN Document Server

    Dynkin, Eugene B

    1994-01-01

    For about half a century, two classes of stochastic processes-Gaussian processes and processes with independent increments-have played an important role in the development of stochastic analysis and its applications. During the last decade, a third class-branching measure-valued (BMV) processes-has also been the subject of much research. A common feature of all three classes is that their finite-dimensional distributions are infinitely divisible, allowing the use of the powerful analytic tool of Laplace (or Fourier) transforms. All three classes, in an infinite-dimensional setting, provide means for study of physical systems with infinitely many degrees of freedom. This is the first monograph devoted to the theory of BMV processes. Dynkin first constructs a large class of BMV processes, called superprocesses, by passing to the limit from branching particle systems. Then he proves that, under certain restrictions, a general BMV process is a superprocess. A special chapter is devoted to the connections between ...

  9. Process Modeling for Energy Usage in “Smart House” System with a Help of Markov Discrete Chain

    Directory of Open Access Journals (Sweden)

    Victor Kravets

    2016-05-01

    A method for evaluating the economic efficiency of technical systems using discrete Markov chain modelling is illustrated by a "Smart house" system consisting, for example, of three independently functioning elements. A dynamic model of a random power consumption process is built in the form of a symmetrical state graph of a heterogeneous discrete Markov chain. The corresponding mathematical model of the random Markov power consumption process in the "smart house" system is developed in recurrent matrix form. A technique is developed for the statistical determination of the probabilities of random transitions of system elements and of the corresponding transition probability matrix of the inhomogeneous discrete Markov chain. Statistically determined random transitions of the elements' power consumption and the corresponding distribution laws are introduced. The matrix of transition prices and the expectations for the possible state transitions yield, eventually, the cost of the Markov power consumption process throughout the day.
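The statistical step (estimating transition probabilities from observed transitions, then costing them with a price matrix) can be sketched as follows; the sequence and matrices below are made up for illustration:

```python
def estimate_transition_matrix(sequence, n_states):
    """Maximum-likelihood estimate of a Markov transition matrix from an
    observed state sequence: normalized transition counts. Rows with no
    observed transitions fall back to a uniform distribution."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 1.0 / n_states for c in row])
    return P

def expected_transition_cost(P, C, state_dist):
    """Expected one-step cost given a price matrix C[i][j] for the i -> j
    transition and a current distribution over states."""
    n = len(P)
    return sum(state_dist[i] * P[i][j] * C[i][j]
               for i in range(n) for j in range(n))
```

Chaining `expected_transition_cost` over the 24 hourly steps of a day, with hour-dependent matrices, would give the daily cost figure the abstract refers to.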

  10. Non-parametric Bayesian inference for inhomogeneous Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael

    is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way...

  11. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposed a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time-domain and time-frequency-domain methods. However, the extracted original features are still high-dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and the Cao method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.

  12. A Correlated Random Effects Model for Non-homogeneous Markov Processes with Nonignorable Missingness.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2013-05-01

    Life history data arising in clusters with prespecified assessment time points for patients often feature incomplete data, since patients may choose to visit the clinic based on their needs. Markov process models provide a useful tool for describing disease progression in life history data. The literature mainly focuses on time-homogeneous processes. In this paper we develop methods to deal with non-homogeneous Markov processes with incomplete clustered life history data. A correlated random effects model is developed to deal with the nonignorable missingness, and a time transformation is employed to address the non-homogeneity in the transition model. Maximum likelihood estimation based on the Monte Carlo EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well in many situations. We also apply this method to an Alzheimer's disease study.

  13. Choice of the parameters of the cusum algorithms for parameter estimation in the markov modulated poisson process

    OpenAIRE

    Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

    2016-01-01

    The CUSUM algorithm for detecting chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to characteristics of the process. The procedure of the process parameter estimation was described.
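A minimal version of the CUSUM statistic for a Poisson rate switch, with the increment taken as the per-observation log-likelihood ratio (the rates, threshold and synthetic data here are illustrative assumptions, not the paper's recommended settings):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler, used only to generate synthetic counts."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def cusum_poisson(counts, lam0, lam1, h):
    """One-sided CUSUM for a Poisson rate increase lam0 -> lam1.
    S_t = max(0, S_{t-1} + x_t*ln(lam1/lam0) - (lam1 - lam0));
    returns the index of the first alarm (S_t > h), or None."""
    llr = math.log(lam1 / lam0)
    s = 0.0
    for t, x in enumerate(counts):
        s = max(0.0, s + x * llr - (lam1 - lam0))
        if s > h:
            return t
    return None

rng = random.Random(7)
# Synthetic data: the rate switches from 2 to 6 at t = 200.
counts = [poisson(2.0, rng) for _ in range(200)] + [poisson(6.0, rng) for _ in range(100)]
alarm = cusum_poisson(counts, lam0=2.0, lam1=6.0, h=10.0)
```

Before the change the increment has negative drift, so the statistic hugs zero; after the change the positive drift pushes it over the threshold within a few observations.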

  14. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    Science.gov (United States)

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  15. The Langevin Approach: An R Package for Modeling Markov Processes

    Directory of Open Access Journals (Sweden)

    Philip Rinn

    2016-08-01

    We describe an R package developed by the research group Turbulence, Wind Energy and Stochastics (TWiSt) at the Carl von Ossietzky University of Oldenburg, which extracts the (stochastic) evolution equation underlying a set of data or measurements. The method can be directly applied to data sets with one or two stochastic variables. Examples for the one-dimensional and two-dimensional cases are provided. This framework is valid under a small set of conditions which are explicitly presented and which imply simple preliminary test procedures for the data. For Markovian processes involving Gaussian white noise, a stochastic differential equation is derived straightforwardly from the time series and captures the full dynamical properties of the underlying process. Even in cases where these conditions are not fulfilled, there are alternative versions of this method, which we discuss briefly and for which we provide the user with the necessary bibliography.
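The package itself is in R; the core idea (estimating drift and diffusion from conditional moments of the increments) can be sketched in Python on a simulated Ornstein-Uhlenbeck series. All parameters and bin edges below are illustrative:

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, seed=3):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def kramers_moyal(xs, dt, edges):
    """Bin-wise conditional moments of the increments:
    D1(x) ~ <dx | x>/dt  (drift),  D2(x) ~ <dx^2 | x>/(2 dt)  (diffusion)."""
    nbins = len(edges) - 1
    s1, s2, cnt = [0.0] * nbins, [0.0] * nbins, [0] * nbins
    for x0, x1 in zip(xs, xs[1:]):
        for b in range(nbins):
            if edges[b] <= x0 < edges[b + 1]:
                dx = x1 - x0
                s1[b] += dx
                s2[b] += dx * dx
                cnt[b] += 1
                break
    D1 = [s1[b] / (cnt[b] * dt) if cnt[b] else float("nan") for b in range(nbins)]
    D2 = [s2[b] / (2 * cnt[b] * dt) if cnt[b] else float("nan") for b in range(nbins)]
    return D1, D2, cnt

xs = simulate_ou(theta=1.0, sigma=0.5, dt=0.01, n=200_000)
edges = [-0.6, -0.2, 0.2, 0.6]
D1, D2, cnt = kramers_moyal(xs, 0.01, edges)
```

For this process the true drift is D1(x) = -x and the true diffusion is D2 = sigma^2/2 = 0.125, so the recovered bin values should show a negative drift slope and a roughly constant diffusion.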

  16. The Markov process admits a consistent steady-state thermodynamic formalism

    Science.gov (United States)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived in mathematics. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jump, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes to that for master equations, as the discretization step gets smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of underlying detailed models.

  17. A fast exact simulation method for a class of Markov jump processes.

    Science.gov (United States)

    Li, Yao; Hu, Lili

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
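The Hashing-Leaping method itself is not reproduced here; as a point of reference, the classical direct-method SSA, whose per-event cost grows with the number of clocks, can be sketched for a birth-death process (rates and horizon are illustrative):

```python
import math
import random

def gillespie_birth_death(birth, death, t_end, seed=5):
    """Direct-method SSA for a birth-death process: constant birth rate
    `birth`, per-individual death rate `death`. Returns the time-averaged
    population; the stationary law is Poisson with mean birth/death."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        total = birth + n * death            # total rate of the two clocks
        dt = -math.log(1.0 - rng.random()) / total  # exponential waiting time
        dt = min(dt, t_end - t)
        area += n * dt                       # accumulate for the time average
        t += dt
        if t >= t_end:
            break
        n += 1 if rng.random() * total < birth else -1
    return area / t_end

avg = gillespie_birth_death(birth=10.0, death=1.0, t_end=2000.0)
```

With birth rate 10 and unit death rate the long-run mean population is 10, which the time average should reproduce closely over a long horizon.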

  18. An investigation of cognitive 'branching' processes in major depression

    Directory of Open Access Journals (Sweden)

    Williams Steven CR

    2009-11-01

    Background: Patients with depression demonstrate cognitive impairment on a wide range of cognitive tasks, particularly putative tasks of frontal lobe function. Recent models of frontal lobe function have argued that the frontal pole region is involved in cognitive branching, a process requiring holding in mind one goal while performing sub-goal processes. Evidence for this model comes from functional neuroimaging and frontal-pole lesion patients. We have utilised these new concepts to investigate the possibility that patients with depression are impaired at cognitive 'branching'. Methods: 11 non-medicated patients with major depression were compared to 11 matched controls in a behavioural study on a task of cognitive 'branching'. In the version employed here, we recorded participants' performance as they learnt to perform the task. This involved participants completing a control condition, followed by a working memory condition, a dual-task condition and finally the branching condition, which integrates processes in the working memory and dual-task conditions. We also measured participants on a number of other cognitive tasks as well as mood-state before and after the branching experiment. Results: Patients took longer to learn the first condition, but performed comparably to controls after six runs of the task. Overall, reaction times decreased with repeated exposure on the task conditions in controls, with this effect attenuated in patients. Importantly, no differences were found between patients and controls on the branching condition. There was, however, a significant change in mood-state, with patients increasing in positive affect and decreasing in negative affect after the experiment. Conclusion: We found no clear evidence of a fundamental impairment in anterior prefrontal 'branching processes' in patients with depression. Rather, our data argue for a contextual learning impairment underlying cognitive dysfunction in this disorder.

  19. Description of quantum-mechanical motion by using the formalism of non-Markov stochastic process

    International Nuclear Information System (INIS)

    Skorobogatov, G.A.; Svertilov, S.I.

    1999-01-01

    The principal possibilities of mathematical modelling of quantum mechanical motion by the theory of real stochastic processes are considered. The set of equations corresponding to the simplest case of a two-level system undergoing transitions under the influence of an electromagnetic field is obtained. It is shown that quantum-mechanical processes are purely discrete processes of non-Markovian type. They are continuous processes in the space of probability amplitudes and possess the property of quantum Markovity. The formulation of quantum mechanics in terms of the theory of stochastic processes is necessary for its generalization to small space-time intervals.

  20. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    Science.gov (United States)

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
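The idea of sample-based planning (generate an action from simulated rollouts of a generative model instead of enumerating the model dynamics) can be sketched as a plain rollout planner. The "move toward a goal state" toy generative model below is invented for illustration and is far simpler than the planners and the admissions model studied in the paper:

```python
import random

def rollout_plan(state, actions, step, n_rollouts=200, depth=20, gamma=0.95, seed=11):
    """Pick the action with the best average sampled return: for each
    candidate action, simulate `n_rollouts` trajectories through the
    generative model `step(state, action, rng) -> (next_state, reward)`,
    choosing uniformly random actions after the first step."""
    rng = random.Random(seed)
    best_a, best_v = None, float("-inf")
    for a in actions:
        total = 0.0
        for _ in range(n_rollouts):
            s, act, ret, disc = state, a, 0.0, 1.0
            for _ in range(depth):
                s, r = step(s, act, rng)
                ret += disc * r
                disc *= gamma
                act = rng.choice(actions)  # random rollout policy
            total += ret
        if total / n_rollouts > best_v:
            best_v, best_a = total / n_rollouts, a
    return best_a

# Hypothetical toy generative model: integer positions 0..20, goal at 10,
# per-step reward is minus the distance to the goal.
def step(pos, action, rng):
    nxt = max(0, min(20, pos + (1 if action == "up" else -1)))
    return nxt, -abs(nxt - 10)
```

Only the generative model is queried, never a full transition matrix, which is what lets such planners scale to state spaces too large to enumerate.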

  1. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
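
    A growth process of this kind can be simulated directly. The specific functional form of the size-dependent noise below is an illustrative assumption (the letter derives a master equation for its own variant):

```python
import random

def grow(x0, steps, rate=0.05, noise=0.1, seed=1):
    """One realization of a discrete-time Markov chain for cluster size with
    a size-dependent additive noise term. The functional form here is an
    illustrative assumption, not the model of the letter."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += rate * x + noise * x * rng.gauss(0.0, 1.0)
        x = max(x, 1e-9)  # cluster sizes remain positive
    return x

def ensemble(n, x0, steps):
    """Many realizations approximate the transient size distribution; when
    the noise amplitude scales with size and growth starts from a point
    mass, the log sizes are approximately normally distributed."""
    return [grow(x0, steps, seed=k) for k in range(n)]
```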

  2. Assistive system for people with Apraxia using a Markov decision process.

    Science.gov (United States)

    Jean-Baptiste, Emilie M D; Russell, Martin; Rothstein, Pia

    2014-01-01

    CogWatch is an assistive system to re-train stroke survivors suffering from Apraxia or Action Disorganization Syndrome (AADS) to complete activities of daily living (ADLs). This paper describes an approach to real-time planning based on a Markov Decision Process (MDP), and demonstrates its ability to improve task performance via user simulation. The paper concludes with a discussion of the remaining challenges and future enhancements.

  3. «Concurrency» in M-L-Parallel Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    Larkin Eugene

    2017-01-01

    Full Text Available This article investigates the functioning of a swarm of robots, each of which receives instructions from an external human operator and autonomously executes them. An abstract model of the functioning of a robot, a group of robots, and multiple groups of robots was obtained using the notion of a semi-Markov process. The concepts of aggregated initial and aggregated absorbing states were introduced. Expressions for calculating the time parameters of concurrency were derived.

  4. The Green-Kubo formula for general Markov processes with a continuous time parameter

    International Nuclear Information System (INIS)

    Yang Fengxia; Liu Yong; Chen Yong

    2010-01-01

    For general Markov processes, the Green-Kubo formula is shown to be valid under a mild condition. A class of stochastic evolution equations on a separable Hilbert space and three typical infinite systems of locally interacting diffusions on Z d (irreversible in most cases) are shown to satisfy the Green-Kubo formula, and the Einstein relations for these stochastic evolution equations are shown explicitly as a corollary.

  5. Agriculture and Food Processes Branch program summary document

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-06-01

    The work of the Agriculture and Food Processes Branch within the US DOE's Office of Industrial Programs is discussed and reviewed. The Branch is responsible for assisting the food and agricultural sectors of the economy in increasing their energy efficiency by cost sharing with industry the development and demonstration of technologies that industry by itself would not develop because of a greater than normal risk factor, but which have significant energy conservation benefits. This task is made more difficult by the diversity of agriculture and the food industry. The focus of the program is now on the development and demonstration of energy conservation technology in high energy use industry sectors and agricultural functions (e.g., sugar processing, meat processing, irrigation, and crop drying), high energy use functions common to many sectors of the food industry (e.g., refrigeration, drying, and evaporation), and innovative concepts (e.g., energy integrated farm systems). Specific projects within the program are summarized. (LCL)

  6. Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing

    International Nuclear Information System (INIS)

    Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria

    2014-01-01

    In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. For that, we propose an extension of the concept of WSM to a widely linear (WL) setting and the study of new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forwards and backwards directions of time. Thus, we provide two forwards and backwards Markovian representations for WLM signals. Finally, different recursive estimation algorithms are obtained for these models.

  7. A reward semi-Markov process with memory for wind speed modeling

    Science.gov (United States)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

    -order Markov chain with different numbers of states, and Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. In order to assess this issue we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and a backtesting procedure is used to compare results on first and second order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
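
    The defining feature of a semi-Markov wind model, that holding times in each speed regime need not be exponential or geometric, can be sketched in a few lines. The regimes, transition weights, sojourn-time law, and per-state speed ranges below are illustrative placeholders, not values fitted to data:

```python
import random

# Minimal semi-Markov generator of a synthetic wind-speed series. The speed
# regimes, transition weights, sojourn-time law, and per-state speed ranges
# are illustrative placeholders, not fitted values.
STATES = ("calm", "moderate", "strong")
TRANS = {"calm": (0.0, 0.8, 0.2),
         "moderate": (0.5, 0.0, 0.5),
         "strong": (0.3, 0.7, 0.0)}
SPEED = {"calm": (1.0, 3.0), "moderate": (4.0, 8.0), "strong": (9.0, 15.0)}

def synthetic_series(length, seed=0):
    rng = random.Random(seed)
    series, state = [], "calm"
    while len(series) < length:
        # Unlike a Markov chain, a semi-Markov process allows an arbitrary
        # holding-time distribution in each state (uniform integers here).
        sojourn = rng.randint(2, 8)
        lo, hi = SPEED[state]
        series.extend(rng.uniform(lo, hi) for _ in range(sojourn))
        state = rng.choices(STATES, weights=TRANS[state])[0]
    return series[:length]
```

    Backtesting in the paper's sense would compare moments of rewards computed on such synthetic series against those computed on the recorded series.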

  8. Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process

    Science.gov (United States)

    Migawa, Klaudiusz

    2012-12-01

    The issues presented in this paper concern the control of the exploitation process implemented in complex systems of exploitation for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process formulated as a semi-Markov decision process. The method consists of preparing a decisive semi-Markov model of the exploitation process for the technical objects and then selecting the best (optimal) control strategy from among the possible decisive variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the system of exploitation. In this method, specifying the optimal strategy for availability control means choosing the sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to determine the optimal control strategy. The method is illustrated with the example of the exploitation process of means of transport implemented in a real municipal bus transport system. The model of the exploitation process for the means of transport was prepared on the basis of results collected in this real transport system, taking into account that the model of the process constitutes a homogeneous semi-Markov process.

  9. Polymers and Random graphs: Asymptotic equivalence to branching processes

    International Nuclear Information System (INIS)

    Spouge, J.L.

    1985-01-01

    In 1974, Falk and Thomas did a computer simulation of Flory's Equireactive RA_f Polymer model, rings forbidden and rings allowed. Asymptotically, the Rings Forbidden model tended to Stockmayer's RA_f distribution (in which the sol distribution "sticks" after gelation), while the Rings Allowed model tended to the Flory version of the RA_f distribution. In 1965, Whittle introduced the Tree and Pseudomultigraph models. We show that these random graphs generalize the Falk and Thomas models by incorporating first-shell substitution effects. Moreover, asymptotically the Tree model displays postgelation "sticking." Hence this phenomenon results from the absence of rings and occurs independently of equireactivity. We also show that the Pseudomultigraph model is asymptotically identical to the Branching Process model introduced by Gordon in 1962. This provides a possible basis for the Branching Process model in standard statistical mechanics.

  10. The Logic of Adaptive Behavior - Knowledge Representation and Algorithms for the Markov Decision Process Framework in First-Order Domains

    NARCIS (Netherlands)

    van Otterlo, M.

    2008-01-01

    Learning and reasoning in large, structured, probabilistic worlds is at the heart of artificial intelligence. Markov decision processes have become the de facto standard in modeling and solving sequential decision making problems under uncertainty. Many efficient reinforcement learning and dynamic

  11. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    Science.gov (United States)

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.

  12. The second order extended Kalman filter and Markov nonlinear filter for data processing in interferometric systems

    International Nuclear Information System (INIS)

    Ermolaev, P; Volynsky, M

    2014-01-01

    Recurrent stochastic data processing algorithms using representation of interferometric signal as output of a dynamic system, which state is described by vector of parameters, in some cases are more effective, compared with conventional algorithms. Interferometric signals depend on phase nonlinearly. Consequently it is expedient to apply algorithms of nonlinear stochastic filtering, such as Kalman type filters. An application of the second order extended Kalman filter and Markov nonlinear filter that allows to minimize estimation error is described. Experimental results of signals processing are illustrated. Comparison of the algorithms is presented and discussed.

  13. A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs

    DEFF Research Database (Denmark)

    Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard

    2016-01-01

    Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...

  14. Markov-modulated infinite-server queues driven by a common background process

    OpenAIRE

    Mandjes , Michel; De Turck , Koen

    2016-01-01

    This paper studies a system with multiple infinite-server queues which are modulated by a common background process. If this background process, being modeled as a finite-state continuous-time Markov chain, is in state j, then the arrival rate into the i-th queue is λ_{i,j}, whereas the service times of customers present in this queue are exponentially distributed with mean µ_{i,j}^{-1}; at each of the individual queues all customers present are served in parallel (thus refl...
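
    The dynamics described above can be simulated with a standard event-driven loop. The sketch below covers a single infinite-server queue modulated by a two-state background chain; all rates are illustrative assumptions, not parameters from the paper:

```python
import random

def mm_infty(T, lam=(2.0, 8.0), mu=(1.0, 0.5), q=(0.3, 0.7), seed=0):
    """Event-driven simulation of one infinite-server queue modulated by a
    two-state background chain: arrival rate lam[j] and per-customer service
    rate mu[j] depend on the background state j, and q[j] is the rate of
    leaving state j. All rates here are illustrative assumptions."""
    rng = random.Random(seed)
    t, j, n = 0.0, 0, 0  # time, background state, customers in system
    while True:
        total = q[j] + lam[j] + n * mu[j]
        t += rng.expovariate(total)
        if t > T:
            return n
        u = rng.random() * total
        if u < q[j]:
            j = 1 - j        # background process switches state
        elif u < q[j] + lam[j]:
            n += 1           # arrival into the queue
        else:
            n -= 1           # one of the n parallel services completes
```

    Because all customers are served in parallel, the total service rate scales with the number present, which is what distinguishes the infinite-server queue from a single-server one.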

  15. A Multi-stage Representation of Cell Proliferation as a Markov Process.

    Science.gov (United States)

    Yates, Christian A; Ford, Matthew J; Mort, Richard L

    2017-12-01

    The stochastic simulation algorithm commonly known as Gillespie's algorithm (originally derived for modelling well-mixed systems of chemical reactions) is now used ubiquitously in the modelling of biological processes in which stochastic effects play an important role. In well-mixed scenarios at the sub-cellular level it is often reasonable to assume that times between successive reaction/interaction events are exponentially distributed and can be appropriately modelled as a Markov process and hence simulated by the Gillespie algorithm. However, Gillespie's algorithm is routinely applied to model biological systems for which it was never intended. In particular, processes in which cell proliferation is important (e.g. embryonic development, cancer formation) should not be simulated naively using the Gillespie algorithm since the history-dependent nature of the cell cycle breaks the Markov property. The variance in experimentally measured cell cycle times is far less than in an exponential cell cycle time distribution with the same mean. Here we suggest a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm. By breaking the cell cycle into a number of independent exponentially distributed stages, we can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions. The consequences of our revised mathematical model are explored analytically as far as possible. We demonstrate the importance of employing the correct cell cycle time distribution by recapitulating the results from two models incorporating cellular proliferation (one spatial and one non-spatial) and demonstrating that changing the cell cycle time distribution makes quantitative and qualitative differences to the outcome of the models. Our adaptation will allow modellers and experimentalists alike to appropriately represent cellular
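
    The multi-stage device can be verified in a few lines: summing k exponential stages gives an Erlang cell-cycle time with the same mean but variance reduced by a factor of k, while each individual stage remains memoryless. The mean and stage count below are arbitrary illustrative values:

```python
import random

def cycle_time(mean, stages, rng):
    """Cell-cycle time as a sum of `stages` independent exponential stages.
    Each stage is memoryless (so Gillespie-style simulation still applies),
    while the total is Erlang distributed: same mean, variance reduced by a
    factor of `stages`."""
    rate = stages / mean
    return sum(rng.expovariate(rate) for _ in range(stages))

def sample_stats(mean=10.0, stages=5, n=20000, seed=0):
    """Empirical mean and variance; theory gives mean and mean**2 / stages."""
    rng = random.Random(seed)
    xs = [cycle_time(mean, stages, rng) for _ in range(n)]
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return m, var
```

    With a single stage this reduces to the exponential distribution the text warns against; increasing the stage count narrows the distribution toward the experimentally observed low variance.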

  16. Graph theoretical calculation of systems reliability with semi-Markov processes

    International Nuclear Information System (INIS)

    Widmer, U.

    1984-06-01

    The determination of the state probabilities and related quantities of a system characterized by an SMP (or a homogeneous MP) can be performed by means of graph-theoretical methods. The calculation procedures for semi-Markov processes based on signal flow graphs are reviewed. Some methods from electrotechnics are adapted in order to obtain a representation of the state probabilities by means of trees. From this some formulas are derived for the asymptotic state probabilities and for the mean life-time in reliability considerations. (Auth.)

  17. Pitch angle scattering of relativistic electrons from stationary magnetic waves: Continuous Markov process and quasilinear theory

    International Nuclear Information System (INIS)

    Lemons, Don S.

    2012-01-01

    We develop a Markov process theory of charged particle scattering from stationary, transverse, magnetic waves. We examine approximations that lead to quasilinear theory, in particular the resonant diffusion approximation. We find that, when appropriate, the resonant diffusion approximation simplifies the result of the weak turbulence approximation without significant further restricting the regime of applicability. We also explore a theory generated by expanding drift and diffusion rates in terms of a presumed small correlation time. This small correlation time expansion leads to results valid for relatively small pitch angle and large wave energy density - a regime that may govern pitch angle scattering of high-energy electrons into the geomagnetic loss cone.

  18. Detection of Text Lines of Handwritten Arabic Manuscripts using Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Youssef Boulid

    2016-09-01

    Full Text Available In character recognition systems, the segmentation phase is critical since the accuracy of the recognition depends strongly on it. In this paper we present an approach based on Markov Decision Processes to extract text lines from binary images of Arabic handwritten documents. The proposed approach detects the connected components belonging to the same line by making use of knowledge about the features and arrangement of those components. The initial results show that the system is promising for extracting Arabic handwritten lines.

  19. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  20. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    Science.gov (United States)

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011)] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011)], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
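
    The flavor of a discrete-time Markov-chain iteration for SIS dynamics can be sketched on a tiny arbitrary graph. The variant below tracks per-node infection probabilities and, for brevity, neglects the same-step reinfection term included in the full MMCA of Gómez et al.; the graph and parameters are illustrative:

```python
# Discrete-time SIS prevalence via a microscopic Markov-chain iteration on a
# small arbitrary graph. beta and mu are per-step infection and recovery
# probabilities; this simplified variant omits same-step reinfection.
ADJ = {0: (1, 2), 1: (0, 2, 3), 2: (0, 1), 3: (1,)}

def mmca_step(p, beta, mu):
    new_p = {}
    for i, nbrs in ADJ.items():
        q = 1.0
        for j in nbrs:
            q *= 1.0 - beta * p[j]   # q: prob. node i receives no infection
        new_p[i] = (1.0 - p[i]) * (1.0 - q) + p[i] * (1.0 - mu)
    return new_p

def prevalence(beta=0.3, mu=0.2, steps=500):
    """Iterate to (near) the fixed point and return average infection prob."""
    p = {i: 0.1 for i in ADJ}
    for _ in range(steps):
        p = mmca_step(p, beta, mu)
    return sum(p.values()) / len(p)
```

    The effective degree approach of the abstract additionally tracks the joint state of a node and the states of its neighbors, which is what improves accuracy near the threshold and for SIR.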

  1. Non-homogeneous Markov process models with informative observations with an application to Alzheimer's disease.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2011-05-01

    Identifying risk factors for transition rates among normal cognition, mild cognitive impairment, dementia and death in an Alzheimer's disease study is very important. It is known that transition rates among these states are strongly time dependent. While Markov process models are often used to describe these disease progressions, the literature mainly focuses on time homogeneous processes, and limited tools are available for dealing with non-homogeneity. Further, patients may choose when they want to visit the clinics, which creates informative observations. In this paper, we develop methods to deal with non-homogeneous Markov processes through time scale transformation when observation times are pre-planned with some observations missing. Maximum likelihood estimation via the EM algorithm is derived for parameter estimation. Simulation studies demonstrate that the proposed method works well under a variety of situations. An application to the Alzheimer's disease study identifies that there is a significant increase in transition rates as a function of time. Furthermore, our models reveal that a non-ignorable missingness mechanism is perhaps reasonable. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

    International Nuclear Information System (INIS)

    Costa, O. L. V.; Dufour, F.

    2011-01-01

    This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated to a small parameter ε>0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3) the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.

  3. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

    Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.

  4. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  5. Combining experimental and simulation data of molecular processes via augmented Markov models.

    Science.gov (United States)

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few [Formula: see text], which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force-field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.

  6. Data-Driven Markov Decision Process Approximations for Personalized Hypertension Treatment Planning

    Directory of Open Access Journals (Sweden)

    Greggory J. Schell PhD

    2016-10-01

    Full Text Available Background: Markov decision process (MDP) models are powerful tools. They enable the derivation of optimal treatment policies but may incur long computational times and generate decision rules that are challenging to interpret by physicians. Methods: In an effort to improve usability and interpretability, we examined whether Poisson regression can approximate optimal hypertension treatment policies derived by an MDP for maximizing a patient’s expected discounted quality-adjusted life years. Results: We found that our Poisson approximation to the optimal treatment policy matched the optimal policy in 99% of cases. This high accuracy translates to nearly identical health outcomes for patients. Furthermore, the Poisson approximation results in 104 additional quality-adjusted life years per 1000 patients compared to the Seventh Joint National Committee’s treatment guidelines for hypertension. The comparative health performance of the Poisson approximation was robust to the cardiovascular disease risk calculator used and calculator calibration error. Limitations: Our results are based on Markov chain modeling. Conclusions: Poisson model approximation for blood pressure treatment planning has high fidelity to optimal MDP treatment policies, which can improve usability and enhance transparency of more personalized treatment policies.

  7. Genotype-Specific Measles Transmissibility: A Branching Process Analysis.

    Science.gov (United States)

    Ackley, Sarah F; Hacker, Jill K; Enanoria, Wayne T A; Worden, Lee; Blumberg, Seth; Porco, Travis C; Zipprich, Jennifer

    2018-04-03

    Substantial heterogeneity in measles outbreak sizes may be due to genotype-specific transmissibility. Using a branching process analysis, we characterize differences in measles transmission by estimating the association between genotype and the reproduction number R among postelimination California measles cases during 2000-2015 (400 cases, 165 outbreaks). Assuming a negative binomial secondary case distribution, we fit a branching process model to the distribution of outbreak sizes using maximum likelihood and estimated the reproduction number R for a multigenotype model. Genotype B3 is found to be significantly more transmissible than other genotypes (P = .01) with an R of 0.64 (95% confidence interval [CI], .48-.71), while the R for all other genotypes combined is 0.43 (95% CI, .28-.54). This result is robust to excluding the 2014-2015 outbreak linked to Disneyland theme parks (referred to as "outbreak A" for conciseness and clarity) (P = .04) and modeling genotype as a random effect (P = .004 including outbreak A and P = .02 excluding outbreak A). This result was not accounted for by season of introduction, age of index case, or vaccination of the index case. The R for outbreaks with a school-aged index case is 0.69 (95% CI, .52-.78), while the R for outbreaks with a non-school-aged index case is 0.28 (95% CI, .19-.35), but this cannot account for differences between genotypes. Variability in measles transmissibility may have important implications for measles control; the vaccination threshold required for elimination may not be the same for all genotypes or age groups.
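
    The core computation, outbreak sizes of a subcritical branching process with negative binomial offspring, can be sketched by simulation. The reproduction number 0.64 is taken from the abstract; the dispersion parameter k = 0.5 is an assumed illustrative value, since the abstract does not report one:

```python
import random
import math

def poisson(lam, rng):
    """Knuth's inversion sampler (adequate for the small means used here)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def nb_offspring(r, k, rng):
    """Negative-binomial secondary cases (mean r, dispersion k), drawn as a
    gamma-Poisson mixture."""
    return poisson(rng.gammavariate(k, r / k), rng)

def outbreak_size(r, k, rng, cap=100000):
    """Total cases descending from one index case, generation by generation."""
    size = active = 1
    while active and size < cap:
        new = sum(nb_offspring(r, k, rng) for _ in range(active))
        size += new
        active = new
    return size

def mean_outbreak(r=0.64, k=0.5, n=2000, seed=0):
    # For subcritical r, the theoretical mean outbreak size is 1 / (1 - r).
    rng = random.Random(seed)
    return sum(outbreak_size(r, k, rng) for _ in range(n)) / n
```

    Fitting in the paper works in the opposite direction: the likelihood of the observed outbreak-size distribution is maximized over r (and genotype-specific variants of it) rather than sizes being simulated from a known r.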

  8. A Novel Analytical Model for Network-on-Chip using Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    WANG, J.

    2011-02-01

    Network-on-Chip (NoC) communication architecture is proposed to resolve the bottleneck of multi-processor communication in a single chip. In this paper, a performance analytical model using a Semi-Markov Process (SMP) is presented to obtain NoC performance. More precisely, given the related parameters, the SMP is used to describe the behavior of each channel, and the header flit routing time on each channel can be calculated by analyzing the SMP. Then, the average packet latency in the NoC can be calculated. The accuracy of our model is illustrated through simulation. Indeed, the experimental results show that the proposed model can be used to obtain NoC performance, and it performs better than state-of-the-art models. Therefore, our model can be used as a useful tool to guide the NoC design process.

  9. Numerical construction of the p(fold) (committor) reaction coordinate for a Markov process.

    Science.gov (United States)

    Krivov, Sergei V

    2011-10-06

    To simplify the description of a complex multidimensional dynamical process, one often projects it onto a single reaction coordinate. In protein folding studies, the folding probability p(fold) is an optimal reaction coordinate which preserves many important properties of the dynamics. The construction of the coordinate is difficult. Here, an efficient numerical approach to construct the p(fold) reaction coordinate for a Markov process (satisfying the detailed balance) is described. The coordinate is obtained by optimizing parameters of a chosen functional form to make a generalized cut-based free energy profile the highest. The approach is illustrated by constructing the p(fold) reaction coordinate for the equilibrium folding simulation of FIP35 protein reported by Shaw et al. (Science 2010, 330, 341-346).
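    For a discrete-state Markov chain, the committor p(fold) can also be obtained directly by solving a linear system: it is 0 on the unfolded set, 1 on the folded set, and harmonic (q = Pq) in between. A minimal sketch on a toy five-state chain (the transition matrix here is an invented illustration, not related to the FIP35 data):

    ```python
    import numpy as np

    # 5-state chain: state 0 = unfolded (absorbing A), state 4 = folded (absorbing B)
    P = np.array([
        [1.0, 0.0, 0.0, 0.0, 0.0],
        [0.3, 0.4, 0.3, 0.0, 0.0],
        [0.0, 0.3, 0.4, 0.3, 0.0],
        [0.0, 0.0, 0.3, 0.4, 0.3],
        [0.0, 0.0, 0.0, 0.0, 1.0],
    ])
    interior = [1, 2, 3]
    # committor solves (I - P_II) q_I = P_IB with q = 0 on A and q = 1 on B
    P_II = P[np.ix_(interior, interior)]
    b = P[interior, -1]                      # one-step probability of entering B
    q_int = np.linalg.solve(np.eye(len(interior)) - P_II, b)
    q = np.concatenate(([0.0], q_int, [1.0]))
    # for this symmetric lazy walk the committor is linear: [0, 0.25, 0.5, 0.75, 1]
    ```

    The optimization over a functional form described in the abstract becomes necessary when the state space is too large for such a direct solve.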

  10. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  11. First Passage Moments of Finite-State Semi-Markov Processes

    Energy Technology Data Exchange (ETDEWEB)

    Warr, Richard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cordeiro, James [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States)

    2014-03-31

    In this paper, we discuss the computation of first-passage moments of a regular time-homogeneous semi-Markov process (SMP) with a finite state space to certain of its states that possess the property of universal accessibility (UA). A UA state is one which is accessible from any other state of the SMP, but which may or may not connect back to one or more other states. An important characteristic of UA is that it is the state-level version of the oft-invoked process-level property of irreducibility. We adapt existing results for irreducible SMPs to the derivation of an analytical matrix expression for the first passage moments to a single UA state of the SMP. In addition, consistent point estimators for these first passage moments, together with relevant R code, are provided.
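    In the Markov-chain special case (unit holding times), the mean first passage time to a single universally accessible state reduces to one linear solve over the non-target states; for a semi-Markov process the vector of ones below would be replaced by the mean holding times. The three-state transition matrix is an invented illustration:

    ```python
    import numpy as np

    # mean first-passage time to state 2 of a 3-state chain
    P = np.array([
        [0.5, 0.3, 0.2],
        [0.2, 0.5, 0.3],
        [0.1, 0.4, 0.5],
    ])
    rest = [0, 1]                                   # non-target states
    Q = P[np.ix_(rest, rest)]                       # transitions among them
    # solve (I - Q) m = 1: m[i] = expected steps from state i to state 2
    m = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
    ```

    Higher first-passage moments follow from the same matrix (I − Q) by repeated solves, which is the structure the analytical expression in the paper exploits.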

  12. Modeling treatment of ischemic heart disease with partially observable Markov decision processes.

    Science.gov (United States)

    Hauskrecht, M; Fraser, H

    1998-01-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of Partially observable Markov decision processes (POMDPs) developed and used in operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.

  13. Planning treatment of ischemic heart disease with partially observable Markov decision processes.

    Science.gov (United States)

    Hauskrecht, M; Fraser, H

    2000-03-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time. This is mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease (IHD), and demonstrate the modeling advantages of the framework over standard decision formalisms.

  14. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    Science.gov (United States)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
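    The piecewise idea can be sketched as follows: instead of sampling one weather state per time step, random-length segments of the historical series are stitched together, restarting each segment at a position whose state matches the last emitted one. The segment-length range and the exact matching rule below are assumptions for illustration, not the paper's calibrated choices.

    ```python
    import numpy as np

    def piecewise_markov_series(history, n_out, lo=6, hi=48, rng=None):
        """Stitch random-length pieces (lo..hi-1 steps) of a discretised
        weather series, joining pieces on matching states."""
        rng = rng if rng is not None else np.random.default_rng()
        out = []
        i = int(rng.integers(0, len(history) - hi))
        while len(out) < n_out:
            out.extend(history[i:i + int(rng.integers(lo, hi))])
            # restart where the historical state equals the last emitted state
            matches = np.flatnonzero(history[:-hi] == out[-1])
            i = int(rng.choice(matches)) if len(matches) else int(rng.integers(0, len(history) - hi))
        return np.array(out[:n_out])

    rng = np.random.default_rng(1)
    hist = rng.integers(0, 4, size=2000)     # toy discretised weather states 0..3
    synth = piecewise_markov_series(hist, 500, rng=rng)
    ```

    Because whole segments are copied, within-segment durations of weather windows are preserved by construction, which is the property the abstract emphasises.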

  15. Dynamic Request Routing for Online Video-on-Demand Service: A Markov Decision Process Approach

    Directory of Open Access Journals (Sweden)

    Jianxiong Wan

    2014-01-01

    We investigate the request routing problem in a CDN-based Video-on-Demand system. We model the system as a controlled queueing system including a dispatcher and several edge servers. The system is formulated as a Markov decision process (MDP). Since the MDP formulation suffers from the so-called "curse of dimensionality" problem, we then develop a greedy heuristic algorithm, which is simple and can be implemented online, to approximately solve the MDP model. However, we do not know how far it deviates from the optimal solution. To address this problem, we further aggregate the state space of the original MDP model and use the bounded-parameter MDP (BMDP) to reformulate the system. This allows us to obtain a suboptimal solution with a known performance bound. The effectiveness of the two approaches is evaluated in a simulation study.
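    The underlying MDP machinery can be illustrated with a generic value-iteration sketch on a toy dispatch model. The states, transition matrices, and rewards below are invented for illustration; they are not the paper's queueing model:

    ```python
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """P[a] is the transition matrix under action a, R[a] the reward vector."""
        V = np.zeros(P.shape[1])
        while True:
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
            V_new = Q.max(axis=0)
            if np.abs(V_new - V).max() < tol:
                return V_new, Q.argmax(axis=0)      # optimal value and policy
            V = V_new

    # toy dispatcher: 2 actions (route to server 0 or 1), 3 congestion states
    P = np.array([
        [[0.7, 0.3, 0.0], [0.2, 0.6, 0.2], [0.0, 0.4, 0.6]],
        [[0.5, 0.5, 0.0], [0.1, 0.5, 0.4], [0.0, 0.2, 0.8]],
    ])
    R = np.array([[1.0, 0.5, 0.0], [0.8, 0.6, 0.1]])
    V, policy = value_iteration(P, R)
    ```

    The "curse of dimensionality" the abstract mentions arises because the state vector of a real dispatcher enumerates every queue length, so P grows exponentially; the BMDP aggregation replaces exact entries of P with intervals.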

  16. Sieve estimation in a Markov illness-death process under dual censoring.

    Science.gov (United States)

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials.

  17. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

    Technical reliability plays an important role among the factors affecting the power output of a wind farm. The reliability is determined by the internal collection grid topology and the reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A quantitative measure of wind farm reliability can be the probability distribution of combinations of operating and failed states of the farm’s wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to an external power grid, which means the availability of the wind turbine and other equipment necessary for the power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and the reliability of individual farm components on the farm reliability, and for determining the expected farm output power with consideration of the reliability. This knowledge may be useful in an analysis of power generation reliability in power systems. The paper presents probabilistic models that quantify the wind farm reliability taking into account the above-mentioned technical factors. To formulate the reliability models, Bayesian networks and semi-Markov processes were used. Using Bayesian networks, the wind farm structural reliability was mapped, as well as the quantitative characteristics describing equipment reliability. To determine these characteristics, semi-Markov processes were used. The paper presents an example calculation of: (i) the probability distribution of the combination of both operating and failed states of four wind turbines included in the wind farm, and (ii) the expected wind farm output power with consideration of its reliability.

  18. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Science.gov (United States)

    Mo Zhou; Joseph Buongiorno

    2011-01-01

    Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...

  19. The impulse cutoff an entropy functional measure on trajectories of Markov diffusion process integrating in information path functional

    OpenAIRE

    Lerner, Vladimir S.

    2012-01-01

    The impulses cutting the entropy functional (EF) measure on trajectories of a Markov diffusion process integrate the information path functional (IPF), composing discrete information Bits extracted from the observed random process. Each cut brings memory of the cut entropy, which provides both a reduction of the process entropy and a discrete unit of the cut entropy, a Bit. Consequently, information is memorized entropy, cut from random observations of interacting processes. The origin of information ...

  20. Mapping absorption processes onto a Markov chain, conserving the mean first passage time

    International Nuclear Information System (INIS)

    Biswas, Katja

    2013-01-01

    The dynamics of a multidimensional system is projected onto a discrete-state master equation using the transition rates W(k → k′; t, t + dt) between a set of states {k} represented by the regions {ζ_k} in phase or discrete state space. Depending on the dynamics Γ_i(t) of the original process and the choice of ζ_k, the discretized process can be Markovian or non-Markovian. For absorption processes, it is shown that, irrespective of these properties of the projection, a master equation with time-independent transition rates W̄(k → k′) can be obtained, which conserves the total occupation time of the partitions of the phase or discrete state space of the original process. An expression for the transition probabilities p̄(k′|k) is derived based either on time-discrete measurements {t_i} with variable time stepping Δ_(i+1)i = t_(i+1) − t_i or on theoretical knowledge at continuous times t. This allows computational methods of absorbing Markov chains to be used to obtain the mean first passage time (MFPT) of the system. To illustrate this approach, the procedure is applied to obtain the MFPT for the overdamped Brownian motion of particles subject to dichotomous noise and for the escape from an entropic barrier. The high accuracy of the simulation results confirms the theory. (paper)

  1. Clarification of the basic factorization identity for almost semi-continuous latticed Poisson processes on a Markov chain

    Directory of Open Access Journals (Sweden)

    Gerich M. S.

    2012-12-01

    Let {ξ(t), x(t)} be a homogeneous semi-continuous lattice Poisson process on a Markov chain. The jumps of one sign are geometrically distributed, and the jumps of the opposite sign have an arbitrary lattice distribution. For such processes, relations for the components of the two-sided matrix factorization are established. These relations define the moment generating functions for the extrema of the process and their complements.

  2. Markov-CA model using analytical hierarchy process and multiregression technique

    International Nuclear Information System (INIS)

    Omar, N Q; Sanusi, S A M; Hussin, W M W; Samat, N; Mohammed, K S

    2014-01-01

    The unprecedented increase in population and the rapid rate of urbanisation have led to extensive land-use changes. Cellular automata (CA) are increasingly used to simulate a variety of urban dynamics. This paper introduces a new CA model that integrates multiple regression and multi-criteria evaluation to improve the representation of the CA transition rule. The multi-criteria evaluation is implemented by utilising data on the environmental and socioeconomic factors in the study area to produce suitability maps (SMs) using the analytical hierarchy process, a well-known method. Suitability maps were generated for the period from 1984 to 2010 under different decision-making scenarios, which condition the next step of CA generation, and compared in order to find the best maps based on the coefficient of determination (R²). This comparison can help stakeholders make better decisions. The resulting suitability map then provides a predefined transition rule for the final step of the CA model. The approach used in this study highlights a mechanism for monitoring and evaluating land-use and land-cover changes in Kirkuk city, Iraq, owing to changes in government structures, wars, and an economic blockade over the past decades. The present study asserts the high applicability and flexibility of the Markov-CA model. The results show that the model and its interrelated concepts perform rather well.

  3. Analyses of Markov decision process structure regarding the possible strategic use of interacting memory systems

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-12-01

    Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previously shown that when tasks are written mathematically as a form of partially observable Markov decision processes, the structure of the tasks provides information regarding the possible utility of certain memory systems. These previous analyses dealt with the disambiguation problem: given a specific ambiguous observation of the environment, is there information provided by a given memory strategy that can disambiguate that observation to allow a correct decision? Here we extend this approach to cases where multiple memory systems can be strategically combined in different ways. Specifically, we analyze the disambiguation arising from three ways by which episodic-like memory retrieval might be cued (by another episodic-like memory, by a semantic association, or by working memory for some earlier observation). We also consider the disambiguation arising from holding earlier working memories, episodic-like memories or semantic associations in working memory. From these analyses we can begin to develop a quantitative hierarchy among memory systems in which stimulus-response memories and semantic associations provide no disambiguation while the episodic memory system provides the most flexible

  4. Markov model of fatigue of a composite material with the poisson process of defect initiation

    Science.gov (United States)

    Paramonov, Yu.; Chatys, R.; Andersons, J.; Kleinhofs, M.

    2012-05-01

    As a development of the model where only one weak microvolume (WMV) and only a pulsating cyclic loading are considered, in the current version of the model, we take into account the presence of several weak sites where fatigue damage can accumulate and a loading with an arbitrary (but positive) stress ratio. The Poisson process of initiation of WMVs is considered, whose rate depends on the size of a specimen. The cumulative distribution function (cdf) of the fatigue life of every individual WMV is calculated using the Markov model of fatigue. For the case where this function is approximated by a lognormal distribution, a formula for calculating the cdf of fatigue life of the specimen (modeled as a chain of WMVs) is obtained. Only a pulsating cyclic loading was considered in the previous version of the model. Now, using the modified energy method, a loading cycle with an arbitrary stress ratio is "transformed" into an equivalent cycle with some other stress ratio. In such a way, the entire probabilistic fatigue diagram for any stress ratio with a positive cycle stress can be obtained. Numerical examples are presented.

  5. Composition of web services using Markov decision processes and dynamic programming.

    Science.gov (United States)

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
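    The three solvers the abstract compares can be illustrated with a generic policy-iteration sketch, the method the experiments found most efficient. The toy composition problem below (three abstract states, two candidate services per state, QoS-style rewards) is invented for illustration and is far smaller than the 100,000-service instances reported:

    ```python
    import numpy as np

    def policy_iteration(P, R, gamma=0.9):
        """P[a] transition matrices, R[a] reward vectors; returns (policy, V)."""
        n_actions, n = P.shape[0], P.shape[1]
        policy = np.zeros(n, dtype=int)
        while True:
            # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
            P_pi = P[policy, np.arange(n)]
            R_pi = R[policy, np.arange(n)]
            V = np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)
            # policy improvement: act greedily with respect to V
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
            new_policy = Q.argmax(axis=0)
            if np.array_equal(new_policy, policy):
                return policy, V
            policy = new_policy

    P = np.array([
        [[0.8, 0.2, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
        [[0.5, 0.4, 0.1], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
    ])
    R = np.array([[0.2, 0.3, 0.0], [0.5, 0.6, 0.0]])   # state 2 is terminal
    policy, V = policy_iteration(P, R)
    ```

    The exact evaluation step is what makes policy iteration converge in few iterations, consistent with the abstract's finding that it needed the fewest iterations among the methods compared.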

  6. Robust path planning for flexible needle insertion using Markov decision processes.

    Science.gov (United States)

    Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong

    2018-05-11

    The flexible needle has the potential to accurately navigate to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under the circumstance of complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. The method then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty issues in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer the flexible needle within soft phantom tissues and achieves high adaptability in computer simulation.

  7. A test of multiple correlation temporal window characteristic of non-Markov processes

    Science.gov (United States)

    Arecchi, F. T.; Farini, A.; Megna, N.

    2016-03-01

    We introduce a sensitive test of memory effects in successive events. The test consists of a combination K of binary correlations at successive times. K decays monotonically from K = 1 for uncorrelated events, as in a Markov process; for a monotonic memory fading, K < 1. We find a K > 1 temporal window in cognitive tasks consisting of the visual identification of the front face of the Necker cube after a previous presentation of the same. We speculate that memory effects provide a temporal window with K > 1, and this experiment could be a possible first step towards a better comprehension of this phenomenon. The K > 1 behaviour is maximal at an inter-measurement time τ around 2 s, with inter-subject differences. The K > 1 behaviour persists over a time window of 1 s around τ; outside this window the K < 1 behaviour of memory fading is recovered. The K > 1 window in pairs of successive perceptions suggests that, at variance with single visual stimuli eliciting a suitable response, a pair of stimuli shortly separated in time displays mutual correlations.

  8. Strategic level proton therapy patient admission planning: a Markov decision process modeling approach.

    Science.gov (United States)

    Gedik, Ridvan; Zhang, Shengfan; Rainwater, Chase

    2017-06-01

    A relatively new consideration in proton therapy planning is the requirement that the mix of patients treated from different categories satisfy desired mix percentages. Deviations from these percentages and their impacts on operational capabilities are of particular interest to healthcare planners. In this study, we investigate intelligent ways of admitting patients to a proton therapy facility that maximize the total expected number of treatment sessions (fractions) delivered to patients in a planning period with stochastic patient arrivals and penalize the deviation from the patient mix restrictions. We propose a Markov Decision Process (MDP) model that provides very useful insights in determining the best patient admission policies in the case of an unexpected opening in the facility (i.e., no-shows, appointment cancellations, etc.). In order to overcome the curse of dimensionality for larger and more realistic instances, we propose an aggregate MDP model that is able to approximate optimal patient admission policies using the worded weight aggregation technique. Our models are applicable to healthcare treatment facilities throughout the United States, but are motivated by collaboration with the University of Florida Proton Therapy Institute (UFPTI).

  9. Performance Evaluation and Optimal Management of Distance-Based Registration Using a Semi-Markov Process

    Directory of Open Access Journals (Sweden)

    Jae Joon Suh

    2017-01-01

    We consider distance-based registration (DBR), which is a kind of dynamic location registration scheme in a mobile communication network. In the DBR, the location of a mobile station (MS) is updated when it enters a base station at least a specified distance away from the base station where the location registration for the MS was done last. In this study, we first investigate the existing performance-evaluation methods for the DBR with implicit registration (DBIR), presented to improve the performance of the DBR, and point out some problems of these evaluation methods. We propose a new performance-evaluation method for the DBIR scheme using a semi-Markov process (SMP) which can resolve the controversial issues of the existing methods. The numerical results obtained with the proposed SMP model are compared with those from previous models. It is shown that the SMP model should be considered to obtain an accurate performance evaluation of the DBIR scheme.

  10. Balancing Long Lifetime and Satisfying Fairness in WBAN Using a Constrained Markov Decision Process

    Directory of Open Access Journals (Sweden)

    Yingqi Yin

    2015-01-01

    As an important part of the Internet of Things (IoT) and a special case of device-to-device (D2D) communication, the wireless body area network (WBAN) has gradually become a focus of attention. Since a WBAN is a body-centered network, the energy of its sensor nodes is strictly constrained, as they are supplied by batteries with limited power. In each data collection, only one sensor node is scheduled to transmit its measurements directly to the access point (AP) through the fading channel. We formulate the problem of dynamically choosing which sensor should communicate with the AP to maximize network lifetime under a fairness constraint as a constrained Markov decision process (CMDP). The optimal lifetime and optimal policy are obtained via the Bellman equation in dynamic programming. The proposed algorithm defines the limiting performance in WBAN lifetime under different degrees of fairness constraints. Due to the large implementation overhead of acquiring global channel state information (CSI), we put forward a distributed scheduling algorithm that adopts local CSI, which saves network overhead and simplifies the algorithm. It was demonstrated via simulation that this scheduling algorithm can allocate time slots reasonably under different channel conditions to balance network lifetime and fairness.

  11. A Markov decision process model for the optimal dispatch of military medical evacuation assets.

    Science.gov (United States)

    Keneally, Sean K; Robbins, Matthew J; Lunday, Brian J

    2016-06-01

    We develop a Markov decision process (MDP) model to examine aerial military medical evacuation (MEDEVAC) dispatch policies in a combat environment. The problem of deciding which aeromedical asset to dispatch to each service request is complicated by the threat conditions at the service locations and the priority class of each casualty event. We assume requests for MEDEVAC support arrive sequentially, with the location and the priority of each casualty known upon initiation of the request. The United States military uses a 9-line MEDEVAC request system to classify casualties as being one of three priority levels: urgent, priority, and routine. Multiple casualties can be present at a single casualty event, with the highest priority casualty determining the priority level for the casualty event. Moreover, an armed escort may be required depending on the threat level indicated by the 9-line MEDEVAC request. The proposed MDP model indicates how to optimally dispatch MEDEVAC helicopters to casualty events in order to maximize steady-state system utility. The utility gained from servicing a specific request depends on the number of casualties, the priority class for each of the casualties, and the locations of both the servicing ambulatory helicopter and casualty event. Instances of the dispatching problem are solved using a relative value iteration dynamic programming algorithm. Computational examples are used to investigate optimal dispatch policies under different threat situations and armed escort delays; the examples are based on combat scenarios in which United States Army MEDEVAC units support ground operations in Afghanistan.

  12. Markov decision processes and the belief-desire-intention model bridging the gap for autonomous agents

    CERN Document Server

    Simari, Gerardo I

    2011-01-01

    In this work, we provide a treatment of the relationship between two models that have been widely used in the implementation of autonomous agents: the Belief-Desire-Intention (BDI) model and Markov Decision Processes (MDPs). We start with an informal description of the relationship, identifying the common features of the two approaches and the differences between them. Then we hone our understanding of these differences through an empirical analysis of the performance of both models on the TileWorld testbed. This allows us to show that even though the MDP model displays consistently better behavior than the BDI model for small worlds, this is not the case when the world becomes large and the MDP model cannot be solved exactly. Finally, we present a theoretical analysis of the relationship between the two approaches, identifying mappings that allow us to extract a set of intentions from a policy (a solution to an MDP), and to extract a policy from a set of intentions.

  13. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.

  14. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    Science.gov (United States)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes and such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method is more accurate in describing the behavior of the population when compared to the observed values in a direct simulation of the Markov chain.
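The open-population scheme in this record can be sketched with a toy version: a fixed transition matrix moves existing members between classes each period while new members enter the first class. The class labels, matrix entries, and constant mean inflow below are all hypothetical (the paper fits ARIMA/SARMA models for the inflow instead of the constant used here):

```python
import numpy as np

# Hypothetical credit classes: 0 = current, 1 = delinquent, 2 = defaulted (absorbing).
# Rows of P sum to 1; the values are illustrative, not estimated from any portfolio.
P = np.array([[0.90, 0.08, 0.02],
              [0.40, 0.45, 0.15],
              [0.00, 0.00, 1.00]])

counts = np.zeros(3)
inflow = 100.0                    # mean monthly inflow (stand-in for the ARIMA model)
for month in range(120):
    counts = counts @ P           # existing members move between classes
    counts[0] += inflow           # new members enter the "current" class
# Because each row of P sums to 1, the total population equals total inflow to date.
```

With an absorbing "defaulted" class, mass steadily accumulates there while the active classes approach a flow equilibrium set by the inflow.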

  15. Nuclide transport of decay chain in the fractured rock medium: a model using continuous time Markov process

    International Nuclear Information System (INIS)

    Younmyoung Lee; Kunjai Lee

    1995-01-01

    A model using a continuous time Markov process for nuclide transport of a decay chain of arbitrary length in the fractured rock medium has been developed. Treating the fracture in the rock matrix as a finite number of compartments, the transition probabilities for a nuclide are obtained from the transition intensities between and out of the compartments using the Chapman-Kolmogorov equation, from which the expectation and the variance of the nuclide distribution in the fractured rock medium can be obtained. A comparison between the continuous time Markov process model and available analytical solutions for the nuclide transport of three decay chains without rock matrix diffusion shows comparatively good agreement. Fits to experimental breakthrough curves obtained with nonsorbing materials such as NaLS and uranine in the artificial fractured rock are also presented. (author)
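The compartment idea can be sketched numerically: with a generator matrix Q of transition intensities, the Chapman-Kolmogorov equation gives the transition matrix P(t) = exp(Qt). The three-compartment chain, the rate, and the standard uniformization routine below are illustrative assumptions, not the paper's actual geometry or solver:

```python
import numpy as np

# Toy fracture: 3 compartments in series plus an absorbing "exit" state.
lam = 0.5                                  # hypothetical transition intensity (1/yr)
Q = np.array([[-lam,  lam,  0.0, 0.0],
              [ 0.0, -lam,  lam, 0.0],
              [ 0.0,  0.0, -lam, lam],
              [ 0.0,  0.0,  0.0, 0.0]])    # generator: rows sum to 0

def transition_matrix(Q, t, K=200):
    """P(t) = expm(Q t) via uniformization (Poisson-weighted powers)."""
    n = Q.shape[0]
    mu = max(-Q.diagonal())                # uniformization rate
    A = np.eye(n) + Q / mu                 # uniformized jump chain
    term = np.eye(n) * np.exp(-mu * t)
    out = term.copy()
    for k in range(1, K):
        term = term @ A * (mu * t / k)     # next Poisson-weighted term
        out += term
    return out

p0 = np.array([1.0, 0.0, 0.0, 0.0])        # nuclide starts in compartment 1
p_t = p0 @ transition_matrix(Q, 4.0)       # expected distribution at t = 4
```

For this series chain the exit probability has a closed form (an Erlang tail), which makes the sketch easy to check.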

  16. Projected metastable Markov processes and their estimation with observable operator models

    International Nuclear Information System (INIS)

    Wu, Hao; Prinz, Jan-Hendrik; Noé, Frank

    2015-01-01

    The determination of kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem — both in simulation, where the optimal reaction coordinate(s) are generally unknown and are difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on the OOM learning

  17. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    Science.gov (United States)

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
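A Markov-modulated Poisson process of the kind used for availability here is straightforward to simulate: cue events occur at a Poisson rate that depends on a hidden two-state availability chain with exponential dwell times. All rates below are invented for illustration, not estimates from any survey:

```python
import numpy as np

rng = np.random.default_rng(1)
rates  = np.array([1.5, 0.0])   # cue rate (per min): available state vs. deep dive
switch = np.array([0.2, 0.5])   # rate of leaving each availability state (per min)

def simulate_mmpp(T):
    """Return sorted cue times on [0, T] from the two-state MMPP."""
    t, state, events = 0.0, 0, []
    while t < T:
        dwell = min(rng.exponential(1.0 / switch[state]), T - t)
        # within one dwell, cues form an ordinary Poisson process at rates[state]
        n = rng.poisson(rates[state] * dwell)
        events.extend(t + np.sort(rng.uniform(0.0, dwell, n)))
        t += dwell
        state = 1 - state               # alternate between the two hidden states
    return events

cues = simulate_mmpp(60.0)
```

Because cues only occur in one of the two states, the event times come in bursts, which is exactly the extra clustering the MMPP buys over a plain Poisson availability model.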

  18. Mathematical model of the loan portfolio dynamics in the form of Markov chain considering the process of new customers attraction

    Science.gov (United States)

    Bozhalkina, Yana

    2017-12-01

    A mathematical model of loan portfolio structure change in the form of a Markov chain is explored. This model considers in one scheme the process of customer attraction, customer selection based on credit score, and loan repayment. The model describes the dynamics of the structure and volume of the loan portfolio, which makes it possible to produce medium-term forecasts of profitability and risk. Within the model, corrective actions of bank management aimed at increasing lending volumes or reducing risk are formalized.

  19. A competitive Markov decision process model for the energy–water–climate change nexus

    International Nuclear Information System (INIS)

    Nanduri, Vishnu; Saavedra-Antolínez, Ivan

    2013-01-01

    Highlights: • Developed a CMDP model for the energy–water–climate change nexus. • Solved the model using a reinforcement learning algorithm. • Study demonstrated on a 30-bus IEEE electric power network using a DCOPF formulation. • Sixty percent drop in CO2 and 40% drop in H2O use when coal is replaced by wind (over 10 years). • Higher profits for nuclear and wind as well as higher LMPs under CO2 and H2O taxes. - Abstract: Drought-like conditions in some parts of the US and around the world are causing water shortages that lead to power failures, becoming a source of concern to independent system operators. Water shortages can cause significant challenges in electricity production and thereby a direct socioeconomic impact on the surrounding region. Our paper presents a new, comprehensive quantitative model that examines the electricity–water–climate change nexus. We investigate the impact of a joint water and carbon tax proposal on the operation of a transmission-constrained power network operating in a wholesale power market setting. We develop a competitive Markov decision process (CMDP) model for the dynamic competition in wholesale electricity markets, and solve the model using reinforcement learning. Several cases, including the impact of different tax schemes, integration of stochastic wind energy resources, and capacity disruptions due to droughts, are investigated. Results from the analysis on the sample power network show that electricity prices increased with the adoption of water and carbon taxes compared with locational marginal prices without taxes. As expected, wind energy integration reduced both CO2 emissions and water usage. Capacity disruptions also caused locational marginal prices to increase. Other detailed analyses and results obtained using the 30-bus IEEE network are discussed in detail.

  20. Learning to maximize reward rate: a model based on semi-Markov decision processes.

    Science.gov (United States)

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finally finds the optimal values.
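The quantity being optimized in such an SMDP is the long-run reward rate: expected reward per trial divided by expected trial duration. The brute-force sketch below (a stand-in for the paper's learning algorithm, with all psychometric numbers invented) picks one threshold index per condition to maximize that rate:

```python
import itertools

# Hypothetical task: two conditions, three candidate thresholds each.
# acc[c][i], rt[c][i]: accuracy and mean decision time (s) under threshold i in condition c.
acc = {0: [0.55, 0.80, 0.88], 1: [0.70, 0.90, 0.97]}
rt  = {0: [0.3, 0.6, 1.0],    1: [0.3, 0.7, 1.2]}
p_cond, reward, iti = [0.5, 0.5], 1.0, 1.0      # condition mix, payoff, inter-trial interval

def reward_rate(choice):
    """Average reward per second given one threshold index per condition (SMDP gain)."""
    r = sum(p_cond[c] * reward * acc[c][choice[c]] for c in (0, 1))
    t = sum(p_cond[c] * (rt[c][choice[c]] + iti) for c in (0, 1))
    return r / t

best = max(itertools.product(range(3), repeat=2), key=reward_rate)
# Note: the most accurate (highest) thresholds are not optimal, because slow
# decisions depress the rate — the model trades accuracy for trial throughput.
```

In these made-up numbers the optimizer settles on a middle threshold for the hard condition and the lowest threshold for the easy one.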

  1. Decision Making under Uncertainty: A Neural Model based on Partially Observable Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Rajesh P N Rao

    2010-11-01

    A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception, but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single optimal estimate of state but on the posterior distribution over states (the belief state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize
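The belief-state machinery described above reduces, for a static two-state discrimination, to repeated Bayesian updating followed by a threshold rule on the posterior. The likelihood values, cue stream, and threshold below are invented for illustration:

```python
import numpy as np

# Two hidden states (e.g., dots drifting left vs. right); likelihoods are assumed.
lik = {"L": np.array([0.6, 0.4]),   # P(noisy "left" cue  | state)
       "R": np.array([0.4, 0.6])}   # P(noisy "right" cue | state)

belief = np.array([0.5, 0.5])       # uniform prior over the two states
for obs in ["R", "R", "L", "R"]:    # an assumed cue stream
    belief = lik[obs] * belief      # Bayes rule: posterior ∝ likelihood × prior
    belief /= belief.sum()

# Commit to an overt action only when the belief is decisive;
# otherwise keep gathering information (sample another cue).
if belief[1] > 0.8:
    action = "choose right"
elif belief[0] > 0.8:
    action = "choose left"
else:
    action = "sample again"
```

After three "R" cues and one "L" cue the posterior favors "right" but has not crossed the 0.8 threshold, so the policy keeps sampling, which is the information-gathering behavior the model formalizes.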

  2. A study on the stochastic model for nuclide transport in the fractured porous rock using continuous time Markov process

    International Nuclear Information System (INIS)

    Lee, Youn Myoung

    1995-02-01

    As a new modeling approach, a stochastic model using a continuous time Markov process for nuclide decay chain transport of arbitrary length in a fractured porous rock medium has been proposed, by which the need for solving a set of partial differential equations corresponding to various sets of side conditions can be avoided. Once the single planar fracture in the rock matrix is represented by a series of a finite number of compartments with region-wise constant parameter values, the medium is continuous with respect to the various processes associated with nuclide transport but discrete in space, and such a geologic system is assumed to have the Markov property. Since a Markov process requires only the present value of the time-dependent random variable to determine its future value, nuclide transport in the medium can then be modeled as a continuous time Markov process. The processes involved in nuclide transport are advective transport due to groundwater flow, diffusion into the rock matrix, adsorption onto the wall of the fracture and within the pores of the rock matrix, and radioactive decay chains. The transition probabilities for a nuclide are obtained from the transition intensities between and out of the compartments using the Chapman-Kolmogorov equation, through which the expectation and the variance of the nuclide distribution for each compartment, or for the fractured rock medium as a whole, can be obtained. Comparisons between the Markov process model developed in this work and available analytical solutions for a one-dimensional layered porous medium, a fractured medium with rock matrix diffusion, and a porous medium with a three-member nuclide decay chain without rock matrix diffusion show comparatively good agreement in all cases. To verify the model developed in this work, another comparative study was also made by fitting the experimental data obtained with NaLS and uranine running in the artificial fractured

  3. An integral equation approach to the interval reliability of systems modelled by finite semi-Markov processes

    International Nuclear Information System (INIS)

    Csenki, A.

    1995-01-01

    The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The implementation environment is the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation

  4. Certified policy synthesis for general Markov decision processes : an application in building automation systems

    NARCIS (Netherlands)

    Haesaert, S.; Cauchi, N.; Abate, A.

    2017-01-01

    In this paper, we present an industrial application of new approximate similarity relations for Markov models, and show that they are key for the synthesis of control strategies. Typically, modern engineering systems are modelled using complex and high-order models which make the correct-by-design

  5. Applying a Markov approach as a Lean Thinking analysis of waste elimination in a Rice Production Process

    Directory of Open Access Journals (Sweden)

    Eldon Glen Caldwell Marin

    2015-01-01

    The Markov chain model was proposed to analyze stochastic events when recursive cycles occur; for example, when rework in a continuous-flow production line affects overall performance. Typically, the analysis of rework and scrap is done from a wasted-material-cost perspective and not from the perspective of wasted capacity that reduces throughput and economic value added (EVA). Also, few cases of this application can be found in agro-industrial production in Latin America, given the complexity of the calculations and the need for robust applications. This scientific work presents the results of a quasi-experimental research approach explaining how to apply DOE methods and Markov analysis to a rice production process located in Central America, evaluating the global effects of a single reduction in rework and scrap in one part of the whole line. The results show that in this case it is possible to evaluate benefits from a global-throughput and EVA perspective and not only from a cost-savings perspective, finding a relationship between operational indicators and corporate performance. However, it was found that it is necessary to analyze the Markov chain configuration with many rework points, and it remains relevant to take into account the effects on takt time and not only scrap costs.

  6. Markov Tail Chains

    OpenAIRE

    janssen, Anja; Segers, Johan

    2013-01-01

    The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In ...

  7. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.

  8. Using Markov Decision Processes with Heterogeneous Queueing Systems to Examine Military MEDEVAC Dispatching Policies

    Science.gov (United States)

    2017-03-23

    Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology. ... dispatching policy and three practitioner-friendly myopic baseline policies. Two computational experiments, a two-level, five-factor screening design and a ... Moreover, an open question exists concerning the best exact solution approach for solving Markov decision problems due to recent advances in performance by

  9. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach.

    Science.gov (United States)

    Bennett, Casey C; Hauser, Kris

    2013-01-01

    In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal

  10. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications Their Use in Reliability and DNA Analysis

    CERN Document Server

    Barbu, Vlad

    2008-01-01

    Semi-Markov processes are much more general and better adapted to applications than Markov processes, because the sojourn time in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn time of the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes
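The distinction the abstract draws is easy to see in simulation: a semi-Markov chain pairs an embedded jump chain with arbitrarily distributed sojourn times. The two-state alternating example below uses Weibull dwells with invented parameters (exponential dwells would recover an ordinary continuous-time Markov chain):

```python
import random

random.seed(3)
next_state = {0: 1, 1: 0}      # embedded jump chain: deterministic alternation
shape = {0: 2.0, 1: 0.7}       # hypothetical Weibull sojourn shapes per state

def simulate(n_jumps):
    """Return (total elapsed time, [(state, dwell), ...]) after n_jumps transitions."""
    state, total, path = 0, 0.0, []
    for _ in range(n_jumps):
        dwell = random.weibullvariate(1.0, shape[state])  # scale 1, state's shape
        path.append((state, dwell))
        total += dwell
        state = next_state[state]
    return total, path

total, path = simulate(10)
```

Estimating such a process from data means estimating both the jump chain and the per-state sojourn distributions, which is exactly the extra work relative to the plain Markov case.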

  11. SDF1 Reduces Interneuron Leading Process Branching through Dual Regulation of Actin and Microtubules

    Science.gov (United States)

    Lysko, Daniel E.; Putt, Mary

    2014-01-01

    Normal cerebral cortical function requires a highly ordered balance between projection neurons and interneurons. During development these two neuronal populations migrate from distinct progenitor zones to form the cerebral cortex, with interneurons originating in the more distant ganglionic eminences. Moreover, deficits in interneurons have been linked to a variety of neurodevelopmental disorders underscoring the importance of understanding interneuron development and function. We, and others, have identified SDF1 signaling as one important modulator of interneuron migration speed and leading process branching behavior in mice, although how SDF1 signaling impacts these behaviors remains unknown. We previously found SDF1 inhibited leading process branching while increasing the rate of migration. We have now mechanistically linked SDF1 modulation of leading process branching behavior to a dual regulation of both actin and microtubule organization. We find SDF1 consolidates actin at the leading process tip by de-repressing calpain protease and increasing proteolysis of branched-actin-supporting cortactin. Additionally, SDF1 stabilizes the microtubule array in the leading process through activation of the microtubule-associated protein doublecortin (DCX). DCX stabilizes the microtubule array by bundling microtubules within the leading process, reducing branching. These data provide mechanistic insight into the regulation of interneuron leading process dynamics during neuronal migration in mice and provides insight into how cortactin and DCX, a known human neuronal migration disorder gene, participate in this process. PMID:24695713

  12. SDF1 reduces interneuron leading process branching through dual regulation of actin and microtubules.

    Science.gov (United States)

    Lysko, Daniel E; Putt, Mary; Golden, Jeffrey A

    2014-04-02

    Normal cerebral cortical function requires a highly ordered balance between projection neurons and interneurons. During development these two neuronal populations migrate from distinct progenitor zones to form the cerebral cortex, with interneurons originating in the more distant ganglionic eminences. Moreover, deficits in interneurons have been linked to a variety of neurodevelopmental disorders underscoring the importance of understanding interneuron development and function. We, and others, have identified SDF1 signaling as one important modulator of interneuron migration speed and leading process branching behavior in mice, although how SDF1 signaling impacts these behaviors remains unknown. We previously found SDF1 inhibited leading process branching while increasing the rate of migration. We have now mechanistically linked SDF1 modulation of leading process branching behavior to a dual regulation of both actin and microtubule organization. We find SDF1 consolidates actin at the leading process tip by de-repressing calpain protease and increasing proteolysis of branched-actin-supporting cortactin. Additionally, SDF1 stabilizes the microtubule array in the leading process through activation of the microtubule-associated protein doublecortin (DCX). DCX stabilizes the microtubule array by bundling microtubules within the leading process, reducing branching. These data provide mechanistic insight into the regulation of interneuron leading process dynamics during neuronal migration in mice and provides insight into how cortactin and DCX, a known human neuronal migration disorder gene, participate in this process.

  13. Materials Process Design Branch. Work Unit Directive (WUD) 54

    National Research Council Canada - National Science Library

    LeClair, Steve

    2002-01-01

    The objectives of the Manufacturing Research WUD 54 are to 1) conduct in-house research to develop advanced materials process design/control technologies to enable more repeatable and affordable manufacturing capabilities and 2...

  14. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others: AVČR and CONACyT (CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  15. On some Filtration Procedure for Jump Markov Process Observed in White Gaussian Noise

    OpenAIRE

    Khas'minskii, Rafail Z.; Lazareva, Betty V.

    1992-01-01

    The importance of the optimal filtration problem for a Markov chain with two states observed in Gaussian white noise (GWN) for many concrete technical problems is well known. The equation for the posterior probability $\pi(t)$ of one of the states was obtained many years ago. The aim of this paper is to study a simple filtration method. It is shown that this simplified filtration is asymptotically efficient in some sense as the diffusion constant of the GWN goes to 0. Some advantages of this proc...

  16. Mathematical Modeling of the Process for Microbial Production of Branched Chained Amino Acids

    Directory of Open Access Journals (Sweden)

    Todorov K.

    2009-12-01

    This article deals with modelling of branched-chain amino acid production. One important branched-chain amino acid is L-valine. The aim of the article is the synthesis of a dynamic unstructured model of a fed-batch fermentation process with intensive droppings for L-valine production. The presented approach includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of specific rates for the main kinetic variables; identification of the specific rates, taking into account the dissolved oxygen tension; establishment and optimisation of a dynamic model of the process; and simulation studies. MATLAB is used as a research environment.

  17. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model

    Science.gov (United States)

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-01

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
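The idea of integrating the flow together with the cumulative hazard H(t) can be sketched on a toy PDMP (exponential-decay flow, jump intensity proportional to the state; all constants invented, and simple Euler steps stand in for the paper's interpolation scheme). The exact inter-jump time solves H(t) = E with E ~ Exp(1):

```python
import math
import random

random.seed(2)
a, k = 1.0, 2.0                  # hypothetical flow rate and intensity constant

def next_jump(V0, dt=1e-3, t_max=50.0):
    """Euler-integrate dV/dt = -aV and H(t) = ∫ k·V ds until H hits an Exp(1) draw."""
    V, H, t = V0, 0.0, 0.0
    threshold = -math.log(random.random())   # Exp(1) threshold for the hazard
    while H < threshold and t < t_max:
        V += -a * V * dt                     # deterministic flow between jumps
        H += k * V * dt                      # cumulative hazard; S(t) = exp(-H(t))
        t += dt
    return t if H >= threshold else None     # None: hazard saturated, no jump occurs

jumps = [next_jump(1.0) for _ in range(20)]
```

Because the hazard here saturates at roughly k·V0/a, a fraction of draws produce no jump at all, which the sampler must handle explicitly.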

  18. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization.

    Science.gov (United States)

    Stifter, Cynthia A; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age using hidden Markov modeling, a time series approach that produces latent states, to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to the two-month inoculation, whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high-intensity crying to no crying, with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad becomes more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.

  19. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    2016-01-01

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
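The role of the offspring mean matrix can be sketched directly: its largest eigenvalue measures how close the cascade is to criticality, and for a subcritical process the expected total progeny follows from (I − M)⁻¹. The two-type matrix below is a hypothetical illustration, not an estimate from the paper's data:

```python
import numpy as np

# Hypothetical offspring mean matrix for two outage types:
# M[i, j] = expected number of type-j outages triggered by one type-i outage.
M = np.array([[0.6, 0.2],
              [0.3, 0.5]])

rho = max(abs(np.linalg.eigvals(M)))   # largest eigenvalue of the mean matrix
# rho < 1: subcritical, cascades die out; rho approaching 1: near criticality.

# Expected total outages of each type from one initial type-0 outage
# (ancestor included): e0 @ (I - M)^(-1), finite because rho < 1.
total = np.array([1.0, 0.0]) @ np.linalg.inv(np.eye(2) - M)
```

Coupling between the types (the off-diagonal entries) raises the largest eigenvalue relative to treating each type alone, which is the interdependence effect the abstract points to.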

  20. The S-Process Branching-Point at 205Pb

    Science.gov (United States)

    Tonchev, Anton; Tsoneva, N.; Bhatia, C.; Arnold, C. W.; Goriely, S.; Hammond, S. L.; Kelley, J. H.; Kwan, E.; Lenske, H.; Piekarewicz, J.; Raut, R.; Rusev, G.; Shizuma, T.; Tornow, W.

    2017-09-01

    Accurate neutron-capture cross sections for radioactive nuclei near the line of beta stability are crucial for understanding s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure because the measurements require both highly radioactive samples and intense neutron sources. We consider photon scattering using monoenergetic and 100% linearly polarized photon beams to obtain the photoabsorption cross section of 206Pb below the neutron separation energy. This observable becomes an essential ingredient in Hauser-Feshbach statistical-model calculations of capture cross sections on 205Pb. The newly obtained photoabsorption information is also used to estimate the Maxwellian-averaged radiative cross section of 205Pb(n,γ)206Pb at 30 keV. The astrophysical impact of this measurement on s-process nucleosynthesis is discussed. This work was performed under the auspices of US DOE by LLNL under Contract DE-AC52-07NA27344.

  1. Post processing of optically recognized text via second order hidden Markov model

    Science.gov (United States)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system for Optical Character Recognition (OCR)-generated text. A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The reason for choosing a second-order HMM is to keep track of bigrams so that the model can represent the system more accurately. Based on experiments with training data of 159,733 characters and a test set of 5,688 characters, the model was able to correct 43.38% of the errors with a precision of 75.34%. However, the precision value indicates that the model introduced some new errors, decreasing the net correction percentage to 26.4%.
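The decoding step in such a postprocessor is typically a Viterbi search for the most likely true-character sequence given the OCR output. A first-order sketch on a hypothetical two-character alphabet; the thesis uses a second-order model, which conditions on the previous two states, but the recursion has the same shape:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden sequence (true characters) given noisy OCR output."""
    n, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)      # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy alphabet: A = character bigram model, B = OCR confusion matrix (invented)
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
obs = [0, 1, 0, 0, 1]
print(viterbi(pi, A, B, obs))
```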

  2. MARKOV PROCESSES IN MODELING LAND USE AND LAND COVER CHANGES IN SINTRA-CASCAIS, PORTUGAL

    Directory of Open Access Journals (Sweden)

    PEDRO CABRAL

    2009-01-01

    In this article, land use and land cover change (LUCC) processes are investigated using remote sensing techniques and Markov chains in the municipalities of Sintra and Cascais (Portugal) between 1989 and 2000. The role of the Sintra-Cascais Natural Park (PNSC) is assessed. The results show that, within the PNSC, present LUCC depends on the immediately preceding land use and cover, following Markovian behaviour. Outside the PNSC, LUCC is random and does not follow a Markovian process. LUCC estimates for the year 2006 are presented for the area within the PNSC. These results reinforce the role of the PNSC as an indispensable tool for preserving the stability of LUCC and guaranteeing its functions.
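The Markovian projection used for such estimates amounts to multiplying the current class shares by an estimated transition matrix. A sketch with invented numbers (three classes, projected one full map interval ahead; shorter horizons such as 2006 are often handled with a fractional or annualized matrix):

```python
import numpy as np

# Hypothetical transition matrix between three land-cover classes
# (urban, forest, agriculture), as would be estimated from two dated maps.
P = np.array([[0.95, 0.02, 0.03],
              [0.10, 0.85, 0.05],
              [0.15, 0.05, 0.80]])

x_2000 = np.array([0.30, 0.45, 0.25])   # class shares observed in 2000

x_next = x_2000 @ P                     # projected shares one interval later
print(x_next, x_next.sum())
```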

  3. Multi-state reliability for pump group in system based on UGF and semi-Markov process

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2012-01-01

    In this paper, the multi-state reliability of a pump group in a nuclear power system is obtained by combining the universal generating function (UGF) with a semi-Markov process. A UGF arithmetic model of multi-state system reliability is studied, and the performance-state probability expression of a multi-state component is derived using semi-Markov theory. A quantitative model is defined to express the performance rate of the system and its components. Availability results from the multi-state and binary-state analysis methods are compared according to whether the performance rate satisfies the demanded value, and the mean instantaneous output performance of the system is also obtained. It is shown that this combined method is effective and feasible, that it can quantify the effect of partial failures on system reliability, and that the multi-state result reveals the conservatism of the reliability value obtained by the binary analysis method. (authors)
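The UGF side of such a combination is a polynomial-style composition of per-component performance distributions (the semi-Markov analysis supplies the state probabilities). A sketch for two identical pumps in parallel, with invented performance rates and probabilities:

```python
from collections import defaultdict

def compose(u1, u2, op):
    """UGF composition: combine two components' {performance: probability}
    maps under a structure function `op` (sum of flows for parallel pumps)."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Hypothetical pump states: full flow 100, degraded 60, failed 0 (m^3/h)
pump = {100: 0.90, 60: 0.07, 0: 0.03}
group = compose(pump, pump, lambda a, b: a + b)   # two pumps in parallel

demand = 120
availability = sum(p for g, p in group.items() if g >= demand)
print(group, availability)
```

The multi-state availability is simply the probability mass on combined performance levels that meet the demand.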

  4. The Specific Features of design and process engineering in branch of industrial enterprise

    Science.gov (United States)

    Sosedko, V. V.; Yanishevskaya, A. G.

    2017-06-01

    The production output of an industrial enterprise is organized through well-established working mechanisms at each stage of the product's life cycle, from initial design documentation to the finished product and its eventual disposal. This article presents a mathematical model of the design and process engineering system in a branch of an industrial enterprise, statistical processing of the results of implementing the model in the branch, and a demonstration of the advantages of applying it at the enterprise. To build the model, the flows of information, orders, parts, and modules among the branch's groups of divisions were classified. Based on an analysis of the divisions' activity and of the flows of data, parts, and documents, a state graph of design and process engineering was constructed; its transitions were described and assigned coefficients. For each state of the graph, the corresponding limiting state probabilities were determined and the Kolmogorov equations were derived. Integrating the Kolmogorov equations yields the probability that the specified divisions and production are active as a function of time at each instant. On the basis of the developed model of a unified system of design, process engineering, and manufacturing, the authors carried out statistical processing of the results of its application and demonstrated its advantages for the enterprise. Studies were also conducted on the workload probability of the branch's services and of third-party contractors (orders received from the branch within a month). The developed model can be applied to determine the activity-state probability of divisions and manufacturing as a function of time, which makes it possible to track the workload across the branches of the enterprise.

  5. Tokunaga and Horton self-similarity for level set trees of Markov chains

    International Nuclear Information System (INIS)

    Zaliapin, Ilia; Kovchegov, Yevgeniy

    2012-01-01

    Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.
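Horton statistics for such trees start from the Horton-Strahler order of each vertex: leaves have order 1, and a parent's order is the maximum child order, plus one if that maximum is attained by at least two children. A small sketch (the tree below is an invented perfect binary tree, not a level set tree of a specific chain):

```python
from collections import Counter

def strahler(children, root):
    """Horton-Strahler order of every node in a rooted tree,
    given a node -> list-of-children map."""
    order, stack = {}, [(root, False)]
    while stack:
        node, expanded = stack.pop()
        kids = children.get(node, [])
        if not kids:
            order[node] = 1
        elif expanded:
            top = sorted((order[k] for k in kids), reverse=True)
            order[node] = top[0] + (1 if len(top) > 1 and top[0] == top[1] else 0)
        else:
            stack.append((node, True))            # revisit after the children
            stack.extend((k, False) for k in kids)
    return order

# Perfect binary tree of depth 3: node i has children 2i and 2i+1
tree = {i: [2 * i, 2 * i + 1] for i in range(1, 8)}
order = strahler(tree, 1)
print(order[1], Counter(order.values()))
```

Counting vertices of each order across sampled trees gives the Horton numbers whose geometric decay defines Horton self-similarity.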

  6. Generalized Boolean logic Driven Markov Processes: A powerful modeling framework for Model-Based Safety Analysis of dynamic repairable and reconfigurable systems

    International Nuclear Information System (INIS)

    Piriou, Pierre-Yves; Faure, Jean-Marc; Lesage, Jean-Jacques

    2017-01-01

    This paper presents a modeling framework that makes it possible to describe, in an integrated manner, the structure of the critical system to be analyzed (using an enriched fault tree), the dysfunctional behavior of its components (by means of Markov processes), and the reconfiguration strategies planned to ensure safety and availability (with Moore machines). The framework was developed from BDMP (Boolean logic Driven Markov Processes), an earlier framework for dynamic repairable systems. First, the contribution is motivated by pinpointing the limitations of BDMP in modeling complex reconfiguration strategies and failures of the control of these strategies. The syntax and semantics of GBDMP (Generalized Boolean logic Driven Markov Processes) are then formally defined; in particular, an algorithm to analyze the dynamic behavior of a GBDMP model is developed. The modeling capabilities of the framework are illustrated on three representative examples. Lastly, qualitative and quantitative analyses of GBDMP models highlight the benefits of the approach.

  7. Effects of s-process branchings on stellar and meteoritic abundances

    International Nuclear Information System (INIS)

    Norman, E.B.; Lesko, K.T.; Crane, S.G.; Larimer, R.M.; Champagne, A.E.

    1985-12-01

    The level scheme and electromagnetic properties of 148Pm have been studied using the 149Sm(d,3He) and 148Nd(p,nγ) reactions. Combining these measurements with estimates of E2/M1 decay branching ratios leads to the tentative conclusion that the ground and isomeric states of 148Pm are in thermal equilibrium during the s-process. The branching at 148Pm then leads to an inferred s-process neutron density of 3 × 10^8 cm^-3.
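For orientation, the classical branching analysis behind such inferences compares the beta-decay rate with the neutron-capture rate λ_n = n_n·σ·v_T. A back-of-the-envelope sketch with illustrative numbers: the cross section is invented, and the terrestrial half-life of 148Pm is used even though the effective stellar decay rate can differ:

```python
import math

# Illustrative classical s-process branching estimate.
# Branching factor toward beta decay: f = lambda_beta / (lambda_beta + lambda_n)
N_N = 3e8               # neutron density, cm^-3 (value inferred in the abstract)
V_T = 2.2e8             # thermal velocity at kT ~ 30 keV, cm/s
SIGMA = 1e-25           # hypothetical capture cross section, cm^2 (100 mb)
T_HALF = 5.37 * 86400   # terrestrial half-life of the 148Pm ground state, s

lambda_n = N_N * SIGMA * V_T          # neutron-capture rate per nucleus, s^-1
lambda_beta = math.log(2) / T_HALF    # beta-decay rate, s^-1

f_beta = lambda_beta / (lambda_beta + lambda_n)
print(f"lambda_n = {lambda_n:.2e}/s, lambda_beta = {lambda_beta:.2e}/s, "
      f"f_beta = {f_beta:.3f}")
```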

  8. Age-dependent branching processes for surveillance of vaccine-preventable diseases with incubation period

    Directory of Open Access Journals (Sweden)

    Marusia N Bojkova

    2010-10-01

    The purpose of this paper is to review the authors' recent results in the area of infectious disease modelling by means of branching stochastic processes. This is a new approach involving age-dependent branching models, which turn out to be more appropriate and flexible for describing the spread of an infection in a given population than discrete-time ones. Concretely, Bellman-Harris and Sevast'yanov branching processes are investigated. It is argued that the proposed models are proper candidates for modelling infectious diseases with an incubation period, such as measles, mumps, and avian flu. It is worth noticing that, in general, the developed methodology is applicable to diseases that follow the so-called SIR (susceptible-infected-removed) scheme in terms of epidemiological models. Two policies for the extra-vaccination level are proposed and compared on the basis of simulation examples.
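One quantity such models share with the simpler Galton-Watson case is the total outbreak-size distribution: the random (incubation) lifetimes of a Bellman-Harris process change the timing of the epidemic but not the distribution of the total number of cases. A simulation sketch with Poisson offspring and an invented reproduction number:

```python
import numpy as np

rng = np.random.default_rng(42)

def total_cases(R0, cap=2000):
    """Total size of one outbreak when each case infects Poisson(R0) others.
    Runs hitting `cap` are treated as major (non-extinct) outbreaks."""
    unresolved = total = 1
    while unresolved and total < cap:
        kids = rng.poisson(R0)
        unresolved += kids - 1
        total += kids
    return total

R0 = 1.5
runs = [total_cases(R0) for _ in range(1000)]
minor_fraction = sum(r < 2000 for r in runs) / len(runs)

q = 0.0                    # extinction probability: smallest root of
for _ in range(300):       # q = exp(R0 * (q - 1)) for Poisson offspring
    q = np.exp(R0 * (q - 1))

print(f"simulated minor-outbreak fraction {minor_fraction:.3f}, theory {q:.3f}")
```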

  9. Phasic Triplet Markov Chains.

    Science.gov (United States)

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched for within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site and carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The phasic triplet Markov chain proposed in this paper overcomes this difficulty by means of an auxiliary underlying process, in accordance with triplet Markov chain theory. Related Bayesian restoration techniques and parameter estimation procedures for the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.

  10. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models, comprising multi-level hierarchical Markov processes and Bayesian updating, have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper, and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions, and its application is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters…

  11. Markov counting and reward processes for analysing the performance of a complex system subject to random inspections

    International Nuclear Information System (INIS)

    Ruiz-Castro, Juan Eloy

    2016-01-01

    In this paper, a discrete complex reliability system subject to internal failures and external shocks is modelled algorithmically. Two types of internal failure are considered: repairable and non-repairable. When a repairable failure occurs, the unit goes to corrective repair. In addition, the unit is subject to external shocks that may produce an aggravation of the internal degradation level, cumulative damage, or extreme failure. When a damage threshold is reached, the unit must be removed. When a non-repairable failure occurs, the device is replaced by a new, identical one. The internal performance and the external damage are partitioned into performance levels. Random inspections are carried out. When an inspection takes place, the internal performance of the system and the damage caused by external shocks are observed, and if necessary the unit is sent to preventive maintenance. If the inspection observes a minor state for the internal performance and/or external damage, these states remain in memory when the unit goes to corrective or preventive maintenance. Transient and stationary analyses are performed. Markov counting and reward processes are developed in computational form to analyse the performance and profitability of the system with and without preventive maintenance. These aspects are implemented computationally with Matlab. - Highlights: • A multi-state device is modelled in an algorithmic and computational form. • The performance is partitioned into multi-states and degradation levels. • Several types of failures, with repair times according to degradation levels. • Preventive maintenance in response to random inspection is introduced. • Performance and profitability are analysed through Markov counting and reward processes.
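The stationary side of such an analysis reduces to the stationary distribution of the underlying chain combined with a reward vector. A discrete-time sketch with invented transition probabilities and per-period rewards:

```python
import numpy as np

# Hypothetical 3-state unit: 0 = full performance, 1 = degraded, 2 = repair
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.60, 0.00, 0.40]])
reward = np.array([100.0, 60.0, -20.0])   # profit per period in each state

# Stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

print(pi, pi @ reward)   # long-run state shares and mean reward per period
```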

  12. Composable Markov Building Blocks

    NARCIS (Netherlands)

    Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.; Prade, H.; Subrahmanian, V.S.

    2007-01-01

    In situations where disjunct parts of the same process are described by their own first-order Markov models and only one model applies at a time (activity in one model coincides with non-activity in the other models), these models can be joined together into one. Under certain conditions, nearly all

  14. Modelling the PCR amplification process by a size-dependent branching process and estimation of the efficiency

    NARCIS (Netherlands)

    Lalam, N.; Jacob, C.; Jagers, P.

    2004-01-01

    We propose a stochastic modelling of the PCR amplification process by a size-dependent branching process starting as a supercritical Bienaymé-Galton-Watson transient phase and then having a saturation near-critical size-dependent phase. This model allows us to estimate the probability of replication
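A size-dependent branching model of this kind can be sketched in a few lines: the per-molecule replication probability is near 1 in the supercritical transient phase and decays toward 0 as the reaction saturates. All numbers below are invented placeholders, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def pcr_cycles(n0, cycles, K=1e7):
    """Size-dependent branching sketch of PCR: each of the n molecules is
    duplicated with probability p(n) that decays near the capacity K."""
    n = n0
    for _ in range(cycles):
        p = 1.0 / (1.0 + n / K)     # efficiency: ~1 early, -> 0 at saturation
        n += rng.binomial(n, p)     # successful copies this cycle
    return n

print(pcr_cycles(1000, 25))   # exponential phase, then slowdown near K
```

Estimating the efficiency p(n) from observed fluorescence trajectories is the inference problem the abstract addresses.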

  15. Fermionic Markov Chains

    OpenAIRE

    Fannes, Mark; Wouters, Jeroen

    2012-01-01

    We study a quantum process that can be considered as a quantum analogue of the classical Markov process. We specifically construct a version of these processes for free fermions. For such free fermionic processes we calculate the entropy density. This can be done either directly using Szegő's theorem for asymptotic densities of functions of Toeplitz matrices, or through an extension of said theorem to rates of functions, which we present in this article.

  16. A Markov decision process for managing habitat for Florida scrub-jays

    Science.gov (United States)

    Johnson, Fred A.; Breininger, David R.; Duncan, Brean W.; Nichols, James D.; Runge, Michael C.; Williams, B. Ken

    2011-01-01

    Florida scrub-jays Aphelocoma coerulescens are listed as threatened under the Endangered Species Act due to loss and degradation of scrub habitat. This study concerned the development of an optimal strategy for the restoration and management of scrub habitat at Merritt Island National Wildlife Refuge, which contains one of the few remaining large populations of scrub-jays in Florida. There are documented differences in the reproductive and survival rates of scrub-jays among discrete classes of scrub height, so we sought a strategy that would maximize the long-term growth rate of the resident scrub-jay population. We used aerial imagery with multistate Markov models to estimate annual transition probabilities among the four scrub-height classes under three possible management actions: scrub restoration (mechanical cutting followed by burning), a prescribed burn, or no intervention. A strategy prescribing the optimal management action for management units exhibiting different proportions of scrub-height classes was derived using dynamic programming. Scrub restoration was the optimal management action only in units dominated by mixed and tall scrub, and burning tended to be the optimal action for intermediate levels of short scrub. The optimal action was to do nothing when the amount of short scrub was greater than 30%, because short scrub mostly transitions to optimal-height scrub (i.e., the state with the highest demographic success of scrub-jays) in the absence of intervention. Monte Carlo simulation of the optimal policy suggested that some form of management would be required every year. We note, however, that estimates of scrub-height transition probabilities were subject to several sources of uncertainty, and so we explored the management implications of alternative sets of transition probabilities. Generally, our analysis demonstrated the difficulty of managing for a species that requires midsuccessional habitat, and suggests that innovative management tools may be needed to help ensure the persistence of scrub-jays at Merritt Island National Wildlife Refuge.
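The dynamic-programming step described above can be sketched as value iteration over a small Markov decision process. The states, actions, transition matrices, and rewards below are invented placeholders, not the paper's estimates:

```python
import numpy as np

# Hypothetical MDP: 3 habitat states, actions with P[a][s, s'] transition
# probabilities and R[a][s] rewards (e.g. expected scrub-jay growth).
P = {
    'nothing': np.array([[0.7, 0.3, 0.0],
                         [0.0, 0.6, 0.4],
                         [0.0, 0.0, 1.0]]),
    'burn':    np.array([[0.9, 0.1, 0.0],
                         [0.5, 0.5, 0.0],
                         [0.2, 0.5, 0.3]]),
    'restore': np.array([[0.8, 0.2, 0.0],
                         [0.6, 0.4, 0.0],
                         [0.7, 0.2, 0.1]]),
}
R = {'nothing': np.array([1.0, 0.5, 0.0]),
     'burn':    np.array([0.8, 0.6, 0.1]),
     'restore': np.array([0.5, 0.4, 0.2])}

gamma, V = 0.95, np.zeros(3)
for _ in range(500):                      # value iteration (Bellman updates)
    V = np.max([R[a] + gamma * P[a] @ V for a in P], axis=0)

policy = [max(P, key=lambda a: (R[a] + gamma * P[a] @ V)[s]) for s in range(3)]
print(V, policy)
```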

  17. On structural properties of the value function for an unbounded jump Markov process with an application to a processor-sharing retrial queue

    NARCIS (Netherlands)

    Bhulai, S.; Brooms, A.C.; Spieksma, F.M.

    2014-01-01

    The derivation of structural properties for unbounded jump Markov processes cannot be done using standard mathematical tools, since the analysis is hindered due to the fact that the system is not uniformizable. We present a promising technique, a smoothed rate truncation method, to overcome the

  18. A Monte Carlo approach to the ship-centric Markov decision process for analyzing decisions over converting a containership to LNG power

    NARCIS (Netherlands)

    Kana, A.A.; Harrison, B.M.

    2017-01-01

    A Monte Carlo approach to the ship-centric Markov decision process (SC-MDP) is presented for analyzing whether a container ship should convert to LNG power in the face of evolving Emission Control Area regulations. The SC-MDP model was originally developed as a means to analyze uncertain,

  19. The Effectiveness Analysis of Waiting Processes in the Different Branches of a Bank by Queue Model

    Directory of Open Access Journals (Sweden)

    Abdullah ÖZÇİL

    2015-06-01

    Despite the appreciable increase in the number of bank branches every year, queues for services do not decrease and have even become part of our daily lives. Minimizing waiting times should therefore be one of branch managers' main goals, and quick, high-quality, customer-oriented service is the most important factor for customer loyalty. In this study, queueing theory, one of the techniques of operations research, is applied: data on customer waiting in queues were collected in six different branches of two banks operating in Denizli, analysed with queueing models, and the average effectiveness of each system was calculated. The branches of the two banks are labelled A1, A2, A3, B1, B2 and B3. The study concludes with advice to the banks that can bring benefits to both staff and customers.
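The standard workhorse for such branch measurements is the M/M/c (Erlang C) model: Poisson arrivals, exponential service, c tellers. A sketch with invented arrival and service rates:

```python
from math import factorial

def erlang_c(lmbda, mu, c):
    """P(wait > 0) and mean queueing delay in an M/M/c queue (lmbda < c*mu)."""
    a = lmbda / mu                      # offered load (Erlangs)
    rho = a / c                         # per-server utilization
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    p_wait = a**c / (factorial(c) * (1 - rho)) * p0   # Erlang C formula
    wq = p_wait / (c * mu - lmbda)      # mean waiting time in the queue
    return p_wait, wq

# Hypothetical branch: 40 customers/hour, each teller serves 15/hour, 3 tellers
print(erlang_c(40, 15, 3))
```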

  1. Constructing Dynamic Event Trees from Markov Models

    International Nuclear Information System (INIS)

    Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood

    2006-01-01

    In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: (a) partitioning the process variable state space into magnitude intervals (cells), (b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and (c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as a benchmark in the literature, with one process variable (liquid level in a tank) and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank.
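The scenario-enumeration idea — walk the discrete-time chain from an initiating state and record every branch, with its probability, until an absorbing failure state or a depth limit is reached — can be sketched as follows, with an invented 4-state transition matrix:

```python
import numpy as np

# Hypothetical 4-state chain: 0 = nominal, 1 = high level, 2 = low level,
# 3 = failure (absorbing). One step = one control cycle.
P = np.array([[0.90, 0.05, 0.04, 0.01],
              [0.60, 0.30, 0.00, 0.10],
              [0.70, 0.00, 0.25, 0.05],
              [0.00, 0.00, 0.00, 1.00]])

def failure_paths(state, depth, prob=1.0, path=None):
    """Yield (path, probability) for every branch reaching the failure
    state 3 within `depth` steps -- the raw material of an event tree."""
    if path is None:
        path = (state,)
    if state == 3:
        yield path, prob
        return
    if depth == 0:
        return
    for nxt, p in enumerate(P[state]):
        if p > 0:
            yield from failure_paths(nxt, depth - 1, prob * p, path + (nxt,))

scenarios = list(failure_paths(0, 3))
print(sorted(scenarios, key=lambda s: -s[1])[:3])   # most likely branches
```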

  2. Using multitype branching processes to quantify statistics of disease outbreaks in zoonotic epidemics.

    Science.gov (United States)

    Singh, Sarabjeet; Schneider, David J; Myers, Christopher R

    2014-03-01

    Branching processes have served as a model for chemical reactions, biological growth processes, and contagion (of disease, information, or fads). Through this connection, these seemingly different physical processes share some common universalities that can be elucidated by analyzing the underlying branching process. In this work we focus on coupled branching processes as a model of infectious diseases spreading from one population to another. An exceedingly important example of such coupled outbreaks are zoonotic infections that spill over from animal populations to humans. We derive several statistical quantities characterizing the first spillover event from animals to humans, including the probability of spillover, the first passage time distribution for human infection, and disease prevalence in the animal population at spillover. Large stochastic fluctuations in those quantities can make inference of the state of the system at the time of spillover difficult. Focusing on outbreaks in the human population, we then characterize the critical threshold for a large outbreak, the distribution of outbreak sizes, and associated scaling laws. These all show a strong dependence on the basic reproduction number in the animal population and indicate the existence of a novel multicritical point with altered scaling behavior. The coupling of animal and human infection dynamics has crucial implications, most importantly allowing for the possibility of large human outbreaks even when human-to-human transmission is subcritical.
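Several of the spillover statistics described above reduce to fixed points of probability generating functions. A sketch for Poisson offspring with invented rates (R_aa animal-to-animal, R_ah animal-to-human):

```python
import numpy as np

# Hypothetical rates: an animal case causes Poisson(R_aa) animal cases and,
# independently, Poisson(R_ah) human spillover infections.
R_aa, R_ah = 1.5, 0.3

def iterate(f, x, n=300):
    for _ in range(n):
        x = f(x)
    return x

# Extinction probability of the animal epidemic: smallest root of q = g(q),
# where g is the Poisson PGF g(q) = exp(R_aa * (q - 1)).
q = iterate(lambda q: np.exp(R_aa * (q - 1)), 0.0)

# Probability the animal chain dies out without ever infecting a human:
# every case must have no human offspring, giving the modified fixed point
# r = exp(R_aa * (r - 1) - R_ah).
r = iterate(lambda r: np.exp(R_aa * (r - 1) - R_ah), 0.0)

print(f"animal extinction prob     q = {q:.3f}")
print(f"no-spillover prob          r = {r:.3f}")
print(f"P(at least one human case) = {1 - r:.3f}")
```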

  4. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  6. Prospects for direct neutron capture measurements on s-process branching point isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, C.; Lerendegui-Marco, J.; Quesada, J.M. [Universidad de Sevilla, Dept. de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Domingo-Pardo, C. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Kaeppeler, F. [Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Palomo, F.R. [Universidad de Sevilla, Dept. de Ingenieria Electronica, Sevilla (Spain); Reifarth, R. [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany)

    2017-05-15

    The neutron capture cross sections of several unstable key isotopes acting as branching points in the s-process are crucial for stellar nucleosynthesis studies, but they are very challenging to measure directly due to the difficult production of sufficient sample material, the high activity of the resulting samples, and the actual (n, γ) measurement, where high neutron fluxes and effective background rejection capabilities are required. At present there are about 21 relevant s-process branching point isotopes whose cross sections have not yet been measured over the neutron energy range of interest for astrophysics. However, the situation is changing with some very recent developments and upcoming technologies. This work introduces three techniques that will change the current paradigm in the field: the use of γ-ray imaging techniques in (n, γ) experiments, the production of moderated neutron beams using high-power lasers, and double capture experiments in Maxwellian neutron beams. (orig.)

  7. On the regularity of the extinction probability of a branching process in varying and random environments

    International Nuclear Information System (INIS)

    Alili, Smail; Rugh, Hans Henrik

    2008-01-01

    We consider a supercritical branching process in a time-dependent environment ξ. We assume that the offspring distributions depend regularly (C^k or real-analytically) on real parameters λ. We show that the extinction probability q_λ(ξ), given the environment ξ, 'inherits' this regularity whenever the offspring distributions satisfy a condition of contraction type. Our proof makes use of the Poincaré metric on the complex unit disc and a real-analytic implicit function theorem.
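
    For a fixed environment, the extinction probability of a branching process is the smallest fixed point of the offspring probability generating function on [0, 1], reached by iterating from 0. A minimal illustration (the offspring law below is an invented example, not taken from the paper):

```python
def extinction_probability(pgf, tol=1e-12, max_iter=10000):
    """Smallest fixed point of the offspring pgf f on [0, 1], obtained by
    iterating q_{n+1} = f(q_n) from q_0 = 0 (monotone convergence)."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Offspring distribution: 0 children w.p. 1/4, 2 children w.p. 3/4 (mean 3/2,
# so the process is supercritical and q < 1).
f = lambda s: 0.25 + 0.75 * s * s
q = extinction_probability(f)   # exact answer is 1/3
```

    Here q solves q = 0.25 + 0.75 q², whose roots are 1/3 and 1; the iteration from 0 converges to the smaller root.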

  8. The McMillan Theorem for Colored Branching Processes and Dimensions of Random Fractals

    Directory of Open Access Journals (Sweden)

    Victor Bakhtin

    2014-12-01

    For the simplest colored branching process, we prove an analog to the McMillan theorem and calculate the Hausdorff dimensions of random fractals defined in terms of the limit behavior of empirical measures generated by finite genetic lines. In this setting, the role of Shannon’s entropy is played by the Kullback–Leibler divergence, and the Hausdorff dimensions are computed by means of the so-called Billingsley–Kullback entropy, defined in the paper.

  9. Simulating the Emergence and Survival of Mutations Using a Self Regulating Multitype Branching Processes

    Directory of Open Access Journals (Sweden)

    Charles J. Mode

    2011-01-01

    It is difficult for an experimenter to study the emergence and survival of mutations, because mutations are rare events, so large experimental populations must be maintained to ensure a reasonable chance that a mutation will be observed. In his famous book, The Genetical Theory of Natural Selection, Sir R. A. Fisher introduced branching processes into evolutionary genetics as a framework for studying the emergence and survival of mutations in an evolving population. During Fisher's lifetime, computer technology had not advanced to a point at which it was an effective tool for simulating the emergence and survival of mutations, but given the wide availability of personal desktop and laptop computers, it is now possible and financially feasible for investigators to perform Monte Carlo simulation experiments. In this paper all computer simulation experiments were carried out within a framework of self-regulating multitype branching processes, which are part of a stochastic working paradigm. Emergence and survival of mutations could also be studied within a deterministic paradigm, which raises the issue of the sense in which predictions based on the stochastic and deterministic models are consistent. To come to grips with this issue, a technique was used whereby a deterministic model could be embedded in a branching process, so that the predictions of the stochastic and deterministic models could be compared based on the same assigned parameter values.
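
    The survival probability of a rare mutant, the quantity at the heart of Fisher's branching-process argument, has a closed fixed-point characterization in the single-type case. The sketch below assumes Poisson offspring with mean 1 + s (an assumption of mine, not the paper's multitype model) and recovers the classical result that survival is close to Haldane's approximation 2s for a small selective advantage s:

```python
import math

def mutant_survival_probability(mean_offspring, tol=1e-12):
    """1 - q, where q is the smallest root of q = exp(m*(q - 1)),
    the extinction probability of a Poisson(m) branching process."""
    q = 0.0
    while True:
        q_next = math.exp(mean_offspring * (q - 1.0))
        if abs(q_next - q) < tol:
            return 1.0 - q_next
        q = q_next

surv = mutant_survival_probability(1.1)  # ≈ 0.176, near Haldane's 2s = 0.2
```

    A deterministic model, by contrast, would predict certain growth whenever the mean offspring number exceeds 1, which is one sense in which the stochastic and deterministic predictions differ.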

  10. A statistical model for prediction of fuel element failure using the Markov process and entropy minimax principles

    International Nuclear Information System (INIS)

    Choi, K.Y.; Yoon, Y.K.; Chang, S.H.

    1991-01-01

    This paper reports on a new statistical fuel failure model developed to take into account the effects of damaging environmental conditions and the overall operating history of the fuel elements. The degradation of material properties and damage resistance of the fuel cladding is mainly caused by the combined effects of accumulated dynamic stresses, neutron irradiation, and chemical and stress corrosion at operating temperature. Since the degradation of material properties due to these effects can be considered a stochastic process, a dynamic reliability function is derived based on the Markov process. Four damage parameters, namely, dynamic stresses, magnitude of power increase from the preceding power level, ramp rate, and fatigue cycles, are used to build this model. The dynamic reliability function and damage parameters are used to obtain effective damage parameters. The entropy maximization principle is used to generate a probability density function of the effective damage parameters, and the entropy minimization principle is applied to determine weighting factors for amalgamation of the failure probabilities due to the respective failure modes. In this way, the effects of operating history, damaging environmental conditions, and damage sequence are taken into account.

  11. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of the involved parameters. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on a corresponding time-inhomogeneous Gauss-Markov diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the modeling process, taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two non-singular Volterra integral equations of the second kind via numerical quadrature. We provide numerical estimates of the mean FET as approximations of the dwell time of the protein dynamics. The percentage of backward steps is given in agreement with experimental data. Numerical and simulation results are compared and discussed.
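
    The two-boundary first exit time in this record can be approximated with exactly the kind of Euler discretization the authors mention. The sketch below is a toy stand-in, not the fitted acto-myosin model: it assumes a time-homogeneous Ornstein-Uhlenbeck process with invented parameters and a symmetric strip, and estimates the mean FET and the fraction of upper-boundary ("forward") exits by simulation:

```python
import random

def first_exit_times(n_paths=400, dt=1e-3, t_max=50.0,
                     theta=1.0, mu=0.0, sigma=1.0,
                     lower=-1.0, upper=1.0, x0=0.0, seed=3):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW,
    each path stopped at its first exit from the strip (lower, upper)."""
    rng = random.Random(seed)
    noise_scale = sigma * dt ** 0.5
    exit_times, upper_exits = [], 0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while t < t_max:
            x += theta * (mu - x) * dt + noise_scale * rng.gauss(0.0, 1.0)
            t += dt
            if x <= lower or x >= upper:
                exit_times.append(t)
                upper_exits += x >= upper
                break
    mean_fet = sum(exit_times) / len(exit_times)
    return mean_fet, upper_exits / len(exit_times)

mean_fet, frac_up = first_exit_times()
```

    With a symmetric strip and zero drift target, roughly half of the exits are through each boundary; a time-dependent external load would bias this fraction.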

  12. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    Science.gov (United States)

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  13. Regeneration and general Markov chains

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kalashnikov

    1994-01-01

    Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations, and bounds on the distribution function of the first visit time to a chosen subset. The underlying techniques use the embedding of the general Markov chain into a wide-sense regenerative process with the help of the splitting construction.

  14. Markov chains theory and applications

    CERN Document Server

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest. The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the
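
    As a concrete instance of the "numerical evaluation of many metrics of interest" mentioned in this record, the stationary distribution of a finite discrete-time chain can be computed by power iteration. The two-state transition matrix below is an invented toy example:

```python
def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Power iteration: repeatedly apply the row-stochastic transition
    matrix to a probability row vector until it stops changing."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)   # exact answer: (5/6, 1/6)
```

    The exact answer follows from balance: 0.1 π₀ = 0.5 π₁ with π₀ + π₁ = 1 gives π = (5/6, 1/6).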

  15. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  16. Partially Observable Markov Decision Process-Based Transmission Policy over Ka-Band Channels for Space Information Networks

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2017-09-01

    The Ka-band and higher Q/V-band channels can provide an appealing capacity for future deep-space communications and Space Information Networks (SIN), which are viewed as a primary solution to satisfy the increasing demands for high-data-rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distances and long propagation delays in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation in Ka-band channels is modeled as a two-state Gilbert–Elliott channel, to capture a channel capacity that ranges randomly between good and bad states. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDP) is proposed, and the key thresholds for selecting the optimal transmission method in SIN communications are derived. Simulation results show that the proposed scheme can effectively improve the throughput.
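
    The delayed-CSI aspect of this record reduces, for a two-state Gilbert–Elliott channel, to propagating a belief about the channel state through the Markov chain for as many slots as the feedback is stale; this belief is what a POMDP policy thresholds on. A minimal sketch (the transition probabilities and delay are invented for illustration):

```python
def propagate_belief(b_good, p_gb, p_bg, delay):
    """Given the probability that the channel was Good at the last feedback
    instant, propagate the belief `delay` slots through the two-state chain
    with P(Good->Bad) = p_gb and P(Bad->Good) = p_bg."""
    for _ in range(delay):
        b_good = b_good * (1.0 - p_gb) + (1.0 - b_good) * p_bg
    return b_good

# Channel: Good->Bad w.p. 0.1, Bad->Good w.p. 0.3; stationary P(Good) = 0.75.
belief = propagate_belief(1.0, 0.1, 0.3, delay=20)
```

    As the delay grows, the belief forgets the observation geometrically (at rate 1 − p_gb − p_bg) and converges to the stationary probability p_bg / (p_gb + p_bg), so very stale CSI carries almost no information.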

  17. A multi-level hierarchic Markov process with Bayesian updating for herd optimization and simulation in dairy cattle.

    Science.gov (United States)

    Demeter, R M; Kristensen, A R; Dijkstra, J; Oude Lansink, A G J M; Meuwissen, M P M; van Arendonk, J A M

    2011-12-01

    Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines economically optimal insemination and replacement decisions for individual cows and simulates whole-herd results that follow from optimal decisions. The optimization problem was formulated as a multi-level hierarchic Markov process, and a state space model with Bayesian updating was applied to model variation in milk yield. Methodological developments were incorporated in 2 main aspects. First, we introduced an additional level to the model hierarchy to obtain a more tractable and efficient structure. Second, we included a recently developed cattle feed intake model. In addition to methodological developments, new parameters were used in the state space model and other biological functions. Results were generated for Dutch farming conditions, and outcomes were in line with actual herd performance in the Netherlands. Optimal culling decisions were sensitive to variation in milk yield but insensitive to energy requirements for maintenance and feed intake capacity. We anticipate that the model will be applied in research and extension. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Distinguishing Hidden Markov Chains

    OpenAIRE

    Kiefer, Stefan; Sistla, A. Prasad

    2015-01-01

    Hidden Markov Chains (HMCs) are commonly used mathematical models of probabilistic systems. They are employed in various fields such as speech recognition, signal processing, and biological sequence analysis. We consider the problem of distinguishing two given HMCs based on an observation sequence that one of the HMCs generates. More precisely, given two HMCs and an observation sequence, a distinguishing algorithm is expected to identify the HMC that generates the observation sequence. Two HM...
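
    The distinguishing problem in this record is, in its simplest form, a likelihood comparison: compute the probability of the observation sequence under each HMC with the forward algorithm and pick the larger. The sketch below uses two invented two-state models over a binary observation alphabet:

```python
import math

def log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs) for an HMM with initial
    distribution pi, transition matrix A, and emission matrix B."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    ll = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
        s = sum(alpha)
        ll += math.log(s)
        alpha = [a / s for a in alpha]
    return ll

# Two HMCs sharing transitions: model A mostly emits 0, model B mostly 1.
pi0 = [0.5, 0.5]
A_trans = [[0.9, 0.1], [0.1, 0.9]]
emit_a = [[0.9, 0.1], [0.8, 0.2]]
emit_b = [[0.2, 0.8], [0.1, 0.9]]
obs = [0, 0, 1, 0, 0, 0, 1, 0]

ll_a = log_likelihood(pi0, A_trans, emit_a, obs)
ll_b = log_likelihood(pi0, A_trans, emit_b, obs)
guess = "A" if ll_a > ll_b else "B"
```

    Per-step rescaling of the forward vector keeps the computation numerically stable for long sequences while accumulating the log-likelihood exactly.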

  19. Pemodelan Markov Switching Autoregressive

    OpenAIRE

    Ariyani, Fiqria Devi; Warsito, Budi; Yasin, Hasbi

    2014-01-01

    Transition from depreciation to appreciation of an exchange rate is one example of regime switching that is ignored by classic time series models such as ARIMA, ARCH, or GARCH. Therefore, economic variables are modeled by Markov Switching Autoregressive (MSAR) models, which account for the regime switching. MLE is not directly applicable to parameter estimation because the regime is an unobservable variable, so filtering and smoothing are applied to obtain the regime probabilities of the observations. Using this model, tran...

  20. Relative entropy and waiting time for continuous-time Markov processes

    NARCIS (Netherlands)

    Chazottes, J.R.; Giardinà, C.; Redig, F.H.J.

    2006-01-01

    For discrete-time stochastic processes, there is a close connection between return (resp. waiting) times and entropy (resp. relative entropy). Such a connection cannot be straightforwardly extended to the continuous-time setting. Contrary to the discrete-time case, one needs a reference measure on

  1. Distribution of chirality in the quantum walk: Markov process and entanglement

    International Nuclear Information System (INIS)

    Romanelli, Alejandro

    2010-01-01

    The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
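
    The chirality distribution studied in this record can be observed in a small simulation. This sketch runs a discrete-time Hadamard walk on the line (the coin convention, initial state, and step count are my choices, not the paper's) and sums the probability carried by each coin component; running it for increasing `steps` shows the chirality probabilities settling toward a stationary limit:

```python
def hadamard_walk_chirality(steps=200):
    """Hadamard walk on the line, initial state |0>|L>.  Returns the total
    probability of each chirality (coin) component after `steps` steps."""
    h = 2 ** -0.5
    size = 2 * steps + 1          # positions -steps..steps, offset by steps
    aL = [0.0] * size             # left-chirality amplitudes by position
    aR = [0.0] * size             # right-chirality amplitudes
    aL[steps] = 1.0               # walker at the origin with chirality L
    for _ in range(steps):
        nL = [0.0] * size
        nR = [0.0] * size
        for x in range(1, size - 1):
            l, r = aL[x], aR[x]
            if l == 0.0 and r == 0.0:
                continue
            # Hadamard coin mixes chiralities; then L moves left, R right.
            nL[x - 1] += h * (l + r)
            nR[x + 1] += h * (l - r)
        aL, aR = nL, nR
    pL = sum(a * a for a in aL)
    pR = sum(a * a for a in aR)
    return pL, pR

pL, pR = hadamard_walk_chirality()
```

    Because the coin is unitary and the shift only relabels positions, total probability is conserved at every step, which the test below checks.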

  2. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    Science.gov (United States)

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of their many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
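
    The Markov Chain Monte Carlo machinery behind this kind of Bayesian parameter estimation can be illustrated far from ASM1. The sketch below is a deliberately tiny random-walk Metropolis sampler for a single parameter (the mean of a normal likelihood with known sigma and a flat prior); the data, step size, and chain length are all invented for illustration:

```python
import math
import random

def metropolis_mean(data, sigma=0.5, n_iter=6000, burn=1000, step=0.5, seed=11):
    """Random-walk Metropolis sampler for the mean of a normal likelihood
    with known sigma and a flat prior; returns post-burn-in samples."""
    rng = random.Random(seed)

    def log_lik(mu):
        return -sum((x - mu) ** 2 for x in data) / (2.0 * sigma ** 2)

    mu, ll = 0.0, None
    ll = log_lik(mu)
    samples = []
    for i in range(n_iter):
        prop = mu + rng.gauss(0.0, step)
        ll_prop = log_lik(prop)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            mu, ll = prop, ll_prop
        if i >= burn:
            samples.append(mu)
    return samples

data = [1.8, 2.1, 2.3, 1.9, 2.2]
post = metropolis_mean(data)
post_mean = sum(post) / len(post)
```

    The post-burn-in samples approximate the posterior; in the hierarchical ASM setting the same accept/reject kernel runs over a whole vector of stoichiometric and kinetic parameters, and the sample cloud exposes the posterior correlations the abstract mentions.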

  3. The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-07-01

    Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.

  4. The influence of Markov decision process structure on the possible strategic use of working memory and episodic memory.

    Science.gov (United States)

    Zilli, Eric A; Hasselmo, Michael E

    2008-07-23

    Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.

  5. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task

    Directory of Open Access Journals (Sweden)

    Ken Kinjo

    2013-04-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space, using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the nonlinear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.

  6. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    Science.gov (United States)

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman's equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
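
    The "linear Bellman equation" in these two records can be made concrete on a toy discrete problem. Writing z(i) = exp(-v(i)) for the desirability function, the first-exit LMDP equation is z = diag(exp(-q)) P z, where q is the state cost and P the passive dynamics; it is solved by plain fixed-point iteration. The five-state symmetric random walk below, with both ends absorbing at zero terminal cost, is an invented example rather than any task from the papers:

```python
import math

def lmdp_values(n_states=5, state_cost=0.1, tol=1e-12):
    """First-exit LMDP on a line graph: states 0..n-1, both ends absorbing
    with zero terminal cost (z = 1 there), symmetric random-walk passive
    dynamics.  Iterates z = diag(exp(-q)) P z; returns v = -log z."""
    z = [1.0] * n_states
    while True:
        new = z[:]
        for i in range(1, n_states - 1):
            new[i] = math.exp(-state_cost) * 0.5 * (z[i - 1] + z[i + 1])
        if max(abs(a - b) for a, b in zip(z, new)) < tol:
            z = new
            break
        z = new
    return [-math.log(zi) for zi in z]

v = lmdp_values()   # cost-to-go: zero at the absorbing ends, positive inside
```

    The optimal controlled transition probabilities then follow by tilting the passive dynamics toward desirable states, p*(j|i) ∝ P(j|i) z(j), which is what makes the framework attractive when P has to be learned from data.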

  7. Approximate quantum Markov chains

    CERN Document Server

    Sutter, David

    2018-01-01

    This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in understanding the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result, we start by explaining two techniques for dealing with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques, a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...

  8. Study on color identification for monitoring and controlling fermentation process of branched chain amino acid

    Science.gov (United States)

    Ma, Lei; Wang, Yizhong; Chen, Ning; Liu, Tiegen; Xu, Qingyang; Kong, Fanzhi

    2008-12-01

    In this paper, a new method for monitoring and controlling the fermentation process of branched-chain amino acids (BCAA) was proposed based on color identification. A color image of the BCAA fermentation broth was first taken by a CCD camera and converted from the RGB color model to the HSI color model. Histograms of hue H and saturation S were calculated and used as the input of a designed BP network, whose output was a description of the color of the fermentation broth. After training, the color of the fermentation broth was identified by the BP network from the H and S histograms of a broth image. Together with other parameters, the BCAA fermentation process was monitored and controlled so that the stationary phase of fermentation was reached promptly. Experiments were conducted with satisfactory results, showing the feasibility and usefulness of color identification of fermentation broth in BCAA fermentation process control.

  9. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at 2 and 6 months of age using hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  10. Neutron capture at the s-process branching points $^{171}$Tm and $^{204}$Tl

    CERN Multimedia

    Branching points in the s-process are very special isotopes for which there is a competition between neutron capture and β-decay in the chain producing the heavy elements beyond Fe. Typically, knowledge of the associated capture cross sections is very poor due to the difficulty of obtaining enough material of these radioactive isotopes and of measuring the cross section of a sample with an intrinsic activity; indeed, only 2 out of the 21 ${s}$-process branching points have ever been measured using the time-of-flight method. In this experiment we aim at measuring for the first time the capture cross sections of $^{171}$Tm and $^{204}$Tl, both of crucial importance for understanding the nucleosynthesis of heavy elements in AGB stars. The combination of both (n,$\gamma$) measurements on $^{171}$Tm and $^{204}$Tl will allow one to accurately constrain the neutron density and the strength of the $^{13}$C(α,n) source in low-mass AGB stars. Additionally, the cross section of $^{204}$Tl is also of cosmo-chrono...

  11. Oscillatory Critical Amplitudes in Hierarchical Models and the Harris Function of Branching Processes

    Science.gov (United States)

    Costin, Ovidiu; Giacomin, Giambattista

    2013-02-01

    Oscillatory critical amplitudes have been repeatedly observed in hierarchical models and, in the cases that have been taken into consideration, these oscillations are so small as to be hardly detectable. Hierarchical models are tightly related to iteration of maps and, in fact, very similar phenomena have been repeatedly reported in many fields of mathematics, like combinatorial evaluations and discrete branching processes. It is precisely in the context of branching processes with bounded offspring that T. Harris, in 1948, first set forth the possibility that the logarithm of the moment generating function of the rescaled population size, in the super-critical regime, does not grow near infinity as a power, but has an oscillatory prefactor (the Harris function). These oscillations were observed numerically only much later and, while their origin is clearly tied to the discrete character of the iteration, the amplitude size is not so well understood. The purpose of this note is to reconsider the issue for hierarchical models and in what is arguably the most elementary setting—the pinning model—which actually just boils down to iteration of polynomial maps (and, notably, quadratic maps). In this note we show that the oscillatory critical amplitude for pinning models and the Harris function coincide. Moreover we make explicit the link between these oscillatory functions and the geometry of the Julia set of the map, making rigorous and quantitative some ideas set forth in Derrida et al. (Commun. Math. Phys. 94:115-132, 1984).

  12. The neutron capture cross section of the ${s}$-process branch point isotope $^{63}$Ni

    CERN Multimedia

    Neutron capture nucleosynthesis in massive stars plays an important role in Galactic chemical evolution as well as in the analysis of abundance patterns in very old metal-poor halo stars. The so-called weak ${s}$-process component, which is responsible for most of the ${s}$ abundances between Fe and Sr, turned out to be very sensitive to the stellar neutron capture cross sections in this mass region and, in particular, to those of isotopes near the seed distribution around Fe. In this context, the unstable isotope $^{63}$Ni is of particular interest because it represents the first branching point in the reaction path of the ${s}$-process. We propose to measure this cross section at n_TOF from thermal energies up to 500 keV, covering the entire range of astrophysical interest. These data are needed to replace uncertain theoretical predictions with first experimental information, in order to understand the consequences of the $^{63}$Ni branching for the abundance pattern of the subsequent isotopes, especially for $^{63}$Cu and $^{...

  13. Spectral methods for quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Szehr, Oleg

    2014-05-08

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.
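
    The spectral idea above, relating the eigenvalue spectrum of the transition map to convergence, has a concrete classical counterpart. A minimal sketch (classical chains only, with a made-up 2x2 matrix; the thesis itself works with quantum channels and density matrices): the second-largest eigenvalue modulus (SLEM) of a transition matrix governs how fast the chain converges to its stationary distribution.

```python
import numpy as np

# Made-up row-stochastic transition matrix for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Largest eigenvalue of a stochastic matrix is 1; the SLEM comes next.
slem = sorted(abs(np.linalg.eigvals(P)))[-2]

# Stationary distribution: left Perron eigenvector of P.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Total-variation distance to stationarity decays like slem**t.
mu = np.array([1.0, 0.0])        # start deterministically in state 0
for t in (1, 5, 20):
    dist = 0.5 * np.abs(mu @ np.linalg.matrix_power(P, t) - pi).sum()
    print(t, round(dist, 6), round(slem ** t, 6))
```

For this matrix the SLEM is 0.7, so the distance to stationarity shrinks by roughly that factor per step.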

  14. Spectral methods for quantum Markov chains

    International Nuclear Information System (INIS)

    Szehr, Oleg

    2014-01-01

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  15. Multi-rate Poisson tree processes for single-locus species delimitation under maximum likelihood and Markov chain Monte Carlo.

    Science.gov (United States)

    Kapli, P; Lutteropp, S; Zhang, J; Kobert, K; Pavlidis, P; Stamatakis, A; Flouri, T

    2017-06-01

    In recent years, molecular species delimitation has become a routine approach for quantifying and classifying biodiversity. Barcoding methods are of particular importance in large-scale surveys as they promote fast species discovery and biodiversity estimates. Among those, distance-based methods are the most common choice as they scale well with large datasets; however, they are sensitive to similarity threshold parameters and they ignore evolutionary relationships. The recently introduced "Poisson Tree Processes" (PTP) method is a phylogeny-aware approach that does not rely on such thresholds. Yet, two weaknesses of PTP impact its accuracy and practicality when applied to large datasets: it does not account for divergent intraspecific variation, and it is slow for a large number of sequences. We introduce the multi-rate PTP (mPTP), an improved method that alleviates the theoretical and technical shortcomings of PTP. It incorporates different levels of intraspecific genetic diversity deriving from differences in either the evolutionary history or the sampling of each species. Results on empirical data suggest that mPTP is superior to PTP and popular distance-based methods, as it consistently yields more accurate delimitations with respect to the taxonomy (i.e., identifies more taxonomic species, infers species numbers closer to the taxonomy). Moreover, mPTP does not require any similarity threshold as input. The novel dynamic programming algorithm attains a speedup of at least five orders of magnitude compared to PTP, allowing it to delimit species in large (meta-)barcoding data. In addition, Markov chain Monte Carlo sampling provides a comprehensive evaluation of the inferred delimitation in just a few seconds for millions of steps, independently of tree size. mPTP is implemented in C and is available for download at http://github.com/Pas-Kapli/mptp under the GNU Affero 3 license. A web-service is available at http://mptp.h-its.org.

  16. Pulse-train control of branching processes: Elimination of background and intruder state population

    International Nuclear Information System (INIS)

    Seidl, Markus; Uiberacker, Christoph; Jakubetz, Werner; Etinski, Mihajlo

    2008-01-01

    The authors introduce and describe pulse train control (PTC) of population branching in strongly coupled processes as a novel control tool for the separation of competing multiphoton processes. Control strategies are presented based on the different responses of processes with different photonicities and/or different frequency detunings to the pulse-to-pulse time delay and the pulse-to-pulse phase shift in pulse trains. The control efficiency is further enhanced by the property of pulse trains that complete population transfer can be obtained over an extended frequency range that replaces the resonance frequency of simple pulses. The possibility to freely tune the frequency assists the separation of the competing processes and reduces the number of subpulses required for full control. As a sample application, PTC of leaking multiphoton resonances is demonstrated by numerical simulations. In model systems exhibiting sizable background (intruder) state population if excited with single pulses, PTC leading to complete accumulation of population in the target state and elimination of background population is readily achieved. The analysis of the results reveals different mechanisms of control and provides clues on the mechanisms of the leaking process itself. In an alternative setup, pulse trains can be used as a phase-sensitive tool for level switching. By changing only the pulse-to-pulse phase shift of a train with otherwise unchanged parameters, population can be transferred to any of two different target states in a near-quantitative manner.

  17. The (n, $\\alpha$) reaction in the s-process branching point $^{59}$Ni

    CERN Multimedia

    We propose to measure the $^{59}$Ni(n,$\\alpha$)$^{56}$Fe cross section at the neutron time-of-flight (n_TOF) facility with a dedicated chemical vapor deposition (CVD) diamond detector. The (n,$\\alpha$) reaction in radioactive $^{59}$Ni is of relevance in nuclear astrophysics, as it can be seen as a first branching point in the astrophysical s-process. Its relevance in nuclear technology is especially related to material embrittlement in stainless steel. There is a strong discrepancy between the available experimental data and the evaluated nuclear data files for this isotope. The aim of the measurement is to clarify this disagreement. The clear energy separation of the reaction products of neutron-induced reactions in $^{59}$Ni makes it a very suitable candidate for a first cross section measurement with the CVD diamond detector, which should serve in the future for similar measurements at n_TOF.

  18. On using continuous Markov processes for unit service life evaluation, taking as an example the RBMK-1000 gate-regulating valve

    International Nuclear Information System (INIS)

    Klemin, A.I.; Emel'yanov, V.S.; Rabchun, A.V.

    1984-01-01

    A technique is suggested for estimating service life indices of equipment, based on describing the equipment ageing process by a continuous Markov diffusion process. It is noted that a number of problems of estimating product durability indices reduce to estimating characteristics of the time of first attainment of a preset boundary (or boundaries) by a random process describing the ageing of the product. Methods for statistical estimation of the drift and diffusion coefficients of a continuous Markov diffusion process are considered, and formulae for their point and interval estimates are presented. The case of a stationary process is described separately, including the determination of the mathematical expectation and dispersion of the time of first attainment of a boundary (or boundaries). A method for numerical simulation of a diffusion process with constant drift and diffusion coefficients is also described, and results obtained on the basis of such a simulation are discussed. An example of applying the suggested technique to a quantitative service-life estimate for the RBMK-1000 gate-regulating valve is given.

  19. Markov stochasticity coordinates

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  20. Decisive Markov Chains

    OpenAIRE

    Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard

    2007-01-01

    We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...

  1. Markov stochasticity coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2017-01-15

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  2. Living in the branches: population dynamics and ecological processes in dendritic networks

    Science.gov (United States)

    Grant, E.H.C.; Lowe, W.H.; Fagan, W.F.

    2007-01-01

    Spatial structure regulates and modifies processes at several levels of ecological organization (e.g. individual/genetic, population and community) and is thus a key component of complex systems, where knowledge at a small scale can be insufficient for understanding system behaviour at a larger scale. Recent syntheses outline potential applications of network theory to ecological systems, but do not address the implications of physical structure for network dynamics. There is a specific need to examine how dendritic habitat structure, such as that found in stream, hedgerow and cave networks, influences ecological processes. Although dendritic networks are one type of ecological network, they are distinguished by two fundamental characteristics: (1) both the branches and the nodes serve as habitat, and (2) the specific spatial arrangement and hierarchical organization of these elements interacts with a species' movement behaviour to alter patterns of population distribution and abundance, and community interactions. Here, we summarize existing theory relating to ecological dynamics in dendritic networks, review empirical studies examining the population- and community-level consequences of these networks, and suggest future research integrating spatial pattern and processes in dendritic systems.

  3. Determination of the necessary real capacity of a productive process using a Markov chain

    Directory of Open Access Journals (Sweden)

    Francielly Hedler Staudt

    2011-01-01

    Full Text Available All developing companies must decide once in a while whether new investments are required to handle a growing demand. In order to make this decision, it is essential to know whether the current productive capacity is able to supply the new demand. However, only few companies realize that scrap and rework also consume production resources and must therefore be taken into account in the productive capacity calculation. The aim of this work was to include these factors in the factory capacity analysis, using the stochastic transition matrix of an absorbing Markov chain as a tool to obtain the capacity factor. This factor, used together with the efficiency index and the demand required at the end of the process, results in the necessary real capacity. A case study exemplifies the proposed methodology, presenting results that allow for the
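
    The absorbing-chain computation behind such a capacity factor can be sketched as follows. All transition probabilities below are hypothetical, chosen only to show how rework inflates the real load on each work centre; the fundamental matrix N = (I - Q)^(-1) gives the expected number of visits to each centre per unit launched.

```python
import numpy as np

# Hypothetical two-centre line: from centre A a unit moves on to centre B
# with prob. 0.85, is reworked at A with prob. 0.10, and is scrapped with
# prob. 0.05; from B it exits as a good part with prob. 0.90, returns to A
# for rework with prob. 0.05, and is scrapped with prob. 0.05.
# Q holds only the transient (centre-to-centre) probabilities.
Q = np.array([[0.10, 0.85],
              [0.05, 0.00]])

# Fundamental matrix: N[i, j] = expected visits to centre j per unit
# launched at centre i.
N = np.linalg.inv(np.eye(2) - Q)
load = N[0]          # expected visits to A and B per unit launched at A
print(load)
```

A load factor above 1 for centre A means rework makes each launched unit occupy that centre more than once, which is exactly the effect the capacity factor is meant to capture.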

  4. Improved zeolite regeneration processes for preparing saturated branched-chain fatty acids

    Science.gov (United States)

    Ferrierite zeolite solid is an excellent catalyst for the skeletal isomerization of unsaturated linear-chain fatty acids (i.e., oleic acid) to unsaturated branched-chain fatty acids (i.e., iso-oleic acid) follow by hydrogenation to give saturated branched-chain fatty acids (i.e., isostearic acid). ...

  5. Quantum Markov Chain Mixing and Dissipative Engineering

    DEFF Research Database (Denmark)

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state of the system at the present point in time, but not on the history of events. Very many important processes in nature are of this type, therefore a good understanding of their behaviour has turned out to be very fruitful for science. Markov chains always have a non-empty set of limiting distributions (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical...

  6. Speeding up Online POMDP planning - unification of observation branches by belief-state compression via expected feature values

    CSIR Research Space (South Africa)

    Rens, G

    2015-01-01

    Full Text Available A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch...

  7. A Markov chain approach to modelling charge exchange processes of an ion beam in monotonically increasing or decreasing potentials

    International Nuclear Information System (INIS)

    Shrier, O; Khachan, J; Bosi, S

    2006-01-01

    A Markov chain method is presented as an alternative approach to Monte Carlo simulations of charge exchange collisions by an energetic hydrogen ion beam with a cold background hydrogen gas. This method was used to determine the average energy of the resulting energetic neutrals along the path of the beam. A comparison with Monte Carlo modelling showed a good agreement but with the advantage that it required much less computing time and produced no numerical noise. In particular, the Markov chain method works well for monotonically increasing or decreasing electrostatic potentials. Finally, a good agreement is obtained with experimental results from Doppler shift spectroscopy on energetic beams from a hollow cathode discharge. In particular, the average energy of ions that undergo charge exchange reaches a plateau that can be well below the full energy that might be expected from the applied voltage bias, depending on the background gas pressure. For example, pressures of ∼20 mTorr limit the ion energy to ∼20% of the applied voltage
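
    A toy version of such a spatial Markov-chain model can be sketched as follows. All numbers (step count, per-step energy gain, exchange probability) are illustrative, not taken from the paper: the beam path is split into segments, and in each segment the ion gains energy from the field and charge-exchanges with some probability, freezing its current energy into a fast neutral.

```python
import numpy as np

# Illustrative parameters: 100 spatial steps toward the full applied-voltage
# energy, exchange probability p per step.
steps, dE, p = 100, 1.0, 0.05
k = np.arange(steps)
exchange_prob = (1 - p) ** k * p      # ion exchanges exactly in segment k
neutral_energy = dE * (k + 1)         # energy frozen into the neutral
mean_neutral = (exchange_prob * neutral_energy).sum() / exchange_prob.sum()
print(mean_neutral / (steps * dE))    # fraction of the full beam energy
```

With these made-up numbers the average neutral energy plateaus at roughly a fifth of the full energy, qualitatively like the pressure-dependent plateau described in the abstract.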

  8. Partial branch and bound algorithm for improved data association in multiframe processing

    Science.gov (United States)

    Poore, Aubrey B.; Yan, Xin

    1999-07-01

    A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have shown themselves to yield near-optimal answers in real time. The necessary improvement in the quality of these solutions warrants a continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound are needed to improve solution quality. Methods such as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique along with adequate branching and ordering rules is developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The results show that the branch-and-bound framework greatly improves the resolution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.

  9. Confluence reduction for Markov automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  10. Fields From Markov Chains

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.
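
    For a single stationary Markov chain, the kind of explicit entropy computation alluded to here is standard: the entropy rate is H = -sum_i pi_i sum_j P[i,j] log2 P[i,j]. A minimal sketch with a deliberately simple chain (a fair-bit chain, so the answer is obvious; the paper's 2-D construction builds on such per-chain entropies):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])     # fair-bit chain: every transition is a coin flip

# Stationary distribution = left Perron eigenvector of P.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Entropy rate, skipping zero-probability transitions.
H = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
         for i in range(2) for j in range(2) if P[i, j] > 0)
print(H)                       # ~ 1.0 bit per symbol
```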

  11. Semi-Markov Arnason-Schwarz models.

    Science.gov (United States)

    King, Ruth; Langrock, Roland

    2016-06-01

    We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
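
    The dwell-time contrast at the heart of this record can be made concrete with a short sketch. The parameters below are illustrative only, not fitted values from the paper: a (time-independent) first-order Markov chain forces a geometric dwell time, whose mode is always 1, while a semi-Markov model with a shifted-Poisson dwell time can place the mode at a more plausible duration near 1 + lambda.

```python
import math

def geometric_pmf(k, p):                 # dwell time k = 1, 2, ...
    return (1 - p) ** (k - 1) * p

def shifted_poisson_pmf(k, lam):         # k = 1 + Poisson(lam)
    return math.exp(-lam) * lam ** (k - 1) / math.factorial(k - 1)

p, lam = 0.3, 4.3                        # illustrative parameters
geo = [geometric_pmf(k, p) for k in range(1, 11)]
sp = [shifted_poisson_pmf(k, lam) for k in range(1, 11)]

print(1 + geo.index(max(geo)))           # geometric mode: always 1
print(1 + sp.index(max(sp)))             # shifted-Poisson mode: 5 here
```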

  12. Improving the Approaches to Organization of Strategic Process at Enterprises of the Bakery Industry Branch of Ukraine

    Directory of Open Access Journals (Sweden)

    Zavertany Denys V.

    2018-03-01

    Full Text Available Today, both production and non-production organizations cannot function efficiently without defining the mission and values that explain why they are in business, what products they produce, and what consumer market they target. It is therefore relevant to research the definition of the strategic process for the enterprises of the bakery industry branch of Ukraine, in order to ensure their sustainable development in the context of globalization and the development of a market economy. The purpose of this research is to improve the approaches to the organization of the strategic process at the enterprises of the bakery industry branch of Ukraine. The study was based on the following approaches and methods: the dialectical and systemic approaches, the causal method, theoretical generalization, and comparison. The types of strategic process that can be applied by the enterprises of the bakery industry branch of Ukraine have been defined. A stable relationship between the external and the internal environments of the strategic process has been determined, with the functioning elements allocated for each of them. It is proved that close interaction of the external and internal environments plays a key role in the formation of the organizational design of a bakery enterprise.

  13. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion on both structured and unstructured networks.

  14. The algebra of the general Markov model on phylogenetic trees and networks.

    Science.gov (United States)

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  15. Observation uncertainty in reversible Markov chains.

    Science.gov (United States)

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
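
    The reversibility constraint the sampler enforces is the detailed-balance condition pi_i P[i,j] = pi_j P[j,i]. A minimal sketch of checking it (with two made-up 3-state chains; the paper's contribution is sampling such matrices, not merely testing them):

```python
import numpy as np

def is_reversible(P, tol=1e-10):
    """Return (reversible?, pi) for a row-stochastic matrix P."""
    w, v = np.linalg.eig(P.T)                 # left Perron eigenvector
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    flux = pi[:, None] * P                    # flux[i, j] = pi_i * P[i, j]
    return np.allclose(flux, flux.T, atol=tol), pi

# Birth-death chains (nearest-neighbour moves) are always reversible:
P_bd = np.array([[0.5, 0.5, 0.0],
                 [0.3, 0.4, 0.3],
                 [0.0, 0.5, 0.5]])
# A chain with a preferred directed cycle is not:
P_cyc = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])

print(is_reversible(P_bd)[0], is_reversible(P_cyc)[0])
```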

  16. Markov chains and mixing times

    CERN Document Server

    Levin, David A

    2017-01-01

    Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...

  17. Process for the selective cracking of straight-chained and slightly branched hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Gorring, R L; Shipman, G F

    1975-01-23

    The invention describes a method for the selective (hydro)cracking of petroleum materials containing normal straight-chain and/or slightly branched-chain hydrocarbons. The mixture is brought into contact, under cracking conditions, with a selective crystalline aluminosilicate zeolite cracking catalyst having a silicon oxide/aluminum oxide ratio of at least about 12 and a constraint index of about 1 to 12. A zeolite catalyst with a crystal size of up to 0.05 μm is used. The catalytic dewaxing is intended in particular to lower the solidification point and viscosity of oils.

  18. Quantitative Analysis of Axonal Branch Dynamics in the Developing Nervous System.

    Directory of Open Access Journals (Sweden)

    Kelsey Chalmers

    2016-03-01

    Full Text Available Branching is an important mechanism by which axons navigate to their targets during neural development. For instance, in the developing zebrafish retinotectal system, selective branching plays a critical role during both initial pathfinding and subsequent arborisation once the target zone has been reached. Here we show how quantitative methods can help extract new information from time-lapse imaging about the nature of the underlying branch dynamics. First, we introduce Dynamic Time Warping to this domain as a method for automatically matching branches between frames, replacing the effort required for manual matching. Second, we model branch dynamics as a birth-death process, i.e. a special case of a continuous-time Markov process. This reveals that the birth rate for branches from zebrafish retinotectal axons, as they navigate across the tectum, increased over time. We observed no significant change in the death rate for branches over this time period. However, blocking neuronal activity with TTX slightly increased the death rate, without a detectable change in the birth rate. Third, we show how the extraction of these rates allows computational simulations of branch dynamics whose statistics closely match the data. Together these results reveal new aspects of the biology of retinotectal pathfinding, and introduce computational techniques which are applicable to the study of axon branching more generally.
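
    The birth-death model described above is simple enough to simulate directly. A minimal sketch using the standard Gillespie (exact stochastic simulation) scheme; the rates below are illustrative, not the values fitted from the zebrafish data:

```python
import random

def simulate(n0, birth, death, t_end, rng):
    """Continuous-time birth-death process: each branch spawns at rate
    `birth` and is retracted at rate `death`, independently."""
    t, n = 0.0, n0
    history = [(0.0, n0)]
    while t < t_end and n > 0:
        total = n * (birth + death)          # total event rate
        t += rng.expovariate(total)          # waiting time to next event
        if t >= t_end:
            break
        # Next event is a birth with probability birth / (birth + death).
        n += 1 if rng.random() < birth / (birth + death) else -1
        history.append((t, n))
    return history

rng = random.Random(1)
hist = simulate(n0=10, birth=0.9, death=1.0, t_end=5.0, rng=rng)
print(hist[-1])    # (time, branch count) at the last event before t_end
```

Because the rates are per-branch, fitting `birth` and `death` from matched branches across frames (as the paper does) immediately yields simulations whose statistics can be compared with the imaging data.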

  19. Fluctuation limit theorems for age-dependent critical binary branching systems

    Directory of Open Access Journals (Sweden)

    Murillo-Salas Antonio

    2011-03-01

    We consider an age-dependent branching particle system in ℝd, where the particles are subject to α-stable migration (0 < α ≤ 2), critical binary branching, and general (non-arithmetic) lifetime distributions. The population starts off from a Poisson random field in ℝd with Lebesgue intensity. We prove functional central limit theorems and strong laws of large numbers under two rescalings: high particle density, and a space-time rescaling that preserves the migration distribution. Properties of the limit processes, such as the Markov property, almost sure continuity of paths and a generalized Langevin equation, are also investigated.

  20. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process

    Science.gov (United States)

    Wang, Yi; Tamai, Tetsuo

    2009-01-01

    As the complexity of software systems continues to grow, engineers face two serious problems: state space explosion and system debugging. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game-style model checking that generates counterexamples can guide refinement or identify validated formulas, which addresses the system debugging problem. Furthermore, the output of our game-style method gives engineers significant information for detecting where errors have occurred and what their causes are.

  1. Assessing local population vulnerability to wind energy development with branching process models: an application to wind energy development

    Science.gov (United States)

    Erickson, Richard A.; Eager, Eric A.; Stanton, Jessica C.; Beston, Julie A.; Diffendorfer, James E.; Thogmartin, Wayne E.

    2015-01-01

    Quantifying the impact of anthropogenic development on local populations is important for conservation biology and wildlife management. However, these local populations are often subject to demographic stochasticity because of their small population size. Traditional modeling efforts such as population projection matrices do not consider this source of variation, whereas individual-based models, which include demographic stochasticity, are computationally intense and lack analytical tractability. One compromise between approaches is branching process models, because they accommodate demographic stochasticity and are easily calculated. These models are known within some sub-fields of probability and mathematical ecology but are not often applied in conservation biology and applied ecology. We applied branching process models to quantitatively compare and prioritize species locally vulnerable to the development of wind energy facilities. Specifically, we examined species vulnerability using branching process models for four representative species: a cave bat (a long-lived, low-fecundity species), a tree bat (a short-lived, moderate-fecundity species), a grassland songbird (a short-lived, high-fecundity species), and an eagle (a long-lived, slow-maturation species). Wind turbine-induced mortality has been observed for all of these species types, raising conservation concerns. We simulated different mortality rates from wind farms while calculating local extinction probabilities. The longer-lived species types (e.g., cave bats and eagles) had much more pronounced transitions from low extinction risk to high extinction risk than short-lived species types (e.g., tree bats and grassland songbirds). High-offspring-producing species types had much greater variability in baseline risk of extinction than the lower-offspring-producing species types. Long-lived species types may appear stable until a critical level of incidental mortality occurs; after this threshold, the risk of local extinction rises sharply.
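    The local extinction probabilities that such branching process models yield can be computed as the smallest fixed point of the offspring probability generating function. The sketch below uses hypothetical offspring distributions, not the demographic parameters of the species discussed above.

```python
def extinction_prob(offspring_pmf, tol=1e-12, max_iter=100000):
    """Extinction probability of a Galton-Watson process: the smallest fixed
    point q = f(q) of the offspring pgf f, found by iterating from q = 0."""
    def pgf(s):
        return sum(p * s ** k for k, p in offspring_pmf.items())
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            break
        q = q_next
    return q

# Hypothetical offspring laws (not fitted demographic data):
q_sub = extinction_prob({0: 0.40, 1: 0.35, 2: 0.20, 3: 0.05})  # mean 0.9, subcritical
q_sup = extinction_prob({0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25})  # mean 1.5, supercritical
```

    For the subcritical law extinction is certain (q = 1); for the supercritical one the iteration converges to the smaller root q = √2 − 1 ≈ 0.414.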

  2. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated. To answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts that represents the process law. From the strategy, we derive a measure that establishes a metric on the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modelling internet navigation patterns.
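    A minimal illustration of the idea, under simplifying assumptions: estimate the empirical transition matrix from one realization and greedily group states whose estimated rows nearly coincide. This is a stand-in for the paper's consistent selection procedure, not its actual estimator.

```python
import random
from collections import defaultdict

def empirical_transitions(seq, alphabet):
    """Maximum-likelihood transition rows estimated from one realization."""
    counts = {a: defaultdict(int) for a in alphabet}
    for x, y in zip(seq, seq[1:]):
        counts[x][y] += 1
    totals = {a: sum(counts[a].values()) for a in alphabet}
    return {a: tuple(counts[a][b] / totals[a] if totals[a] else 0.0
                     for b in alphabet) for a in alphabet}

def partition_states(rows, tol=0.05):
    """Greedily merge states whose estimated rows differ by < tol in sup norm."""
    parts = []
    for a, row in rows.items():
        for part in parts:
            ref = rows[part[0]]
            if max(abs(u - v) for u, v in zip(row, ref)) < tol:
                part.append(a)
                break
        else:
            parts.append([a])
    return parts

# States 'a' and 'b' share a transition law, so they should land in one part.
P = {'a': {'a': 0.2, 'b': 0.3, 'c': 0.5},
     'b': {'a': 0.2, 'b': 0.3, 'c': 0.5},
     'c': {'a': 0.6, 'b': 0.2, 'c': 0.2}}
rng = random.Random(1)
seq = ['a']
for _ in range(20000):
    probs = P[seq[-1]]
    seq.append(rng.choices(list(probs), weights=list(probs.values()))[0])
parts = partition_states(empirical_transitions(seq, ['a', 'b', 'c']))
```

    With 20,000 observations the estimated rows for 'a' and 'b' agree to within sampling error, so the greedy grouping recovers the two-part partition {a, b}, {c}.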

  3. Absorbing Markov Chain Models to Determine Optimum Process Target Levels in Production Systems with Dual Correlated Quality Characteristics

    Directory of Open Access Journals (Sweden)

    Mohammad Saber Fallah Nezhad

    2012-03-01

    For a manufacturing organization to compete effectively in the global marketplace, cutting costs and improving overall efficiency are essential. This paper presents a single-stage production system with two independent quality characteristics and different costs associated with each quality characteristic falling below a lower specification limit (scrap) or above an upper specification limit (rework). The amounts of rework and scrap are assumed to depend on process parameters such as the process mean and standard deviation; thus the expected total profit depends significantly on the process parameters. This paper develops a Markovian decision-making model for determining the process means. A sensitivity analysis is performed for validation, and a numerical example is given to illustrate the proposed model. The results show that the optimal process means are strongly affected by the quality characteristics' parameters.
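    The absorbing-chain machinery behind such models reduces to the fundamental matrix N = (I − Q)⁻¹ and the absorption probabilities B = NR. Below is a small sketch with hypothetical scrap/rework/accept probabilities, not values from the paper.

```python
def absorption_probabilities(Q, R):
    """B = (I - Q)^(-1) R for a chain with two transient and two absorbing states."""
    (a, b), (c, d) = Q
    m11, m12, m21, m22 = 1 - a, -b, -c, 1 - d
    det = m11 * m22 - m12 * m21
    N = [[m22 / det, -m12 / det],      # fundamental matrix N = (I - Q)^(-1)
         [-m21 / det, m11 / det]]
    return [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical single-item flow: from stage 1 an item is reworked (stays), moves
# to stage 2, or is scrapped; from stage 2 it is reworked, scrapped or accepted.
Q = [[0.1, 0.7],      # transient -> transient
     [0.0, 0.2]]
R = [[0.2, 0.0],      # transient -> (scrap, accept)
     [0.1, 0.7]]
B = absorption_probabilities(Q, R)   # B[i] = (P(scrap), P(accept)) from stage i+1
```

    Each row of B sums to one: every item is eventually absorbed as either scrap or an accepted unit, which is what links the process means to the expected cost.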

  4. Markov Random Fields on Triangle Meshes

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas

    2010-01-01

    In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to an MRF smoothness prior, while an independent edge process label...

  5. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  6. Portfolio allocation under the vendor managed inventory: A Markov ...

    African Journals Online (AJOL)

    Portfolio allocation under the vendor managed inventory: A Markov decision process. ... Journal of Applied Sciences and Environmental Management ... This study provides a review of Markov decision processes and investigates its suitability for solutions to portfolio allocation problems under vendor managed inventory in ...

  7. Logics and Models for Stochastic Analysis Beyond Markov Chains

    DEFF Research Database (Denmark)

    Zeng, Kebin

    , because of the generality of ME distributions, we have to leave the world of Markov chains. To support ME distributions with multiple exits, we introduce a multi-exits ME distribution together with a process algebra MEME to express the systems having the semantics as Markov renewal processes with ME...

  8. Path integral formulation and Feynman rules for phylogenetic branching models

    Energy Technology Data Exchange (ETDEWEB)

    Jarvis, P D; Bashford, J D; Sumner, J G [School of Mathematics and Physics, University of Tasmania, GPO Box 252C, 7001 Hobart, TAS (Australia)

    2005-11-04

    A dynamical picture of phylogenetic evolution is given in terms of Markov models on a state space, comprising joint probability distributions for character types of taxonomic classes. Phylogenetic branching is a process which augments the number of taxa under consideration, and hence the rank of the underlying joint probability state tensor. We point out the combinatorial necessity for a second-quantized, or Fock space setting, incorporating discrete counting labels for taxa and character types, to allow for a description in the number basis. Rate operators describing both time evolution without branching, and also phylogenetic branching events, are identified. A detailed development of these ideas is given, using standard transcriptions from the microscopic formulation of non-equilibrium reaction-diffusion or birth-death processes. These give the relations between stochastic rate matrices, the matrix elements of the corresponding evolution operators representing them, and the integral kernels needed to implement these as path integrals. The 'free' theory (without branching) is solved, and the correct trilinear 'interaction' terms (representing branching events) are presented. The full model is developed in perturbation theory via the derivation of explicit Feynman rules which establish that the probabilities (pattern frequencies of leaf colourations) arising as matrix elements of the time evolution operator are identical with those computed via the standard analysis. Simple examples (phylogenetic trees with two or three leaves), are discussed in detail. Further implications for the work are briefly considered including the role of time reparametrization covariance.

  9. Path integral formulation and Feynman rules for phylogenetic branching models

    International Nuclear Information System (INIS)

    Jarvis, P D; Bashford, J D; Sumner, J G

    2005-01-01

    A dynamical picture of phylogenetic evolution is given in terms of Markov models on a state space, comprising joint probability distributions for character types of taxonomic classes. Phylogenetic branching is a process which augments the number of taxa under consideration, and hence the rank of the underlying joint probability state tensor. We point out the combinatorial necessity for a second-quantized, or Fock space setting, incorporating discrete counting labels for taxa and character types, to allow for a description in the number basis. Rate operators describing both time evolution without branching, and also phylogenetic branching events, are identified. A detailed development of these ideas is given, using standard transcriptions from the microscopic formulation of non-equilibrium reaction-diffusion or birth-death processes. These give the relations between stochastic rate matrices, the matrix elements of the corresponding evolution operators representing them, and the integral kernels needed to implement these as path integrals. The 'free' theory (without branching) is solved, and the correct trilinear 'interaction' terms (representing branching events) are presented. The full model is developed in perturbation theory via the derivation of explicit Feynman rules which establish that the probabilities (pattern frequencies of leaf colourations) arising as matrix elements of the time evolution operator are identical with those computed via the standard analysis. Simple examples (phylogenetic trees with two or three leaves), are discussed in detail. Further implications for the work are briefly considered including the role of time reparametrization covariance

  10. Stability and perturbations of countable Markov maps

    Science.gov (United States)

    Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas

    2018-04-01

    Let T and T_ε, ε > 0, be countable Markov maps such that the branches of T_ε converge pointwise to the branches of T as ε → 0. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent, etc.) of the topological conjugacy between T_ε and T when ε → 0. This is a well-understood problem for maps with finitely many branches, and the quantities are stable for small ε, that is, they converge to their expected values as ε → 0. For the infinite-branch case their stability might be expected to fail, but we prove that even in the infinite-branch case the quantity is stable under some natural regularity assumptions on T_ε and T (under which, for instance, the Hölder exponent of the conjugacy fails to be stable). Our assumptions apply, for example, in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter α. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite-branch case. Dedicated to the memory of Bernd O Stratmann.

  11. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    International Nuclear Information System (INIS)

    Lacasa, Lucas

    2014-01-01

    Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein–Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments. (paper)
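    The horizontal visibility graph itself is easy to construct by brute force, which makes the degree distribution directly measurable for a sampled process. A quadratic-time sketch for illustration (the known asymptotic mean degree for an uncorrelated i.i.d. series is 4):

```python
import random

def hvg_degrees(series):
    """Degrees in the horizontal visibility graph: i and j (i < j) are linked
    iff every intermediate value lies strictly below min(series[i], series[j])."""
    n, deg = len(series), [0] * len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

rng = random.Random(0)
series = [rng.random() for _ in range(500)]           # uncorrelated random process
mean_degree = sum(hvg_degrees(series)) / len(series)  # theory: tends to 4
```

    Comparing such empirical degree histograms against the analytic distributions is exactly the kind of numerical experiment the paper reports.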

  12. Color identification and fuzzy reasoning based monitoring and controlling of fermentation process of branched chain amino acid

    Science.gov (United States)

    Ma, Lei; Wang, Yizhong; Xu, Qingyang; Huang, Huafang; Zhang, Rui; Chen, Ning

    2009-11-01

    The main production method of branched-chain amino acids (BCAA) is microbial fermentation. In this paper, to monitor and control the fermentation process of BCAA, especially its logarithmic phase, parameters such as the color of the fermentation broth, culture temperature, pH, agitation speed, dissolved oxygen, airflow rate, pressure, optical density, and residual glucose are measured and/or controlled and/or adjusted. The color of the fermentation broth is measured using the HSI color model and a BP neural network. The network's input is the histograms of hue (H) and saturation (S), and its output is the color description. Fermentation process parameters are adjusted using fuzzy reasoning, which is performed by inference rules. According to the practical situation of the BCAA fermentation process, all parameters are divided into four grades, and different fuzzy rules are established.

  13. Markov set-chains

    CERN Document Server

    Hartfiel, Darald J

    1998-01-01

    In this study extending classical Markov chain theory to handle fluctuating transition matrices, the author develops a theory of Markov set-chains and provides numerous examples showing how that theory can be applied. Chapters are concluded with a discussion of related research. Readers who can benefit from this monograph are those interested in, or involved with, systems whose data is imprecise or that fluctuate with time. A background equivalent to a course in linear algebra and one in probability theory should be sufficient.

  14. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  15. A framework for analysis of abortive colony size distributions using a model of branching processes in irradiated normal human fibroblasts.

    Science.gov (United States)

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Ouchi, Noriyuki B; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2013-01-01

    Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to the linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit performed better in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early and late phases; the simulated surviving fraction was sensitive to the excess probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.
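    The Monte Carlo colony-expansion approach can be sketched as a branching process in which each cell either undergoes RCD or divides at each generation. The death probability and generation count below are illustrative values, not the estimates reported above.

```python
import random

def grow_colony(rcd_prob, generations, rng):
    """One cell grows for a fixed number of generations; each cell either dies
    (reproductive cell death) with probability rcd_prob or divides into two."""
    cells = 1
    for _ in range(generations):
        survivors = sum(1 for _ in range(cells) if rng.random() >= rcd_prob)
        cells = 2 * survivors
        if cells == 0:
            break
    return cells

rng = random.Random(42)
sizes = [grow_colony(0.3, 6, rng) for _ in range(2000)]   # illustrative parameters
abortive_frac = sum(s <= 15 for s in sizes) / len(sizes)  # colonies of <= 15 cells
```

    Tabulating the sizes of the small colonies reproduces the kind of abortive colony size distribution analysed in the paper.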

  16. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jaco; Stoelinga, Mariëlle Ida Antoinette

    2016-01-01

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. As expected, the state space explosion threatens the analysability of these models. We therefore introduce confluence reduction for Markov automata, a powerful reduction technique.

  17. Undecidability of model-checking branching-time properties of stateless probabilistic pushdown process

    OpenAIRE

    Lin, T.

    2014-01-01

    In this paper, we settle a problem in probabilistic verification of infinite-state processes (specifically, probabilistic pushdown processes). We show that model checking stateless probabilistic pushdown processes (pBPA) against probabilistic computational tree logic (PCTL) is undecidable.

  18. Study on the Evolution of Weights on the Market of Competitive Products using Markov Chains

    Directory of Open Access Journals (Sweden)

    Daniel Mihai Amariei

    2016-10-01

    This paper aims to apply Markov chains, through the Markov Process module of the WinQSB software package, to establish the evolution of the market shares of five brands of athletic shoes.

  19. Improving the capability of an integrated CA-Markov model to simulate spatio-temporal urban growth trends using an Analytical Hierarchy Process and Frequency Ratio

    Science.gov (United States)

    Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan

    2017-07-01

    The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy between the traditional and hybrid models. Various physical, socio-economic, utilities, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial area, distance to educational area, distance to residential area, distance to industrial area, distance to roads, distance to highway, distance to railway, distance to power line, distance to stream, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps of 2020 and 2030 were created. The validation findings confirm that the integration of the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process have resulted in the improved simulation capability of the CA-MC model. This study has provided a novel approach for improving the CA-MC model based on FR, which will provide powerful support to planners and decision-makers in the development of future sustainable urban planning.
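    The Markov chain component of a CA-MC model projects aggregate land-cover shares with a first-order transition matrix (the CA component then allocates that change spatially). A sketch with a hypothetical three-class matrix, not the Seremban calibration:

```python
def project(shares, P, steps):
    """Propagate aggregate land-cover shares with a first-order Markov matrix."""
    for _ in range(steps):
        shares = [sum(shares[i] * P[i][j] for i in range(len(P)))
                  for j in range(len(P))]
    return shares

# Hypothetical classes (urban, agriculture, forest) and one 10-year transition
# matrix; rows sum to 1.
P = [[0.98, 0.01, 0.01],
     [0.05, 0.90, 0.05],
     [0.03, 0.02, 0.95]]
shares_2010 = [0.20, 0.45, 0.35]
shares_2030 = project(shares_2010, P, 2)   # two decadal steps: 2010 -> 2030
```

    Because the hypothetical urban row is nearly absorbing, the urban share grows monotonically, which is the aggregate trend the spatial maps then distribute.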

  20. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    be obtained as a limiting value of a sample path of a suitable ... makes a mathematical model of chance and deals with the problem by .... Is the Markov chain aperiodic? It is! Here is how you can see it. Suppose that after you do the cut, you hold the top half in your right hand, and the bottom half in your left. Then there.

  1. Perturbed Markov chains

    OpenAIRE

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
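    The sensitivity being studied can be probed numerically: perturb one row of the transition matrix by ε and compare stationary distributions. A sketch for a two-state chain (the paper's closeness relation and graph-theoretic machinery are not reproduced here):

```python
def stationary(P, iters=100000, tol=1e-12):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
eps = 1e-3
P_eps = [[0.9 - eps, 0.1 + eps],   # perturb the first row only
         [0.5, 0.5]]
pi, pi_eps = stationary(P), stationary(P_eps)
sensitivity = max(abs(a - b) for a, b in zip(pi, pi_eps)) / eps
```

    Here the unperturbed stationary distribution is (5/6, 1/6), and the finite-difference ratio gives a crude numerical estimate of the sensitivity the paper bounds analytically.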

  2. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo – Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25–34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034

  3. Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rissanen, Jorma

    1996-01-01

    Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...

  4. Markov chain of distances between parked cars

    International Nuclear Information System (INIS)

    Seba, Petr

    2008-01-01

    We describe the distribution of distances between parked cars as a solution of certain Markov processes and show that its solution is obtained with the help of a distributional fixed point equation. Under certain conditions the process is solved explicitly. The resulting probability density is compared with the actual parking data measured in the city. (fast track communication)

  5. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of drug development in the pharmaceutical industry, and continued research in statistical methodology within… or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug are a broader area, since drugs may affect… for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically, the Markov models considered for sleep states are closely related to the PK models based on SDEs, as both models share the Markov property. When the models…

  6. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D) and three-dimensional (3D) nanoarchitectures. Our synthesis technique includes the use of an aqueous solution route and post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO nanorod arrays of 1D rods, 2D crosses, and 3D tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates, and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate and the reactor design are critical for the formation of nanorod arrays with thin diameters and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and micro-Raman spectroscopy have been used to characterize the samples.

  7. Recursive smoothers for hidden discrete-time Markov chains

    Directory of Open Access Journals (Sweden)

    Lakhdar Aggoun

    2005-01-01

    We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation maximization (EM) algorithm.
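    For the simpler case of a chain observed through noise, smoothed state estimates are classically computed by forward-backward recursions; the paper's recursive smoothers refine this idea. A generic discrete-HMM sketch with hypothetical transition and emission matrices:

```python
def smooth(obs, A, B, pi):
    """Forward-backward smoothing: P(state at t | all observations) for a
    discrete HMM with transition matrix A, emission matrix B, initial law pi."""
    n = len(A)
    alpha = []
    for t, o in enumerate(obs):                      # scaled forward pass
        a = ([pi[i] * B[i][o] for i in range(n)] if t == 0 else
             [sum(alpha[-1][j] * A[j][i] for j in range(n)) * B[i][o]
              for i in range(n)])
        z = sum(a)
        alpha.append([x / z for x in a])
    beta = [[1.0] * n]
    for o in reversed(obs[1:]):                      # scaled backward pass
        b = [sum(A[i][j] * B[j][o] * beta[0][j] for j in range(n))
             for i in range(n)]
        z = sum(b)
        beta.insert(0, [x / z for x in b])
    gammas = []
    for a, b in zip(alpha, beta):                    # combine and renormalize
        w = [x * y for x, y in zip(a, b)]
        z = sum(w)
        gammas.append([x / z for x in w])
    return gammas

A = [[0.95, 0.05], [0.10, 0.90]]   # hypothetical hidden-state transitions
B = [[0.80, 0.20], [0.30, 0.70]]   # hypothetical emission probabilities
g = smooth([0, 0, 1, 1, 1], A, B, [0.5, 0.5])
```

    In an EM loop, these smoothed probabilities would supply the expected sufficient statistics used to re-estimate A and B.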

  8. A Markov decision model for optimising economic production lot size ...

    African Journals Online (AJOL)

    In this Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...

  9. Visual control as a key factor in a production process of a company from automotive branch

    Directory of Open Access Journals (Sweden)

    Stanisław Borkowski

    2013-04-01

    This article presents a theoretical basis for one type of control in enterprises: visual control. It presents the meaning of visual control in the Toyota Production System, and uses BOST research as a tool to measure, among other things, the importance of visual control in production companies. The level of importance of visual control as one of the production process elements in the analysed company was indicated. The use of visual control is a main factor in the production process of the analysed company, continuously helping employees to check whether the process deviates from the standard. The characteristic progression of production process elements was indicated: the SW factor (the use of visual control) took third place, the PE factor (interrupting production when a quality problem is detected) turned out to be the most important, while the least important was the EU factor (delegating authority downward). The main tools for this evaluation were an innovative BOST survey (Toyota's management principles in questions) and, in particular, the Pareto-Lorenz diagram, radar graph and series of importance as graphical interpretation tools, used to present the importance of each factor in relation to individual assessments.

  10. PROCESS PERFORMANCE LASER CUTTING THROUGH PRACTICE DRY BRANCH IN METAL-MECHANIC

    Directory of Open Access Journals (Sweden)

    Deivis Zismann

    2015-12-01

    The quest for optimization and product quality has caused many organizations to eliminate the inefficiencies of their production processes, to reduce costs and increase profitability so that they can ensure their survival in the current economic scenario. Thus, it is necessary to use methods and techniques that help in obtaining better results. Minimizing waste and promoting overall product quality has become one of the main goals of organizations. The concept of Lean Manufacturing, focused on eliminating waste, was reviewed bibliographically and served as the basis for this study, which, through action research, aimed to apply lean practices to improve the performance of the laser cutting process in an industry of the metal-mechanic sector. The results show that identifying the main sources of waste and constantly seeking their elimination brought productivity advantages for the company, by reducing machine processing times and minimizing production costs. With this, the company began to produce more and to improve its processes through the proper use of available resources.

  11. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts

    International Nuclear Information System (INIS)

    Sakashita, Tetsuya; Kobayashi, Yasuhiko; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Saito, Kimiaki

    2014-01-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/μm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log–log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to the determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE. (author)

  12. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
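For a two-state chain the bisection idea is easy to sketch, since the endpoint-conditioned midpoint distribution is available in closed form from the transition probabilities; the rates a and b below are illustrative.

```python
import math
import random

def ptrans(i, j, t, a=1.0, b=2.0):
    """Closed-form transition probability for the two-state chain with
    generator Q = [[-a, a], [b, -b]] (rates a, b are illustrative)."""
    s = a + b
    p01 = (a / s) * (1.0 - math.exp(-s * t))
    p10 = (b / s) * (1.0 - math.exp(-s * t))
    row = [[1.0 - p01, p01], [p10, 1.0 - p10]]
    return row[i][j]

def bisect_bridge(x0, xT, T, depth, rng, path=None, t0=0.0):
    """Recursively sample midpoint states of the chain conditioned on
    X(t0) = x0 and X(t0+T) = xT, down to the given dyadic depth."""
    if path is None:
        path = {t0: x0, t0 + T: xT}
    if depth == 0:
        return path
    tm = t0 + T / 2.0
    # midpoint law: P(X(tm)=k | endpoints) ∝ P_{x0,k}(T/2) * P_{k,xT}(T/2)
    w = [ptrans(x0, k, T / 2.0) * ptrans(k, xT, T / 2.0) for k in (0, 1)]
    xm = 0 if rng.random() * (w[0] + w[1]) < w[0] else 1
    path[tm] = xm
    bisect_bridge(x0, xm, T / 2.0, depth - 1, rng, path, t0)
    bisect_bridge(xm, xT, T / 2.0, depth - 1, rng, path, tm)
    return path

rng = random.Random(7)
path = bisect_bridge(0, 1, T=1.0, depth=3, rng=rng)
```

Each level of recursion halves the interval, so depth d yields the bridge on a dyadic grid of 2^d + 1 points.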

  13. Characterization, treatment and utilization of rice husk ash in production processes of the industrial branch

    International Nuclear Information System (INIS)

    Stracke, Marcelo Paulo; Schmidt, Julia Isabel; Steffen, Ana Cristina; Sokolovicz, Boris; Kieckow, Flavio

    2016-01-01

    Rice husk ash (CCA) is a black powder rich in silica (contents above 90%) with many industrial applications. The ash was obtained from a rice-processing industry in the state of Rio Grande do Sul. The purpose of this work is to characterize the rice husk ash and to eliminate the residual carbon by methods such as acid leaching; white ash is then obtained by a chemical process followed by heating between 600 and 800 °C. The results were analyzed by XRD, TGA and DSC. The XRD analysis showed that the samples present high levels of silica in the crystalline forms of quartz, cristobalite and tridymite. The white ash was obtained with high purity and gave a good result in the manufacture of paints. (author)

  14. Equivalent Markov-Renewal Processes.

    Science.gov (United States)

    1979-12-01


  15. Pairwise Choice Markov Chains

    OpenAIRE

    Ragain, Stephen; Ugander, Johan

    2016-01-01

    As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...

  16. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...
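The decomposition can be sketched concretely: for a chain with transition matrix P and a function f, M_n = f(X_n) - f(X_0) - sum_{k<n} (Pf - f)(X_k) is the martingale component, with the sum being the predictable drift. The 3-state chain and the function f below are illustrative, not from the paper.

```python
import random

# Hypothetical 3-state chain and test function f (illustrative numbers)
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
f = [1.0, 4.0, 9.0]

# (Pf)(i) = E[f(X_{k+1}) | X_k = i]
Pf = [sum(P[i][j] * f[j] for j in range(3)) for i in range(3)]

def sample_path(n, x0, rng):
    xs = [x0]
    for _ in range(n):
        u, i, acc = rng.random(), xs[-1], 0.0
        for j in range(3):
            acc += P[i][j]
            if u < acc:
                xs.append(j)
                break
        else:
            xs.append(2)
    return xs

def martingale_part(xs):
    """M_n = f(X_n) - f(X_0) - sum_{k<n} (Pf - f)(X_k): removing the
    predictable drift from f(X_n) leaves a mean-zero martingale."""
    drift = sum(Pf[x] - f[x] for x in xs[:-1])
    return f[xs[-1]] - f[xs[0]] - drift

rng = random.Random(0)
mean_M = sum(martingale_part(sample_path(20, 0, rng))
             for _ in range(4000)) / 4000
```

Since M_0 = 0 and M is a martingale, the Monte Carlo average of M_20 should be close to zero.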

  17. Bisimulation and Simulation Relations for Markov Chains

    NARCIS (Netherlands)

    Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.

    2006-01-01

    Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.

  18. Fracture Mechanical Markov Chain Crack Growth Model

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    1991-01-01

    propagation process can be described by a discrete space Markov theory. The model is applicable to deterministic as well as to random loading. Once the model parameters for a given material have been determined, the results can be used for any structure as soon as the geometrical function is known....

  19. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  20. Investigation of the s-process branch-point nucleus {sup 86}Rb at HIγS

    Energy Technology Data Exchange (ETDEWEB)

    Erbacher, Philipp; Glorius, Jan; Reifarth, Rene; Sonnabend, Kerstin [Goethe Universitaet Frankfurt am Main (Germany); Isaak, Johann; Loeher, Bastian; Savran, Deniz [GSI Helmholzzentrum fuer Schwerionenforschung (Germany); Tornow, Werner [Duke University (United States)

    2016-07-01

    The branch-point nucleus {sup 86}Rb determines the isotopic abundance ratio {sup 86}Sr/{sup 87}Sr in s-process nucleosynthesis. Thus, stellar parameters such as temperature and neutron density and their evolution in time as simulated by modern s-process network calculations can be constrained by a comparison of the calculated isotopic ratio with the one observed in SiC meteoritic grains. To this end, the radiative neutron-capture cross section of the unstable isotope {sup 86}Rb has to be known with sufficient accuracy. Since the short half-life of {sup 86}Rb prohibits the direct measurement, the nuclear-physics input to a calculation of the cross section has to be measured. For this reason, the γ-ray strength function of {sup 87}Rb was measured using the γ{sup 3} setup at the High Intensity γ-ray Source facility at TUNL in Durham, USA. First experimental results are presented.

  1. Non-stationary Markov chains

    OpenAIRE

    Mallak, Saed

    1996-01-01

    Ankara : Department of Mathematics and Institute of Engineering and Sciences of Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaf 29). In this work, we studied the ergodicity of non-stationary Markov chains. We gave several examples with different cases. We proved that given a sequence of Markov chains such that the limit of this sequence is an ergodic Markov chain, then the limit of the combination ...

  2. Musical Markov Chains

    Science.gov (United States)

    Volchenkov, Dima; Dawin, Jean René

    A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and characterize a composer.
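The transition-matrix encoding behind such a musical dice game can be sketched with a toy melody standing in for a MIDI-derived note sequence (the melody and note alphabet are illustrative):

```python
import random
from collections import defaultdict

# Toy note sequence standing in for a MIDI-derived melody (illustrative)
melody = list("CDECDEGFEDCDEC")

# First-order transition counts and probabilities
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(melody, melody[1:]):
    counts[a][b] += 1

trans = {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
         for a, nxt in counts.items()}

def generate(start, length, rng):
    """Random walk on the note transition matrix (a 'musical dice game')."""
    out = [start]
    for _ in range(length - 1):
        nxt = trans[out[-1]]
        u, acc = rng.random(), 0.0
        for note, p in nxt.items():
            acc += p
            if u < acc:
                out.append(note)
                break
        else:
            out.append(next(iter(nxt)))
    return "".join(out)

piece = generate("C", 12, random.Random(3))
```

First-passage times and entropies of the kind the abstract studies can then be read off the estimated matrix `trans`.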

  3. On Weak Markov's Principle

    DEFF Research Database (Denmark)

    Kohlenbach, Ulrich Wilhelm

    2002-01-01

    We show that the so-called weak Markov's principle (WMP), which states that every pseudo-positive real number is positive, is underivable in E-HA + AC. Since this system allows one to formalize (at least large parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within the framework of Bishop-style mathematics (which has been open for about 20 years). The underivability even holds if the ineffective schema of full comprehension (in all types) for negated formulas (in particular for -free formulas) is added, which allows one to derive the law of excluded middle...

  4. Theoretical and experimental analysis of dynamic processes of pipe branch for supply water to the Pelton turbine

    Directory of Open Access Journals (Sweden)

    Jovanović Miomir Lj.

    2012-01-01

    Full Text Available The paper presents the results of the analysis of pipe branch A6, which feeds the integrated Pelton action turbines of the hydropower plant ”Perućica”. The analysis was conducted experimentally (tensometrically) and numerically. The basis of the experimental research is a numerical finite-element analysis of pipe branch A6 in pipeline C3. The pipe branch research was conducted in order to set up the experiment and to determine extreme stress states. The analysis was used to determine the stress state of a geometrically complex assembly, in a level of detail not reached before, even in the design phase. The actual states of the pipe branch body were established, along with the possible occurrence of water hammer accompanied by hydraulic oscillations. This provides better energy efficiency of the turbine devices. [Project of the Ministry of Science of the Republic of Serbia, no. TR35049 and no. TR 33040]

  5. Model for Studying Branching Processes, Multiplicity Distributions, and Non-Poissonian Fluctuations in Heavy-Ion Collisions

    International Nuclear Information System (INIS)

    Mekjian, A. Z.

    2001-01-01

    A change is made in a statistical framework by introducing a set of variables called ancestral or stochastic. This leads to an underlying dynamics based on branching laws, lines of descent in a hierarchical topology, period doublings, cascades, and clans. Above a certain branching probability, a percolative feature suddenly appears. Power laws emerge, and cascade points arise and end at the golden mean (√5 − 1)/2

  6. Noise can speed convergence in Markov chains.

    Science.gov (United States)

    Franzke, Brandon; Kosko, Bart

    2011-10-01

    A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.
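The Ehrenfest diffusion model mentioned above lends itself to a small numerical check of convergence. The sketch below (with an illustrative N = 10 balls) uses the lazy variant of the chain, which holds with probability 1/2 to avoid the chain's period-2 oscillation, and tracks the total-variation distance of the state density to the binomial equilibrium; it measures baseline convergence only, and does not reproduce the paper's noise-injection step.

```python
import math

N = 10  # number of balls (illustrative size)

def step(pi):
    """One update of the state density under the lazy Ehrenfest chain:
    hold w.p. 1/2, else move a uniformly chosen ball to the other urn."""
    out = [0.0] * (N + 1)
    for i, p in enumerate(pi):
        out[i] += 0.5 * p
        if i > 0:
            out[i - 1] += 0.5 * p * (i / N)
        if i < N:
            out[i + 1] += 0.5 * p * ((N - i) / N)
    return out

# Stationary distribution is Binomial(N, 1/2)
stationary = [math.comb(N, i) / 2 ** N for i in range(N + 1)]

def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

pi = [0.0] * (N + 1)
pi[0] = 1.0  # start with all balls in one urn
dists = []
for _ in range(120):
    pi = step(pi)
    dists.append(tv(pi, stationary))
```

Total-variation distance to stationarity is non-increasing for any Markov chain, so `dists` decays monotonically towards zero.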

  7. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding, as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)—an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots.

  8. Flux through a Markov chain

    International Nuclear Information System (INIS)

    Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel

    2016-01-01

    Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modeling of open systems whose dynamics has a Markov property.
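The mass-flux dynamics described here reduces to a simple linear iteration: inject mass at a source state each step, then redistribute it by the transition matrix, letting absorbing states accumulate it. A sketch with a hypothetical 4-state chain (states 0 and 1 transient, 2 and 3 absorbing; all probabilities illustrative):

```python
# Hypothetical 4-state chain: states 0, 1 transient; 2, 3 absorbing
P = [[0.2, 0.5, 0.3, 0.0],
     [0.3, 0.1, 0.2, 0.4],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

def run(source_state=0, inflow=1.0, steps=200):
    """Inject `inflow` units of mass into `source_state` every step and
    redistribute all mass by the transition matrix; absorbing states
    accumulate whatever reaches them."""
    mass = [0.0] * 4
    injected = 0.0
    for _ in range(steps):
        mass[source_state] += inflow
        injected += inflow
        mass = [sum(mass[i] * P[i][j] for i in range(4)) for j in range(4)]
    return mass, injected

mass, injected = run()
```

Because the rows of P sum to one, total mass is conserved: after many steps the transient states hold a constant in-transit amount and almost all injected mass has reached the absorbing states.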

  9. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
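The (totally) asymmetric simple exclusion processes named above are straightforward to simulate directly. Below is a minimal TASEP sketch on a ring with random sequential updates (system size and density are illustrative); it is the plain reversible-update baseline, not the event-chain or lifted variants studied in the paper.

```python
import random

def tasep_sweep(occ, rng):
    """One Monte Carlo sweep of the totally asymmetric simple exclusion
    process on a ring: pick a random site; if it holds a particle and
    the site to its right is empty, the particle hops right."""
    L = len(occ)
    for _ in range(L):
        i = rng.randrange(L)
        j = (i + 1) % L
        if occ[i] == 1 and occ[j] == 0:
            occ[i], occ[j] = 0, 1
    return occ

rng = random.Random(11)
L, n_part = 20, 10
occ = [1] * n_part + [0] * (L - n_part)
rng.shuffle(occ)
for _ in range(100):
    tasep_sweep(occ, rng)
```

Exclusion guarantees at most one particle per site, and hopping conserves the particle number, which is what the test checks.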

  10. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    Science.gov (United States)

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects (such as mutations in evolutionary dynamics and a random exploration of choices in social systems), including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
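The stationary distribution that the hierarchy of approximations targets can, for a small chain, be computed directly. A minimal power-iteration sketch with an illustrative 3-state ergodic chain:

```python
def stationary(P, iters=500):
    """Power-iterate a row-stochastic matrix to its stationary
    distribution (assumes the chain is ergodic)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative 3-state ergodic chain
P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.5, 0.3]]
pi = stationary(P)
```

The fixed point satisfies pi = pi P; for the large state spaces the abstract targets, this exact computation is what becomes prohibitive and motivates the hierarchy of approximations.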

  11. Asymptotic evolution of quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Novotny, Jaroslav [FNSPE, CTU in Prague, 115 19 Praha 1 - Stare Mesto (Czech Republic); Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)

    2012-07-01

    Iterated quantum operations, so-called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium, or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks and different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate that there is always a (typically low-dimensional) vector subspace of so-called attractors, on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized, and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors, which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points and the asymptotic evolution of random quantum operations.
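The attractor-subspace picture can be illustrated with the simplest possible example. Assuming a single-qubit phase-flip channel (an illustrative channel, not a model from the abstract), the diagonal density matrices form the attractor subspace, and repeated application of the channel projects any state onto it:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
p = 0.25  # flip probability (illustrative)

def channel(rho):
    """Phase-flip channel rho -> (1-p) rho + p Z rho Z: diagonal entries
    (the attractor subspace) are fixed, off-diagonal entries shrink by
    a factor (1 - 2p) per application."""
    return (1 - p) * rho + p * Z @ rho @ Z

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
for _ in range(50):
    rho = channel(rho)
```

After 50 iterations the coherences have decayed by (1 − 2p)^50 and the state has effectively reached the attractor subspace, while trace and populations are preserved exactly.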

  12. Markov Chains For Testing Redundant Software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  13. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    OpenAIRE

    Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...

  14. Reliability estimation of semi-Markov systems: a case study

    International Nuclear Information System (INIS)

    Ouhbi, Brahim; Limnios, Nikolaos

    1997-01-01

    In this article, we are concerned with the estimation of the reliability and the availability of a turbo-generator rotor using a set of data observed in a real engineering situation, provided by Électricité de France (EDF). The rotor is modeled by a semi-Markov process, which is used to estimate the rotor's reliability and availability. To do this, we present a method for estimating the semi-Markov kernel from censored data

  15. Quadratic Variation by Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume

    We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...

  16. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Systat Software Asia-Pacific. Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes ... In Part 4, we discuss some applications of the Markov ... one can construct the joint probability distribution of.

  17. Nuclear security assessment with Markov model approach

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Terao, Norichika

    2013-01-01

    Nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidences are initiated by malicious and intentional acts, expert judgment and Bayes updating are used to estimate scenario and initiation likelihood, and it is assumed that a Markov model derived from a stochastic process can be applied to the incidence sequence. Both an unauthorized intrusion as a Design Basis Threat (DBT) and a stand-off attack as beyond-DBT are assumed for hypothetical facilities, and the performance of physical protection and the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important in responding to beyond-DBT incidences. (author)

  18. Utility evaluations for Markov states of lung cancer for PET-based disease management

    International Nuclear Information System (INIS)

    Papatheofanis, F.J.

    2000-01-01

    Utilities for the health outcome states (Markov states) of non-small cell lung carcinoma (NSCLC) should be measured to evaluate management options for patients, because patients are key participants in the process of care, and their assessment of the diagnostic and therapeutic value of the options presented to them ultimately impacts their net health outcomes. This investigation sought to measure utilities for stage-dependent outcome states of NSCLC. Persons (n=23) with suspected NSCLC based on physical findings and computed tomography completed a short utilities survey. Utility valuations were obtained according to severity of morbidity and varied considerably. Respondents rated these health states according to accuracy measures for 18F-fluorodeoxyglucose (18FDG) positron emission tomography (PET) imaging and mediastinoscopy. The results demonstrate that stage-dependent morbidity is an important consideration for patients with NSCLC and should be included in any decision analysis regarding the evaluation or treatment of NSCLC. Respondents valued the quality of information obtained from non-invasive mediastinoscopy comparably. The utilities obtained from this investigation are useful in clinical decision-making based on Markov processes because they provide an initial estimation of utility assessment for 18FDG-based diagnostic evaluation of lung cancer. Consequently, these utilities will be useful in future decision analyses that require patient preference in the assignment of the evaluation of decision options (branches)

  19. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    Directory of Open Access Journals (Sweden)

    A. Valor

    2013-01-01

    Full Text Available The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled after the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
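The abstract's two-stage picture (pit initiation, then stochastic growth) can be caricatured in a few lines: Weibull-distributed induction times stand in for the nonhomogeneous Poisson initiation process, and a Bernoulli depth increment per time step stands in for the Markovian pit growth. All parameters below are illustrative, not fitted values.

```python
import math
import random

def simulate_max_pit(T=100, n_pits=5, growth_p=0.3,
                     weib_k=2.0, weib_lam=30.0, rng=None):
    """Sketch of the two-stage pitting model: each pit's induction time
    is drawn from a Weibull law; after initiation, the pit deepens by
    one unit per time step with probability growth_p. Returns the
    maximum pit depth over n_pits pits at time T."""
    rng = rng or random.Random()
    depths = []
    for _ in range(n_pits):
        # Weibull induction time via inverse transform sampling
        t0 = weib_lam * (-math.log(1.0 - rng.random())) ** (1.0 / weib_k)
        depth = sum(1 for t in range(int(t0), T) if rng.random() < growth_p)
        depths.append(depth)
    return max(depths)

maxima = [simulate_max_pit(rng=random.Random(s)) for s in range(200)]
```

Collecting many such maxima gives the empirical distribution of maximum pit depths, which is the quantity the abstract analyses with extreme value statistics.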

  20. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, an NDVI time series forecasting model has been developed based on the use of a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space; the process undergoes transitions from one state to another in the state space with some probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
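A minimal sketch of the discretise-and-estimate step behind such a forecast model, using a toy NDVI-like series and equal-width bins (series values, bin edges, and state count are all illustrative; the paper itself works with a continuous-state chain):

```python
# Toy NDVI-like series (illustrative values in [0, 1])
series = [0.31, 0.35, 0.42, 0.55, 0.61, 0.58, 0.49, 0.38, 0.33,
          0.36, 0.44, 0.57, 0.63, 0.59, 0.47, 0.39]

K = 4  # number of states after discretisation

def to_state(x, lo=0.3, hi=0.65):
    """Map a continuous NDVI value to one of K equal-width bins."""
    s = int((x - lo) / (hi - lo) * K)
    return min(max(s, 0), K - 1)

# Estimate first-order transition counts from consecutive observations
states = [to_state(x) for x in series]
counts = [[0] * K for _ in range(K)]
for a, b in zip(states, states[1:]):
    counts[a][b] += 1

def forecast(state):
    """Most probable next state under the estimated transition matrix."""
    row = counts[state]
    return max(range(K), key=lambda j: row[j])

nxt = forecast(states[-1])
```

Normalising each row of `counts` gives the estimated transition matrix; the one-step forecast is simply the most probable successor of the current state.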

  1. Prognostics for Steam Generator Tube Rupture using Markov Chain model

    International Nuclear Information System (INIS)

    Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin

    2016-01-01

    This paper describes a prognostics method for evaluating and forecasting ageing effects and demonstrates the procedure of prognostics for the Steam Generator Tube Rupture (SGTR) accident. The authors propose a data-driven method, MCMC (Markov Chain Monte Carlo), which is preferred to physical-model methods in terms of flexibility and availability. Degradation data are represented as the growth of burst probability over time. A Markov chain model operates on transition probabilities between states, and the states must be discrete; therefore, the burst probability, a continuous variable, has to be discretized to apply the Markov chain model to the degradation data. The Markov chain model, one of several prognostics methods, is described, and a pilot demonstration for an SGTR accident is performed as a case study. The Markov chain model is strong since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there must be a loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be fully reflected in the prediction. This should be noted as a limitation of discrete models. We are now studying other prognostics methods, such as the GPM (General Path Model), which is also a data-driven method, as well as the particle filter, which belongs to the physical-model methods, and conducting a comparison analysis
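The forward-propagation step underlying such a degradation prognosis can be sketched as follows; the 4-state chain and its transition probabilities are illustrative placeholders, not plant data.

```python
# Hypothetical 4-state degradation chain; state 3 = rupture (absorbing).
# Transition probabilities are illustrative placeholders, not plant data.
P = [[0.90, 0.10, 0.00, 0.00],
     [0.00, 0.85, 0.15, 0.00],
     [0.00, 0.00, 0.80, 0.20],
     [0.00, 0.00, 0.00, 1.00]]

def propagate(pi, steps):
    """Forward Chapman-Kolmogorov iteration: pi_{t+1} = pi_t P."""
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    return pi

pi0 = [1.0, 0.0, 0.0, 0.0]          # tube starts undegraded
risk10 = propagate(pi0, 10)[3]      # rupture probability after 10 steps
risk30 = propagate(pi0, 30)[3]
```

Because the rupture state is absorbing, the forecast rupture probability grows monotonically with the horizon, which is the quantity a prognosis of this kind reports.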

  2. Tornadoes and related damage costs: statistical modeling with a semi-Markov approach

    OpenAIRE

    Corini, Chiara; D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio; Manca, Raimondo

    2015-01-01

    We propose a statistical approach to tornadoes modeling for predicting and simulating occurrences of tornadoes and accumulated cost distributions over a time interval. This is achieved by modeling the tornadoes intensity, measured with the Fujita scale, as a stochastic process. Since the Fujita scale divides tornadoes intensity into six states, it is possible to model the tornadoes intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reprod...

  3. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov networks based EDAs are reviewed in the book. Hot current researc...

  4. Markov chains and mixing times

    CERN Document Server

    Levin, David A; Wilmer, Elizabeth L

    2009-01-01

    This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r

  5. Markov Models for Handwriting Recognition

    CERN Document Server

    Plotz, Thomas

    2011-01-01

    Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden

  6. U(1) x U(1) x U(1) symmetry of the Kimura 3ST model and phylogenetic branching processes

    International Nuclear Information System (INIS)

    Bashford, J D; Jarvis, P D; Sumner, J G; Steel, M A

    2004-01-01

    An analysis of the Kimura 3ST model of DNA sequence evolution is given on the basis of its continuous Lie symmetries. The rate matrix commutes with a U(1) x U(1) x U(1) phase subgroup of the group GL(4) of 4 x 4 invertible complex matrices acting on a linear space spanned by the four nucleic acid base letters. The diagonal 'branching operator' representing speciation is defined, and shown to intertwine the U(1) x U(1) x U(1) action. Using the intertwining property, a general formula for the probability density on the leaves of a binary tree under the Kimura model is derived, which is shown to be equivalent to established phylogenetic spectral transform methods. (letter to the editor)

  7. Cross-Section Measurements of the Kr86(γ,n) Reaction to Probe the s-Process Branching at Kr85

    Science.gov (United States)

    Raut, R.; Tonchev, A. P.; Rusev, G.; Tornow, W.; Iliadis, C.; Lugaro, M.; Buntain, J.; Goriely, S.; Kelley, J. H.; Schwengner, R.; Banu, A.; Tsoneva, N.

    2013-09-01

    We have carried out photodisintegration cross-section measurements on Kr86 using monoenergetic photon beams ranging from the neutron separation energy, Sn = 9.86 MeV, to 13 MeV. We combine our experimental Kr86(γ,n)Kr85 cross section with results from our recent Kr86(γ,γ') measurement below the neutron separation energy to obtain the complete nuclear dipole response of Kr86. The new experimental information is used to predict the neutron capture cross section of Kr85, an important branching point nucleus on the abundance flow path during s-process nucleosynthesis. Our new and more precise Kr85(n,γ)Kr86 cross section allows us to produce more precise predictions of the Kr86 abundance from s-process models. In particular, we find that the models of the s process in asymptotic giant branch stars of mass < 1.5 M⊙, where the C13 neutron source burns convectively rather than radiatively, represent a possible solution for the highest Kr86:Kr82 ratios observed in meteoritic stardust SiC grains.

  8. Cross-section measurements of the 86Kr(γ,n) reaction to probe the s-process branching at 85Kr.

    Science.gov (United States)

    Raut, R; Tonchev, A P; Rusev, G; Tornow, W; Iliadis, C; Lugaro, M; Buntain, J; Goriely, S; Kelley, J H; Schwengner, R; Banu, A; Tsoneva, N

    2013-09-13

    We have carried out photodisintegration cross-section measurements on 86Kr using monoenergetic photon beams ranging from the neutron separation energy, S(n) = 9.86  MeV, to 13 MeV. We combine our experimental 86Kr(γ,n)85Kr cross section with results from our recent 86Kr(γ,γ') measurement below the neutron separation energy to obtain the complete nuclear dipole response of 86Kr. The new experimental information is used to predict the neutron capture cross section of 85Kr, an important branching point nucleus on the abundance flow path during s-process nucleosynthesis. Our new and more precise 85Kr(n,γ)86Kr cross section allows us to produce more precise predictions of the 86Kr abundance from s-process models. In particular, we find that the models of the s process in asymptotic giant branch stars of mass <1.5M⊙, where the 13C neutron source burns convectively rather than radiatively, represent a possible solution for the highest 86Kr:82Kr ratios observed in meteoritic stardust SiC grains.

  9. Transportation and concentration inequalities for bifurcating Markov chains

    DEFF Research Database (Denmark)

    Penda, S. Valère Bitseki; Escobar-Bach, Mikael; Guillin, Arnaud

    2017-01-01

    We investigate the transportation inequality for bifurcating Markov chains, which are a class of processes indexed by a regular binary tree. Fitting well models like cell growth, where each individual gives birth to exactly two offspring, we use transportation inequalities to provide useful...... concentration inequalities. We also study deviation inequalities for the empirical means under relaxed assumptions on the Wasserstein contraction for the Markov kernels. Applications to bifurcating nonlinear autoregressive processes are considered for point-wise estimates of the non-linear autoregressive...

  10. Consistency and refinement for Interval Markov Chains

    DEFF Research Database (Denmark)

    Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel

    2012-01-01

    Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...

  11. A Markov reward model checker

    NARCIS (Netherlands)

    Katoen, Joost P.; Maneesh Khattri, M.; Zapreev, I.S.; Zapreev, I.S.

    2005-01-01

    This short tool paper introduces MRMC, a model checker for discrete-time and continuous-time Markov reward models. It supports reward extensions of PCTL and CSL, and allows for the automated verification of properties concerning long-run and instantaneous rewards as well as cumulative rewards. In

  12. Adaptive Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1996-01-01

    Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding....

  13. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
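
The clustering and branching structure that the second approach exploits also gives a direct way to simulate a Hawkes process. The sketch below is an illustration under stated assumptions, not the paper's code: it simulates a Hawkes process with an exponential excitation kernel via its cluster representation, where immigrants arrive as a Poisson process and each event spawns a Poisson(alpha/beta) number of offspring with Exponential(beta) delays.

```python
import random
import math

def _poisson(lam, rng):
    # Knuth's method; fine for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Simulate a Hawkes process on [0, T] via its cluster/branching
    representation: immigrants arrive as a Poisson(mu) process, and each
    event independently produces Poisson(alpha/beta) offspring whose
    delays are Exponential(beta). Requires alpha < beta (subcriticality)."""
    events = []
    t = 0.0
    while True:                        # background (immigrant) events
        t += rng.expovariate(mu)
        if t > T:
            break
        events.append(t)
    queue = list(events)               # generate offspring clusters
    while queue:
        parent = queue.pop()
        for _ in range(_poisson(alpha / beta, rng)):
            child = parent + rng.expovariate(beta)
            if child <= T:
                events.append(child)
                queue.append(child)
    return sorted(events)

rng = random.Random(42)
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=200.0, rng=rng)
# Expected count is roughly mu * T / (1 - alpha/beta).
```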

  14. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....

  15. Markov chain modelling of pitting corrosion in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)

    2009-09-15

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.
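
As a rough illustration of this model class (not the paper's calibrated model), the sketch below simulates a non-homogeneous linear pure-birth process for discrete pit depth and checks the simulated mean against the closed-form mean implied by Kolmogorov's forward equations; the intensity function is hypothetical.

```python
import random
import math

def simulate_pit_depth(rate, t_end, rng, dt=0.005):
    """Non-homogeneous linear pure-birth process: from state n the chain
    jumps to n + 1 with intensity n * rate(t). States index discrete pit
    depths; the jump is approximated by a small Bernoulli time step."""
    n, t = 1, 0.0
    while t < t_end:
        if rng.random() < n * rate(t) * dt:
            n += 1
        t += dt
    return n

# Hypothetical time-dependent intensity: pit growth slows with age.
rate = lambda t: 0.5 / (1.0 + t)

rng = random.Random(0)
samples = [simulate_pit_depth(rate, 4.0, rng) for _ in range(500)]
mean_depth = sum(samples) / len(samples)

# Closed form: started in state 1, the mean at time t is exp(Lambda(t)),
# where Lambda(t) is the integrated rate, here 0.5 * ln(1 + t).
expected_mean = math.exp(0.5 * math.log(1.0 + 4.0))   # sqrt(5), about 2.24
```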

  16. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.

  17. An Application of Graph Theory in Markov Chains Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Pavel Skalny

    2014-01-01

    Full Text Available The paper presents a reliability analysis realized for an industrial company. The aim of the paper is to present the usage of discrete-time Markov chains and the flow-in-network approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue. The method is suitable for many systems occurring in practice where we can easily distinguish a number of distinct states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network, in which the maximal flow corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected amount of manufactured products for the given time period.
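
The combination described above can be sketched in a few lines: compute the chain's stationary distribution and weight each state's production capacity (e.g. a max-flow value obtained separately via Ford-Fulkerson) by the long-run fraction of time spent in that state. The chain, capacities and dimensions below are invented for illustration.

```python
import numpy as np

# Hypothetical 3-state machine-availability chain (rows sum to 1):
# states = (all machines up, one down, two down).
P = np.array([
    [0.90, 0.08, 0.02],
    [0.50, 0.40, 0.10],
    [0.20, 0.30, 0.50],
])

# Per-state production capacity, e.g. the max-flow value computed by
# Ford-Fulkerson on the network that remains in each state (assumed here).
capacity = np.array([100.0, 60.0, 15.0])

# Stationary distribution: solve pi (P - I) = 0 with sum(pi) = 1,
# stacked into one least-squares system.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

expected_production = float(pi @ capacity)   # long-run output per period
```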

  18. Serial Branches

    DEFF Research Database (Denmark)

    Schindler, Christoph; Tamke, Martin; Tabatabai, Ali

    2013-01-01

    Within an 8-day workshop, 19 students of KADK explored the performative potential of naturally angled and forked wood – a desired material until the 19th century, but swept away by industrialization and its standardization of processes and materials. The workshop questioned whether contemporary...

  19. A semi-Markov model for the duration of stay in a non-homogenous ...

    African Journals Online (AJOL)

    The semi-Markov approach to a non-homogenous manpower system is considered. The mean duration of stay in a grade and the total duration of stay in the system are obtained. A renewal type equation is developed and used in deriving the limiting distribution of the semi-Markov process. Empirical estimators of the ...

  20. ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS

    Directory of Open Access Journals (Sweden)

    Mohammad Mirabi

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this research, a Markov analysis of acceptance sampling plans in a single stage and in two stages is proposed, based on the quality of the items inspected. At each stage of this policy, if the number of defective items in a sample of inspected items is more than the upper threshold, the batch is rejected; if it is less than the lower threshold, the batch is accepted. When the number of defective items falls between the upper and lower thresholds, the decision-making process continues: further samples are collected and inspected. The primary objective is to determine the optimal values of the upper and lower thresholds using a Markov process so as to minimise the total cost associated with a batch acceptance policy. A solution method is presented, along with a numerical demonstration of the application of the proposed methodology.

    AFRIKAANSE OPSOMMING: In this research, a Markov analysis is made of acceptance sampling plans carried out in a single stage or in two stages depending on the quality of the items inspected. If the first sample shows that the number of defective items exceeds an upper limit, the batch is rejected. If the first sample shows that the number of defective items is less than a lower limit, the batch is accepted. If the first sample shows that the number of defective items lies in the region between the upper and lower limits, the decision-making process continues and further samples are taken. The primary aim is to determine the optimal values of the upper and lower limits using a Markov process so that the total cost associated with the process can be minimised. A solution is then presented together with a numerical example of the application of the proposed solution.
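
The two-threshold policy can be viewed as a Markov chain with absorbing "accept" and "reject" states, since each stage draws an independent sample. A minimal sketch under that reading (plan parameters are hypothetical, and defect counts are taken as Binomial):

```python
from math import comb

def stage_probs(n, p, lower, upper):
    """One inspection stage: accept if defects < lower, reject if
    defects > upper, otherwise take another sample (continue)."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    p_accept = sum(pmf[:lower])
    p_reject = sum(pmf[upper + 1:])
    return p_accept, p_reject

# Treat the policy as a 3-state chain {continue, accept, reject} with
# absorbing accept/reject states: the overall acceptance probability is
# p_a / (p_a + p_r), and the expected number of stages is 1 / (p_a + p_r).
n, p, lower, upper = 20, 0.05, 1, 3          # hypothetical plan parameters
p_a, p_r = stage_probs(n, p, lower, upper)
accept_prob = p_a / (p_a + p_r)
expected_stages = 1.0 / (p_a + p_r)
```

A cost-optimisation as in the paper would then search over (lower, upper) using these absorption quantities inside the cost function.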

  1. Benchmarking of a Markov multizone model of contaminant transport.

    Science.gov (United States)

    Jones, Rachael M; Nicas, Mark

    2014-10-01

    A Markov chain model previously applied to the simulation of advection and diffusion process of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with grid length aspect when turbulence is incorporated by way of the turbulent diffusion coefficient, and advection is present. However, the magnitude of prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  2. Markov transitions and the propagation of chaos

    International Nuclear Information System (INIS)

    Gottlieb, A.

    1998-01-01

    The propagation of chaos is a central concept of kinetic theory that serves to relate the equations of Boltzmann and Vlasov to the dynamics of many-particle systems. Propagation of chaos means that molecular chaos, i.e., the stochastic independence of two random particles in a many-particle system, persists in time as the number of particles tends to infinity. We establish a necessary and sufficient condition for a family of general n-particle Markov processes to propagate chaos. This condition is expressed in terms of the Markov transition functions associated to the n-particle processes, and it amounts to saying that chaos of random initial states propagates if it propagates for pure initial states. Our proof of this result relies on the weak convergence approach to the study of chaos due to Sznitman and Tanaka. We assume that the space in which the particles live is homeomorphic to a complete and separable metric space so that we may invoke Prohorov's theorem in our proof. We also show that, if the particles can be in only finitely many states, then molecular chaos implies that the specific entropies in the n-particle distributions converge to the entropy of the limiting single-particle distribution

  3. Influence of credit scoring on the dynamics of Markov chain

    Science.gov (United States)

    Galina, Timofeeva

    2015-11-01

    Markov processes are widely used to model the dynamics of a credit portfolio and to forecast portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares is described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control affecting the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
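
The share dynamics described above can be sketched as repeated multiplication of the portfolio-share vector by a migration matrix. The grades, probabilities and horizon below are invented for illustration; the paper's controlled, multistage formulation is not reproduced.

```python
import numpy as np

# Hypothetical loan-quality grades: current, 30+ days overdue, defaulted.
# Rows of P give monthly migration probabilities between grades.
P = np.array([
    [0.95, 0.04, 0.01],
    [0.40, 0.45, 0.15],
    [0.00, 0.00, 1.00],    # default treated as absorbing here
])

shares = np.array([1.0, 0.0, 0.0])   # portfolio starts fully current
history = [shares]
for month in range(24):
    shares = shares @ P              # x_{k+1} = x_k P
    history.append(shares)

default_share_2y = float(shares[2])  # cumulative defaults after 24 months
```

A scoring control would enter by modifying P (or the inflow of new loans) at each stage; here the matrix is held fixed.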

  4. Markov chain solution of photon multiple scattering through turbid slabs.

    Science.gov (United States)

    Lin, Ying; Northrop, William F; Li, Xuesong

    2016-11-14

    This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with the commonly used Monte Carlo simulation for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. These characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in applications such as medical diagnosis, spray analysis, and atmospheric science.
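
A drastically simplified analogue of this matrix formulation is a two-stream absorbing Markov chain over (layer, direction) states, with transmission, reflection and absorption as absorbing outcomes; the fundamental matrix then yields exit probabilities by linear algebra rather than stochastic sampling. All parameters below are illustrative, and the actual method resolves full angular distributions, not two streams.

```python
import numpy as np

# Two-stream sketch: a slab of L thin layers; a photon state is (layer,
# direction). Per layer traversal it is absorbed with prob pa, back-
# scattered with prob pb, and otherwise keeps its direction, then moves
# one layer. Hypothetical parameters throughout.
L, pa, pb = 20, 0.02, 0.06

n = 2 * L                                  # transient (layer, dir) states
idx = lambda layer, d: 2 * layer + d       # d = 0 forward, 1 backward
Q = np.zeros((n, n))                       # transient -> transient
R = np.zeros((n, 3))                       # -> (transmitted, reflected, absorbed)

for layer in range(L):
    for d in (0, 1):
        s = idx(layer, d)
        R[s, 2] = pa
        for prob, d_new in ((1 - pa - pb, d), (pb, 1 - d)):
            nxt = layer + (1 if d_new == 0 else -1)
            if nxt == L:
                R[s, 0] += prob            # exits far side: transmitted
            elif nxt < 0:
                R[s, 1] += prob            # exits entry side: reflected
            else:
                Q[s, idx(nxt, d_new)] += prob

# Absorption probabilities B = (I - Q)^-1 R, solved as a linear system.
B = np.linalg.solve(np.eye(n) - Q, R)
T, refl, absorbed = B[idx(0, 0)]           # photon injected at layer 0, forward
```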

  5. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    The prediction of the corrosion rate of nuclear power plant pipelines was studied using a model that combines a grey model with a Markov model. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, while the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
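
The grey half of such a combined model is the classic GM(1,1); a minimal sketch is below. The Markov residual-correction step and the rolling operation are omitted, and the data are invented.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Classic GM(1,1) grey model: fit on the series x0 and forecast.
    Fits dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating series
    z1 = 0.5 * (x1[:-1] + x1[1:])             # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

# Hypothetical yearly corrosion-rate measurements (mm/yr).
rates = [0.32, 0.35, 0.37, 0.41, 0.44]
pred = gm11_fit_predict(rates, n_ahead=1)
next_rate = float(pred[-1])
```

In the combined scheme, residuals between `pred` and the data would be classified into states and corrected with a Markov chain.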

  6. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework for analyzing large trajectories from biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert with certain rates. HMMs simplify long-timescale trajectories for human comprehension and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure by building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
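
A common precursor to the HMM construction is estimating a transition matrix from a discretised trajectory and reading timescales off its eigenvalues. The sketch below does this on a synthetic two-state trajectory; it illustrates the general workflow, not the chapter's specific procedure.

```python
import numpy as np

# Synthetic two-state trajectory from a known chain (stand-in for a
# clustered MD trajectory).
rng = np.random.default_rng(1)
P_true = np.array([[0.98, 0.02],
                   [0.05, 0.95]])
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(2, p=P_true[traj[-1]]))

# Transition counts at lag tau = 1, row-normalised.
C = np.zeros((2, 2))
for a, b in zip(traj[:-1], traj[1:]):
    C[a, b] += 1
P_est = C / C.sum(axis=1, keepdims=True)

# Implied relaxation timescale (in lag-time units) from the second
# eigenvalue: t_i = -tau / ln(lambda_i).
lambdas = np.sort(np.linalg.eigvals(P_est).real)
t_implied = -1.0 / np.log(lambdas[-2])
```

A full HMM fit (e.g. Baum-Welch) would additionally model emission distributions over the observed coordinates.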

  7. Markov chain analysis of single spin flip Ising simulations

    International Nuclear Information System (INIS)

    Hennecke, M.

    1997-01-01

    The Markov processes defined by random and loop-based schemes for single spin flip attempts in Monte Carlo simulations of the 2D Ising model are investigated, by explicitly constructing their transition matrices. Their analysis reveals that loops over all lattice sites using a Metropolis-type single spin flip probability often do not define ergodic Markov chains, and have distorted dynamical properties even if they are ergodic. The transition matrices also enable a comparison of the dynamics of random versus loop spin selection and Glauber versus Metropolis probabilities
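
Explicitly constructing such a transition matrix is feasible for very small systems. The sketch below builds the matrix for random-site Metropolis single spin flips on a 2x2 open-boundary Ising lattice (an illustrative analogue of the paper's construction, with an arbitrary beta) and checks that the Boltzmann distribution is stationary and the chain is ergodic.

```python
import numpy as np
from itertools import product

beta, N = 0.4, 4                                 # illustrative parameters
bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]         # open-boundary 2x2 lattice
configs = list(product([-1, 1], repeat=N))       # all 16 spin configurations

def energy(s):
    return -sum(s[i] * s[j] for i, j in bonds)

P = np.zeros((16, 16))
for a, s in enumerate(configs):
    for site in range(N):                        # pick a site uniformly...
        t = list(s)
        t[site] *= -1
        b = configs.index(tuple(t))
        # ...and flip it with the Metropolis acceptance probability.
        acc = min(1.0, np.exp(-beta * (energy(t) - energy(s))))
        P[a, b] += acc / N
    P[a, a] += 1.0 - P[a].sum()                  # rejected moves stay put

# Detailed balance makes the Boltzmann distribution stationary, and the
# random-site chain is ergodic: some power of P is strictly positive.
boltz = np.array([np.exp(-beta * energy(s)) for s in configs])
boltz /= boltz.sum()
stationary_ok = bool(np.allclose(boltz @ P, boltz))
ergodic = bool(np.all(np.linalg.matrix_power(P, 16) > 0))
```

The loop-based (sequential-site) schemes criticised in the paper would replace the uniform site choice with a product of per-site update matrices, which is exactly what can break ergodicity.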

  8. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  9. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of t and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).

  10. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable

  11. Markov Chain Ontology Analysis (MCOA).

    Science.gov (United States)

    Frost, H Robert; McCray, Alexa T

    2012-02-03

    Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.

  12. Neuroevolution Mechanism for Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-12-01

    Full Text Available Hidden Markov Model (HMM) is a statistical model based on probabilities. HMM is becoming one of the major models involved in many applications such as natural language processing, handwriting recognition, image processing, prediction systems and many more. In this research we are concerned with finding the best HMM for a certain application domain. We propose a neuroevolution process that is based first on converting the HMM to a neural network, then generating many neural networks at random, where each represents an HMM. We proceed by applying genetic operators to obtain a new set of neural networks, each representing an HMM, and updating the population. Finally, we select the best neural network based on a fitness function.

  13. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  14. Estimation of the workload correlation in a Markov fluid queue

    NARCIS (Netherlands)

    Kaynar, B.; Mandjes, M.R.H.

    2013-01-01

    This paper considers a Markov fluid queue, focusing on the correlation function of the stationary workload process. A simulation-based computation technique is proposed, which relies on a coupling idea. Then an upper bound on the variance of the resulting estimator is given, which reveals how the

  15. Efficient Approximation of Optimal Control for Markov Games

    DEFF Research Database (Denmark)

    Fearnley, John; Rabe, Markus; Schewe, Sven

    2011-01-01

    We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately...

  16. The Candy model revisited: Markov properties and inference

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); R.S. Stoica

    2001-01-01

    This paper studies the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present some

  17. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  18. Epitope discovery with phylogenetic hidden Markov models.

    LENUS (Irish Health Repository)

    Lacerda, Miguel

    2010-05-01

    Existing methods for the prediction of immunologically active T-cell epitopes are based on the amino acid sequence or structure of pathogen proteins. Additional information regarding the locations of epitopes may be acquired by considering the evolution of viruses in hosts with different immune backgrounds. In particular, immune-dependent evolutionary patterns at sites within or near T-cell epitopes can be used to enhance epitope identification. We have developed a mutation-selection model of T-cell epitope evolution that allows the human leukocyte antigen (HLA) genotype of the host to influence the evolutionary process. This is one of the first examples of the incorporation of environmental parameters into a phylogenetic model and has many other potential applications where the selection pressures exerted on an organism can be related directly to environmental factors. We combine this novel evolutionary model with a hidden Markov model to identify contiguous amino acid positions that appear to evolve under immune pressure in the presence of specific host immune alleles and that therefore represent potential epitopes. This phylogenetic hidden Markov model provides a rigorous probabilistic framework that can be combined with sequence or structural information to improve epitope prediction. As a demonstration, we apply the model to a data set of HIV-1 protein-coding sequences and host HLA genotypes.

  19. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
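As a minimal instance of the forward (analog, unweighted) Monte Carlo treatment of a continuous-time Markov process, the sketch below estimates the unavailability of a single repairable component and checks it against the known closed-form answer. The rates and horizon are illustrative values, not from the paper:

```python
import math
import random

# Analog forward Monte Carlo for one repairable component: a 2-state
# continuous-time Markov chain with failure rate lam and repair rate mu.
def simulate_state(lam, mu, horizon, rng):
    """Return the component state (0 = up, 1 = down) at time `horizon`."""
    t, state = 0.0, 0
    while True:
        rate = lam if state == 0 else mu     # holding-time rate of current state
        t += rng.expovariate(rate)           # exponential sojourn time
        if t > horizon:
            return state
        state = 1 - state                    # jump to the other state

def unavailability(lam, mu, horizon, trials, seed=0):
    rng = random.Random(seed)
    down = sum(simulate_state(lam, mu, horizon, rng) for _ in range(trials))
    return down / trials

lam, mu, T = 0.1, 1.0, 5.0                   # illustrative rates and horizon
est = unavailability(lam, mu, T, 20000)
exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * T))
print(est, exact)
```

The adjoint/weighted schemes discussed in the paper reduce the variance of exactly this kind of estimator; the analog version above is the baseline they improve upon.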

  20. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
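The core calculation in a Markov-chain cost model of this kind is a long-run expected cost per cycle: weight each state's cost by the chain's stationary distribution. The sketch below uses hypothetical maintenance states, transition probabilities, and relative costs (none of them from the paper) to show the mechanics:

```python
import numpy as np

# Hypothetical O&S cost model: each mission cycle the vehicle occupies one
# maintenance state and moves between states as a Markov chain. All states,
# probabilities, and costs below are toy values for illustration.
P = np.array([[0.85, 0.12, 0.03],    # from: nominal turnaround
              [0.70, 0.25, 0.05],    # from: unscheduled repair
              [0.90, 0.05, 0.05]])   # from: major overhaul
cost = np.array([1.0, 4.0, 20.0])    # relative cost per cycle in each state

# Stationary distribution = left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Long-run expected cost per mission cycle.
expected_cost_per_cycle = float(pi @ cost)
print(pi, expected_cost_per_cycle)
```

Associating costs with probabilities in this way is what lets the model reflect actual rather than ideal operations: the expensive states contribute in proportion to how often the chain visits them.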

  1. Using social network analysis tools in ecology : Markov process transition models applied to the seasonal trophic network dynamics of the Chesapeake Bay

    NARCIS (Netherlands)

    Johnson, Jeffrey C.; Luczkovich, Joseph J.; Borgatti, Stephen P.; Snijders, Tom A. B.; Luczkovich, S.P.

    2009-01-01

    Ecosystem components interact in complex ways and change over time due to a variety of both internal and external influences (climate change, season cycles, human impacts). Such processes need to be modeled dynamically using appropriate statistical methods for assessing change in network structure.

  2. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    CERN Document Server

    Abler, Daniel; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy ...

  3. Stochastic processes in cell biology

    CERN Document Server

    Bressloff, Paul C

    2014-01-01

This book develops the theory of continuous and discrete stochastic processes within the context of cell biology. A wide range of biological topics is covered, including normal and anomalous diffusion in complex cellular environments, stochastic ion channels and excitable systems, stochastic calcium signaling, molecular motors, intracellular transport, signal transduction, bacterial chemotaxis, robustness in gene networks, genetic switches and oscillators, cell polarization, polymerization, cellular length control, and branching processes. The book also provides a pedagogical introduction to the theory of stochastic processes – Fokker-Planck equations, stochastic differential equations, master equations and jump Markov processes, diffusion approximations and the system size expansion, first passage time problems, stochastic hybrid systems, reaction-diffusion equations, exclusion processes, WKB methods, martingales and branching processes, stochastic calculus, and numerical methods. This text is primarily...

  4. Verification of Open Interactive Markov Chains

    OpenAIRE

    Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech

    2012-01-01

    Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far however, IMC verification did not consider compositionality properties, but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....

  5. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    2012-01-01

If $(C_n)$ is a Markov chain on a discrete state space $S$, a Markov chain $(C_n, M_n)$ on the product space $S \times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both

  6. Criterion of Semi-Markov Dependent Risk Model

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun MO; Xiang Qun YANG

    2014-01-01

A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results clarify the relations between the elements of the semi-Markov dependent risk model and are also applicable to the Markov dependent risk model.

  7. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

This self-contained text develops a Markov chain approach that makes possible the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process, in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of the resulting “micro-chain”, including microscopic transition rates, is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...
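Lumpability, the key aggregation condition used in the book, can be checked directly on a transition matrix: a partition of the micro-states is (strongly) lumpable if every state within a block has the same total probability of jumping into each other block. A small sketch with a toy matrix and partition (not taken from the book):

```python
import numpy as np

# Toy micro-level transition matrix and a candidate partition of its states.
P = np.array([[0.5, 0.2, 0.3, 0.0],
              [0.3, 0.4, 0.1, 0.2],
              [0.1, 0.1, 0.4, 0.4],
              [0.1, 0.1, 0.2, 0.6]])
partition = [[0, 1], [2, 3]]   # micro-states grouped into two macro-states

def lumped(P, partition, tol=1e-12):
    """Return the aggregated (macro) matrix if the partition is strongly
    lumpable, else None."""
    k = len(partition)
    Q = np.zeros((k, k))
    for i, block in enumerate(partition):
        for j, other in enumerate(partition):
            # total probability of jumping from each state in `block`
            # into the block `other`
            sums = P[np.ix_(block, other)].sum(axis=1)
            if np.ptp(sums) > tol:      # rows disagree: not lumpable
                return None
            Q[i, j] = sums[0]
    return Q

Q = lumped(P, partition)
print(Q)
```

When the condition holds, the macro-level process is itself a Markov chain with transition matrix `Q`; when it fails, aggregation loses the Markov property, which is where the book's information-theoretic treatment comes in.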

  8. Determination of a limit on the branching ratio of the rare process b → s γ with the L3 detector at LEP

    International Nuclear Information System (INIS)

    Dorne, I.

    1996-01-01

This work is dedicated to the determination of a limit on the branching ratio of the rare process b → sγ, from Z → bb̄ events collected at LEP with the L3 detector during collisions at √s ∼ M_Z and M_Z ± 2 GeV. The rare decay of the b quark, b → sγ, is forbidden at tree level and occurs, in the Standard Model, through a one-loop diagram (called a penguin diagram), which makes it sensitive to contributions from new particles such as charged Higgs bosons or supersymmetric particles. The theoretical branching ratio in the Standard Model is Br(b → sγ) = (2.55 ± 0.58) × 10^-4. The aim of this study was to observe, in the inclusive mode, a possible excess in the rate of the b → sγ transition compared to the expected value. The selection of b hadrons from Z hadronic decays is achieved by the use of both an algorithm based on a multidimensional analysis of the event shape and an algorithm based on the impact parameter of the tracks. The energetic photon is selected by using a π0/γ discriminator based on the transverse shape of its electromagnetic shower. The s-jet reconstruction is achieved by the use of an iterative method with a search for the minimum invariant mass. It allows the determination of the b hadron rest frame; the photon energy spectrum in this frame, which peaks near 2.5 GeV, is used in a new method of signal event simulation. No excess of events is observed in the data after the analysis of 1.5 million Z decays. The limit obtained, when the systematic errors are included, is Br(b → sγ) ≤ 9.2 × 10^-4 at 90% confidence level. This result is consistent with the Standard Model expectation. (author)

  9. Bundle Branch Block

    Science.gov (United States)

    ... known cause. Causes can include: Left bundle branch block Heart attacks (myocardial infarction) Thickened, stiffened or weakened ... myocarditis) High blood pressure (hypertension) Right bundle branch block A heart abnormality that's present at birth (congenital) — ...

  10. Neuro-Oncology Branch

    Science.gov (United States)

    ... BTTC are experts in their respective fields. Neuro-Oncology Clinical Fellowship This is a joint program with ... can increase survival rates. Learn more... The Neuro-Oncology Branch welcomes Dr. Mark Gilbert as new Branch ...

  11. Spiral branches and star formation

    International Nuclear Information System (INIS)

    Zasov, A.V.

    1974-01-01

The origin of the spiral branches of galaxies and the formation of stars in them are considered from the point of view of the theory of gravitational gas condensation, one of the comparatively young theories. Arguments are presented in favour of the stellar condensation theory. The concept of star formation from gas is no longer a speculative hypothesis; it is a theory which admits quantitative verification and explains qualitatively many observed facts. Still, our knowledge of the nature of spiral branches is very poor. It remains unclear what processes give rise to spiral branches, and why some galaxies have spirals while others have none. The shapes of spiral branches are also diverse. Cases are known in which spiral branches spread beyond the boundaries of the galaxies themselves; such spirals arise exclusively where there are two or more interacting galaxies. Only the first steps have been made in explaining galactic spiral branches, and it is necessary to carry out new observations and new theoretical calculations

  12. Quasi-Feller Markov chains

    Directory of Open Access Journals (Sweden)

    Jean B. Lasserre

    2000-01-01

Full Text Available We consider the class of Markov kernels for which the weak or strong Feller property fails to hold at some discontinuity set. We provide a simple necessary and sufficient condition for existence of an invariant probability measure as well as a Foster-Lyapunov sufficient condition. We also characterize a subclass, the quasi (weak or strong) Feller kernels, for which the sequences of expected occupation measures share the same asymptotic properties as for (weak or strong) Feller kernels. In particular, it is shown that the sequences of expected occupation measures of strong and quasi strong-Feller kernels with an invariant probability measure converge setwise to an invariant measure.

  13. Branched polynomial covering maps

    DEFF Research Database (Denmark)

    Hansen, Vagn Lundsgaard

    1999-01-01

A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere.

  14. Resonance Energy Transfer-Based Molecular Switch Designed Using a Systematic Design Process Based on Monte Carlo Methods and Markov Chains

    Science.gov (United States)

    Rallapalli, Arjun

    A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called closed-diffusive exciton valve (C-DEV) in which the input to output transfer function is controlled by an external energy source, similar to a semiconductor transistor like the MOSFET. Due to their biocompatibility, molecular devices like the C-DEVs can be used to introduce computing power in biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics in RET devices are stochastic in nature, making them suitable for stochastic computing in which true random distribution generation is critical. In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications. We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the

  15. Green functions and Langevin equations for nonlinear diffusion equations: A comment on ‘Markov processes, Hurst exponents, and nonlinear diffusion equations’ by Bassler et al.

    Science.gov (United States)

    Frank, T. D.

    2008-02-01

    We discuss two central claims made in the study by Bassler et al. [K.E. Bassler, G.H. Gunaratne, J.L. McCauley, Physica A 369 (2006) 343]. Bassler et al. claimed that Green functions and Langevin equations cannot be defined for nonlinear diffusion equations. In addition, they claimed that nonlinear diffusion equations are linear partial differential equations disguised as nonlinear ones. We review bottom-up and top-down approaches that have been used in the literature to derive Green functions for nonlinear diffusion equations and, in doing so, show that the first claim needs to be revised. We show that the second claim as well needs to be revised. To this end, we point out similarities and differences between non-autonomous linear Fokker-Planck equations and autonomous nonlinear Fokker-Planck equations. In this context, we raise the question whether Bassler et al.’s approach to financial markets is physically plausible because it necessitates the introduction of external traders and causes. Such external entities can easily be eliminated when taking self-organization principles and concepts of nonextensive thermostatistics into account and modeling financial processes by means of nonlinear Fokker-Planck equations.

  16. Learning Markov models for stationary system behaviors

    DEFF Research Database (Denmark)

    Chen, Yingke; Mao, Hua; Jaeger, Manfred

    2012-01-01

Establishing an accurate model for formal verification of an existing hardware or software system is often a manual process that is both time consuming and resource demanding. In order to ease the model construction phase, methods have recently been proposed for automatically learning accurate system models. Often, however, only a single long observation sequence is available, and in these situations existing automatic learning methods cannot be applied. In this paper, we adapt algorithms for learning variable order Markov chains from a single observation sequence of a target system, so that stationary system properties can be verified using the learned model. Experiments demonstrate that system properties (formulated as stationary probabilities of LTL formulas) can be reliably identified using the learned model.
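The basic (order-1) version of the task is easy to sketch: count transitions in one long observation sequence, row-normalize to get the estimated chain, and read a stationary probability off the empirical state frequencies. The two-state "system" below is synthetic; the paper's algorithms handle variable-order chains, which this sketch does not attempt:

```python
import random
from collections import defaultdict

# Synthetic target system: a 2-state Markov chain we pretend not to know.
true_P = {"a": [("a", 0.9), ("b", 0.1)], "b": [("a", 0.4), ("b", 0.6)]}

# Generate one long observation sequence from the target system.
rng = random.Random(1)
seq, state = [], "a"
for _ in range(100000):
    seq.append(state)
    r, acc = rng.random(), 0.0
    for nxt, p in true_P[state]:
        acc += p
        if r < acc:
            state = nxt
            break

# Learn an order-1 Markov chain: transition counts, then row-normalize.
counts = defaultdict(lambda: defaultdict(int))
for s, t in zip(seq, seq[1:]):
    counts[s][t] += 1
P_hat = {s: {t: c / sum(row.values()) for t, c in row.items()}
         for s, row in counts.items()}

# Empirical stationary probability of "a"; the exact value for the chain
# above is 0.4 / (0.1 + 0.4) = 0.8.
pi_a = seq.count("a") / len(seq)
print(P_hat, pi_a)
```

Stationary probabilities of this kind are exactly the quantities the paper verifies (as stationary probabilities of LTL formulas) against the learned model.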

  17. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
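A minimal way to see what "reversible estimation" means: symmetrize the observed transition counts and row-normalize. The result satisfies detailed balance by construction. Note this is a simple baseline, not the maximum likelihood estimator developed in the paper (which is iterative); the count matrix below is toy data:

```python
import numpy as np

# Toy transition-count matrix from a discretized trajectory.
C = np.array([[90.0, 10.0,  0.0],
              [ 8.0, 80.0, 12.0],
              [ 1.0, 11.0, 88.0]])

# Symmetrized-count estimator: guarantees detailed balance, but is only a
# baseline, not the ML estimator of the paper.
Csym = C + C.T
T = Csym / Csym.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
pi = Csym.sum(axis=1) / Csym.sum()           # its stationary distribution

# Detailed balance: pi_i * T_ij == pi_j * T_ji, i.e. the flux matrix
# is symmetric.
flux = pi[:, None] * T
print(T, pi)
```

Detailed balance holds here exactly because `pi_i * T_ij = Csym_ij / Csym.sum()`, which is symmetric in (i, j); the ML estimators in the paper achieve the same property while fitting the likelihood of the asymmetric counts.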

  18. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
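Failure biasing is an importance-sampling idea: sample rare failures with an inflated probability and correct each trial with the likelihood ratio of the true process to the biased one. A toy discrete-time sketch (the paper works with continuous-time Markov models; `p`, `q`, and `N` here are made-up values):

```python
import random

# Toy model: a component fails in each of N steps with small probability p.
# We estimate P(failure within N steps) by sampling failures with inflated
# probability q and weighting each trial by the likelihood ratio.
p, q, N = 0.001, 0.05, 10
exact = 1.0 - (1.0 - p) ** N        # closed-form answer for comparison

def biased_trial(rng):
    weight = 1.0
    for _ in range(N):
        if rng.random() < q:        # failure sampled from the biased law
            return weight * (p / q) # correct with the likelihood ratio
        weight *= (1.0 - p) / (1.0 - q)   # survival likelihood ratio
    return 0.0                      # survived all N steps: indicator is 0

rng = random.Random(42)
trials = 20000
est = sum(biased_trial(rng) for _ in range(trials)) / trials
print(est, exact)
```

With analog sampling at `p = 0.001`, most trials would contribute nothing; biasing makes failing paths common while the weights keep the estimator unbiased, which is the source of the large efficiency gains reported in the abstract.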

  19. Markov state models of protein misfolding

    Science.gov (United States)

    Sirur, Anshul; De Sancho, David; Best, Robert B.

    2016-02-01

    Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity.
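The "sink state" idea from the abstract can be shown on a tiny transition matrix: remove the outgoing transitions of the misfolded state so it becomes absorbing, then propagate the population and watch it accumulate irreversibly in the sink. The three states and all numbers below are toy values, not from the titin study:

```python
import numpy as np

# Toy 3-state MSM: folded, intermediate, misfolded (illustrative numbers).
T = np.array([[0.90, 0.09, 0.01],
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])
sink = 2                              # treat "misfolded" as the sink

# Make the sink absorbing: no outgoing probability, self-loop of 1.
T_sink = T.copy()
T_sink[sink] = 0.0
T_sink[sink, sink] = 1.0

# Propagate an initially fully folded population and record the sink mass.
pop = np.array([1.0, 0.0, 0.0])
history = []
for _ in range(500):
    pop = pop @ T_sink
    history.append(pop[sink])

print(history[0], history[-1])
```

The sink population is nondecreasing and tends to 1, capturing irreversibility on the simulation time scale; in the original (non-absorbing) matrix `T` the misfolded population would instead settle at a stationary value.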

  20. Numerical research of the optimal control problem in the semi-Markov inventory model

    Energy Technology Data Exchange (ETDEWEB)

    Gorshenin, Andrey K. [Institute of Informatics Problems, Russian Academy of Sciences, Vavilova str., 44/2, Moscow, Russia MIREA, Faculty of Information Technology (Russian Federation); Belousov, Vasily V. [Institute of Informatics Problems, Russian Academy of Sciences, Vavilova str., 44/2, Moscow (Russian Federation); Shnourkoff, Peter V.; Ivanov, Alexey V. [National research university Higher school of economics, Moscow (Russian Federation)

    2015-03-10

This paper is devoted to the numerical simulation of a stochastic inventory management system using a controlled semi-Markov process. The results of special software for studying the system and finding the optimal control are presented.

  1. Numerical research of the optimal control problem in the semi-Markov inventory model

    International Nuclear Information System (INIS)

    Gorshenin, Andrey K.; Belousov, Vasily V.; Shnourkoff, Peter V.; Ivanov, Alexey V.

    2015-01-01

This paper is devoted to the numerical simulation of a stochastic inventory management system using a controlled semi-Markov process. The results of special software for studying the system and finding the optimal control are presented

  2. Probabilistic Reachability for Parametric Markov Models

    DEFF Research Database (Denmark)

    Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun

    2011-01-01

    Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression...

  3. Markov-modulated and feedback fluid queues

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.

    1998-01-01

    In the last twenty years the field of Markov-modulated fluid queues has received considerable attention. In these models a fluid reservoir receives and/or releases fluid at rates which depend on the actual state of a background Markov chain. In the first chapter of this thesis we give a short

  4. Branched polynomial covering maps

    DEFF Research Database (Denmark)

    Hansen, Vagn Lundsgaard

    2002-01-01

A Weierstrass polynomial with multiple roots in certain points leads to a branched covering map. With this as the guiding example, we formally define and study the notion of a branched polynomial covering map. We shall prove that many finite covering maps are polynomial outside a discrete branch set. Particular studies are made of branched polynomial covering maps arising from Riemann surfaces and from knots in the 3-sphere. (C) 2001 Elsevier Science B.V. All rights reserved.

  5. Classification Using Markov Blanket for Feature Selection

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Luo, Jian

    2009-01-01

Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method of the Markov blanket induction algorithm for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using Markov blanket induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance.
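In a Bayesian network, the Markov blanket of a node is its parents, its children, and its children's other parents; conditioned on these, the node is independent of everything else, which is why the blanket is a sufficient feature set for predicting that node. A small sketch on a hypothetical DAG (not a network from the paper):

```python
# Markov blanket of a node in a Bayesian network (given its DAG):
# parents + children + children's other parents ("spouses").
# The edge list below is a toy DAG for illustration.
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("E", "D"), ("C", "F")]

def markov_blanket(node, edges):
    parents = {u for u, v in edges if v == node}
    children = {v for u, v in edges if u == node}
    spouses = {u for u, v in edges if v in children and u != node}
    return parents | children | spouses

print(markov_blanket("C", edges))
```

Induction algorithms such as the one the paper studies recover this set from data via conditional-independence tests rather than from a known DAG; the structural definition above is what they are estimating.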

  6. Long time behavior of Markov processes

    Directory of Open Access Journals (Sweden)

    Cattiaux Patrick

    2014-01-01

Full Text Available These notes correspond to a three-hour lecture given during the workshop “Metastability and Stochastic Processes” held in Marne-la-Vallée on September 21st-23rd, 2011. I would like to warmly thank the organizers Tony Lelièvre and Arnaud Guillin for a very nice organization and for obliging me first to give the lecture, and second to write these notes. I also want to acknowledge all the people who attended the lecture.

  7. Markov Decision Processes Discrete Stochastic Dynamic Programming

    CERN Document Server

    Puterman, Martin L

    2005-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet

  8. A Markov Decision Process * EZUGWU, VO

    African Journals Online (AJOL)

    ADOWIE PERE

    strategy of traders who, in the presence of cost transaction, invest on this risky ... designing autonomous intelligent agent for forest fire fighting. ... minimizing energy consumption and maximizing sensing .... The actions allow us to modify the ...

  9. Bayesian estimation for quantification by real-time polymerase chain reaction under a branching process model of the DNA molecules amplification process

    NARCIS (Netherlands)

    Lalam, N.; Jacob, C.

    2007-01-01

The aim of Quantitative Polymerase Chain Reaction is to determine the initial amount X0 of specific nucleic acids from an observed trajectory of the amplification process, the amplification being achieved through successive replication cycles. This process depends on the efficiency (p_n)_n of

  10. Schmidt games and Markov partitions

    International Nuclear Information System (INIS)

    Tseng, Jimmy

    2009-01-01

Let T be a C^2-expanding self-map of a compact, connected, C^∞, Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x_0 in M, the set of points whose forward orbit closures miss x_0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions

  11. MARKOV CHAIN MODELING OF PERFORMANCE DEGRADATION OF PHOTOVOLTAIC SYSTEM

    OpenAIRE

    E. Suresh Kumar; Asis Sarkar; Dhiren kumar Behera

    2012-01-01

Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, in a sequence of chance experiments, all of the past outcomes could influence the predictions for the next experiment. In a Markov chain, by contrast, only the outcome of the current experiment can affect the outcome of the next experiment. The system state changes with time, and the state X and time t are two random variables. Each of these variab...

  12. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  13. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec

  14. Geometrical scaling, furry branching and minijets

    International Nuclear Information System (INIS)

    Hwa, R.C.

    1988-01-01

    Scaling properties and their violations in hadronic collisions are discussed in the framework of the geometrical branching model. Geometrical scaling supplemented by Furry branching characterizes the soft component, while the production of jets specifies the hard component. Many features of multiparticle production processes are well described by this model. 21 refs

  15. Extracting Markov Models of Peptide Conformational Dynamics from Simulation Data.

    Science.gov (United States)

    Schultheis, Verena; Hirschberger, Thomas; Carstens, Heiko; Tavan, Paul

    2005-07-01

    A high-dimensional time series obtained by simulating a complex and stochastic dynamical system (like a peptide in solution) may code an underlying multiple-state Markov process. We present a computational approach to most plausibly identify and reconstruct this process from the simulated trajectory. Using a mixture of normal distributions we first construct a maximum likelihood estimate of the point density associated with this time series and thus obtain a density-oriented partition of the data space. This discretization allows us to estimate the transfer operator as a matrix of moderate dimension at sufficient statistics. A nonlinear dynamics involving that matrix and, alternatively, a deterministic coarse-graining procedure are employed to construct respective hierarchies of Markov models, from which the model most plausibly mapping the generating stochastic process is selected by consideration of certain observables. Within both procedures the data are classified in terms of prototypical points, the conformations, marking the various Markov states. As a typical example, the approach is applied to analyze the conformational dynamics of a tripeptide in solution. The corresponding high-dimensional time series has been obtained from an extended molecular dynamics simulation.

  16. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space ${\\cal S}$, a Markov chain $(C_n,M_n)$ on the product space ${\\cal S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain

  17. Cash efficiency for bank branches.

    Science.gov (United States)

    Cabello, Julia García

    2013-01-01

    Bank liquidity management has become a major issue during the financial crisis, as liquidity shortages have intensified and have put pressure on banks to diversify and improve their liquidity sources. While a significant strand of the literature concentrates on wholesale liquidity generation and on alternatives to deposit funding, the management of an inventory of cash holdings within a bank's branches is also a relevant issue, as any significant improvement in cash management at the bank's distribution channels may help reduce liquidity tensions. In this paper, we propose a simple cash-efficiency programme for bank branches, very easy to implement, which takes the form of a set of instructions imposed by the bank on its branches. This model significantly reduces cash holdings at branches, thereby improving efficiency in liquidity management. The methodology we propose is based on the definition of stochastic processes combined with renewal processes, which capture the random elements of the cash flow, before applying suitable optimization programmes to all the costs involved in cash movements. The classical issue of the transaction demand for cash and some aspects of inventory theory are also present. Mathematics Subject Classification (2000): C02, C60, E50.

  18. Quantitative analysis of the side-branch orifice after bifurcation stenting using en-face processing of OCT images: a comparison between Xience V and Resolute Integrity stents.

    Science.gov (United States)

    Minami, Yoshiyasu; Wang, Zhao; Aguirre, Aaron D; Lee, Stephen; Uemura, Shiro; Soeda, Tsunenari; Vergallo, Rocco; Raffel, Owen C; Barlis, Peter; Itoh, Tomonori; Lee, Hang; Fujimoto, James; Jang, Ik-Kyung

    2016-01-01

    Methods for intravascular assessment of the side-branch (SB) orifice after stenting are not readily available. The aim of this study was to assess the utility of en-face projection processing of optical coherence tomography (OCT) images for SB evaluation. Measurements of the SB orifice obtained using en-face OCT images were validated using a phantom model. Linear regression modeling was applied to estimated area measurements made on the en-face images. The SB orifice was then analyzed in 88 patients with bifurcation lesions treated with either Xience V (everolimus-eluting stent) or Resolute Integrity [zotarolimus-eluting stent (ZES)]. The SB orifice area (A) and the area obstructed by struts (B) were calculated, and the %open area was evaluated as (A-B)/A*100. Linear regression modeling demonstrated that the observed departures of the intercept and slope were not significantly different from 0 (-0.12 ± 0.22, P=0.59) and 1 (1.01 ± 0.06, R(2)=0.88, P=0.87), respectively. In cases without SB dilatation, the %open area was significantly larger in the everolimus-eluting stent group (n=25) than in the ZES group [n=32; 89.2% (83.7-91.3) vs. 84.3% (78.9-87.8), P=0.04]. A significant difference in %open area between cases with and those without SB dilatation was demonstrated in the ZES group [91.4% (86.1-94.0) vs. 84.3% (78.9-87.8), P=0.04]. The accuracy of SB orifice measurement on an en-face OCT image was validated using a phantom model. This novel approach enables quantitative evaluation of the differences in SB orifice area free from struts among different stent types and different treatment strategies in vivo.

  19. APPLICATION OF HIDDEN MARKOV CHAINS IN QUALITY CONTROL

    Directory of Open Access Journals (Sweden)

    Hanife DEMIRALP

    2013-01-01

    Full Text Available The ever-growing technological innovation and sophistication of industrial processes require adequate checks on quality. Thus, there is an increasing demand for simple and efficient quality control methods. In this regard, control charts stand out for their simplicity and efficiency. In this paper, we propose a method of controlling quality based on the theory of hidden Markov chains. Based on samples drawn at different times from the production process, the method obtains the state of the process probabilistically. The main advantage of the method is that it requires no assumption of normality of the process output.
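    The probabilistic state inference described above can be illustrated with a standard HMM forward filter; the two-state "in control"/"out of control" model and all of its probabilities below are hypothetical, not taken from the paper:

```python
import numpy as np

def filtered_state_probs(obs, pi, A, B):
    """HMM forward filter: probability of each hidden process state
    given the sample observations so far (normalized at each step)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# hypothetical two-state process: 0 = "in control", 1 = "out of control"
pi = np.array([0.95, 0.05])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])   # state persistence
B  = np.array([[0.8, 0.2], [0.3, 0.7]])   # P(defective sample | state)
probs = filtered_state_probs([1, 1, 1], pi, A, B)  # three defective samples
```

    After a run of defective samples, the filtered probability of the "out of control" state rises sharply, which is the probabilistic analogue of a control-chart alarm.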

  20. Entanglement branching operator

    Science.gov (United States)

    Harada, Kenji

    2018-01-01

    We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, a promising theoretical tool for many-body systems. The entanglement branching operator can be optimized by solving a minimization problem based on squeezing operators. Entanglement branching is a useful new operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method to capture a proper renormalization flow in tensor network space. This method yields a new type of tensor network state. The second example is a many-body decomposition of a tensor by using an entanglement branching operator, which can be used for perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.

  1. Recursive utility in a Markov environment with stochastic growth.

    Science.gov (United States)

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.
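    The Perron-Frobenius eigenvalue equation invoked above can be solved numerically by power iteration; the matrix below is a toy positive operator chosen for illustration, not one derived from any consumption-growth model:

```python
import numpy as np

def perron_frobenius(M, iters=500):
    """Dominant eigenvalue and positive eigenvector of a positive matrix
    by power iteration (Perron-Frobenius guarantees both exist)."""
    v = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam = np.linalg.norm(w)   # for the converged positive v, ||Mv|| = lambda
        v = w / lam
    return lam, v

M = np.array([[0.9, 0.3],
              [0.1, 0.7]])
lam, v = perron_frobenius(M)
```

    Existence and uniqueness of the dominant pair is what the paper exploits to pin down the recursive-utility solution.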

  2. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  3. A Bayesian model for binary Markov chains

    Directory of Open Access Journals (Sweden)

    Belkheir Essebbar

    2004-02-01

    Full Text Available This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on the Jeffreys prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
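    For a binary chain, Bayesian estimation with a Beta prior is conjugate, so the posterior is available in closed form. A minimal sketch with independent Jeffreys Beta(1/2, 1/2) priors on each row (the paper's prior additionally correlates the rows, which is not reproduced here):

```python
import numpy as np

def bayes_transition_probs(chain):
    """Posterior-mean transition probabilities for a binary Markov chain
    under independent Jeffreys Beta(1/2, 1/2) priors on each row."""
    counts = np.zeros((2, 2))
    for i, j in zip(chain[:-1], chain[1:]):
        counts[i, j] += 1
    # posterior for row i is Beta(1/2 + n_i1, 1/2 + n_i0); take its mean
    return (counts + 0.5) / (counts.sum(axis=1, keepdims=True) + 1.0)

chain = [0, 1, 1, 0, 1, 0, 0, 1]
P = bayes_transition_probs(chain)
```

    With correlated rows, as in the paper, the posterior is no longer conjugate, which is why MCMC is needed there.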

  4. Subharmonic projections for a quantum Markov semigroup

    International Nuclear Information System (INIS)

    Fagnola, Franco; Rebolledo, Rolando

    2002-01-01

    This article introduces a concept of subharmonic projections for a quantum Markov semigroup, in view of characterizing the support projection of a stationary state in terms of the semigroup generator. These results, together with those of our previous article [J. Math. Phys. 42, 1296 (2001)], lead to a method for proving the existence of faithful stationary states. This is often crucial in the analysis of ergodic properties of quantum Markov semigroups. The method is illustrated by applications to physical models

  5. Transition Effect Matrices and Quantum Markov Chains

    Science.gov (United States)

    Gudder, Stan

    2009-06-01

    A transition effect matrix (TEM) is a quantum generalization of a classical stochastic matrix. By employing a TEM we obtain a quantum generalization of a classical Markov chain. We first discuss state and operator dynamics for a quantum Markov chain. We then consider various types of TEMs and vector states. In particular, we study invariant, equilibrium and singular vector states and investigate projective, bistochastic, invertible and unitary TEMs.

  6. Robust Dynamics and Control of a Partially Observed Markov Chain

    International Nuclear Information System (INIS)

    Elliott, R. J.; Malcolm, W. P.; Moore, J. P.

    2007-01-01

    In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits, see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models: Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established.

  7. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated using synthetic data. Our results show that in the scenario of non-saline soil, the layer thicknesses are not as well estimated as the layers' electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
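    One simple flavour of adaptive Metropolis (a sketch only, not the specific sampler used in this study) tunes the random-walk proposal scale from the accept/reject history:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n=5000, seed=0):
    """Random-walk Metropolis whose proposal scale adapts according to
    acceptances and rejections (one simple flavour of adaptive MCMC)."""
    rng = np.random.default_rng(seed)
    x, scale = float(x0), 1.0
    samples = []
    for _ in range(n):
        prop = x + scale * rng.normal()
        if np.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
            scale *= 1.01   # accepted: widen the proposal slightly
        else:
            scale *= 0.99   # rejected: narrow it
        samples.append(x)
    return np.array(samples)

# toy posterior: a standard normal log-density
draws = adaptive_metropolis(lambda t: -0.5 * t * t, x0=5.0)
```

    In a real inversion, log_post would evaluate the misfit between the forward-modelled and measured apparent conductivities, which is where virtually all of the computational cost lies.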

  8. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Full Text Available Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.

  9. Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics

    International Nuclear Information System (INIS)

    Pate, E.B.

    1986-01-01

    This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and the quantization which is applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst-case Markov-chain process, upon which a robust filter/predictor design is based.

  10. Critical role of alkyl chain branching of organic semiconductors in enabling solution-processed N-channel organic thin-film transistors with mobility of up to 3.50 cm² V(-1) s(-1).

    Science.gov (United States)

    Zhang, Fengjiao; Hu, Yunbin; Schuettfort, Torben; Di, Chong-an; Gao, Xike; McNeill, Christopher R; Thomsen, Lars; Mannsfeld, Stefan C B; Yuan, Wei; Sirringhaus, Henning; Zhu, Daoben

    2013-02-13

    Substituted side chains are fundamental units in solution-processable organic semiconductors in order to achieve a balance of close intermolecular stacking, high crystallinity, and good compatibility with different wet techniques. Based on four air-stable solution-processed naphthalene diimides fused with 2-(1,3-dithiol-2-ylidene)malononitrile groups (NDI-DTYM2) that bear branched alkyl chains with varied side-chain length and different branching position, we have carried out systematic studies on the relationship between film microstructure and charge transport in their organic thin-film transistors (OTFTs). In particular, synchrotron measurements (grazing incidence X-ray diffraction and near-edge X-ray absorption fine structure) are combined with device optimization studies to probe the interplay between molecular structure, molecular packing, and OTFT mobility. It is found that the side-chain length has a moderate influence on thin-film microstructure but leads to only limited changes in OTFT performance. In contrast, the position of the branching point results in subtle, yet critical changes in molecular packing and leads to dramatic differences in electron mobility ranging from ~0.001 to >3.0 cm(2) V(-1) s(-1). Incorporating a NDI-DTYM2 core with three-branched N-alkyl substituents of C(11,6) results in a dense in-plane molecular packing with a unit cell area of 127 Å(2), larger domain sizes of up to 1000 × 3000 nm(2), and an electron mobility of up to 3.50 cm(2) V(-1) s(-1), which is an unprecedented value for ambient-stable n-channel solution-processed OTFTs reported to date. These results demonstrate that variation of the alkyl chain branching point is a powerful strategy for tuning molecular packing to enable high charge transport mobilities.

  11. Turing mechanism underlying a branching model for lung morphogenesis.

    Science.gov (United States)

    Xu, Hui; Sun, Mingzhu; Zhao, Xin

    2017-01-01

    The mammalian lung develops through branching morphogenesis. Two primary forms of branching, which occur in order, have been identified in the lung: tip bifurcation and side branching. However, the mechanisms of lung branching morphogenesis remain to be explored. In our previous study, a biological mechanism was presented for lung branching pattern formation through a branching model. Here, we provide a mathematical mechanism underlying the branching patterns. By decoupling the branching model, we demonstrated the existence of Turing instability. We performed Turing instability analysis to reveal the mathematical mechanism of the branching patterns. Our simulation results show that the Turing patterns underlying the branching patterns are spot patterns that exhibit high local morphogen concentration, which induces the growth of branching. Furthermore, we found that sparse spot patterns underlie the tip bifurcation patterns, while dense spot patterns underlie the side branching patterns. The dispersion relation analysis shows that the Turing wavelength affects the branching structure. As the wavelength decreases, the spot patterns change from sparse to dense, the rate of tip bifurcation decreases, and side branching eventually occurs instead. During this transformation, there may exist hybrid branching that mixes tip bifurcation and side branching. Since experimental studies have reported that switching of the branching mode from side branching to tip bifurcation in the lung is under genetic control, our simulation results suggest that genes control the switch of the branching mode by regulating the Turing wavelength. Our results provide novel insight into the formation of branching patterns in the lung and other biological systems.
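    The dispersion relation analysis mentioned above can be sketched for a generic two-morphogen activator-inhibitor system; the kinetic Jacobian and diffusivities below are illustrative numbers, not the lung model's actual parameters:

```python
import numpy as np

# Jacobian of generic activator-inhibitor kinetics at the steady state
# (illustrative values; stable without diffusion: trace < 0, det > 0)
J = np.array([[1.0, -1.0],
              [3.0, -2.0]])
Du, Dv = 0.05, 1.0   # the inhibitor must diffuse much faster than the activator

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 * diag(Du, Dv):
    the dispersion relation of the linearized reaction-diffusion system."""
    A = J - k**2 * np.diag([Du, Dv])
    return np.linalg.eigvals(A).real.max()

ks = np.linspace(0.0, 5.0, 501)
sigma = np.array([growth_rate(k) for k in ks])
k_star = ks[sigma.argmax()]   # fastest-growing wavenumber sets the pattern wavelength
```

    The band of wavenumbers with positive growth rate around k_star selects the Turing wavelength; shrinking that wavelength corresponds to the transition from sparse to dense spot patterns described in the abstract.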

  12. Renal Branch Artery Stenosis

    DEFF Research Database (Denmark)

    Andersson, Zarah; Thisted, Ebbe; Andersen, Ulrik Bjørn

    2017-01-01

    Renovascular hypertension is a common cause of pediatric hypertension. In the fraction of cases that are unrelated to syndromes such as neurofibromatosis, patients with a solitary stenosis on a branch of the renal artery are common and can be diagnostically challenging. Imaging techniques that perform well in the diagnosis of main renal artery stenosis may fall short when it comes to branch artery stenosis. We report 2 cases that illustrate these difficulties and show that a branch artery stenosis may be overlooked even by the gold standard method, renal angiography.

  13. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    Science.gov (United States)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and it shows clear advantages for data sequences that are non-stationary and volatile. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values lying in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
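    A minimal sketch of the classical GM(1,1) grey model with the plain mean-generated background value z(k) = (x1(k-1) + x1(k))/2; the paper's optimized background values and Markov state correction are not reproduced here:

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Classical GM(1,1) grey forecasting with plain mean-generated
    background values; returns `steps` out-of-sample forecasts."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                    # accumulated generating sequence
    z = 0.5 * (x1[:-1] + x1[1:])         # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=x1_hat[0])[len(x):]   # restored forecasts

# a nearly exponential series is recovered very closely
series = 100 * 1.05 ** np.arange(6)
pred = gm11_forecast(series, steps=1)
```

    GM(1,1) fits exponential trends; the Markov-chain layer of a Grey-Markov model then corrects the residual fluctuations state by state.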

  14. Finding metastabilities in reversible Markov chains based on incomplete sampling

    Directory of Open Access Journals (Sweden)

    Fackeldey Konstantin

    2017-01-01

    Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains, one needs to provide the corresponding transition matrix P. In many applications, such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach is well suited to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it differs from matrix completion approaches, which try to approximate the transition matrix by using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.
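    In the fully sampled case, the first step above reduces to a left-eigenvector computation; this sketch assumes the complete matrix P is available, whereas the paper works from partially sampled rows:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a row-stochastic matrix P: the left
    eigenvector for eigenvalue 1, normalized to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    pi = vecs[:, np.argmin(np.abs(vals - 1.0))].real
    return pi / pi.sum()

# reversible three-state chain with two weakly coupled metastable ends
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
pi = stationary_distribution(P)   # doubly stochastic here, so pi is uniform
```

    The leading nontrivial eigenvectors of such a matrix are exactly what PCCA+ uses to assign states to metastable clusters.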

  15. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden

  16. Hidden Markov models: the best models for forager movements?

    Science.gov (United States)

    Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan

    2013-01-01

    One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models which state the inference of behavioural modes as a classification issue, and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that, with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  17. Hidden Markov models: the best models for forager movements?

    Directory of Open Access Journals (Sweden)

    Rocio Joo

    Full Text Available One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models which state the inference of behavioural modes as a classification issue, and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that, with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  18. Hidden Markov models in automatic speech recognition

    Science.gov (United States)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition (ASR) system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals, and provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The specific components of the system and the procedures used to model and recognize speech are described, together with problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on system performance. The author presents different options for the choice of speech signal segments and their consequences for the ASR process, and gives special attention to the use of lexical, syntactic, and semantic information for improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS, discusses the results of experiments on the effect of noise on its performance, and describes methods of constructing HMMs designed to operate in a noisy environment. Finally, the author describes a language for human-robot communication, defined as a complex multilevel network built from an HMM model of speech sounds geared towards Polish inflections, with mandatory lexical and syntactic rules added to the system for its communications vocabulary.
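    The recognition step of an HMM-based recognizer reduces to decoding the most likely hidden-state path, typically with the Viterbi algorithm; the two-state model below is a toy illustration, not a speech model:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path of an HMM (log-space Viterbi),
    the decoding step at the heart of HMM-based speech recognizers."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # N x N: previous state -> next state
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack along stored pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy two-state model with near-deterministic emissions
pi = np.array([0.6, 0.4])
A  = np.array([[0.8, 0.2], [0.2, 0.8]])
B  = np.array([[0.9, 0.1], [0.1, 0.9]])
print(viterbi([0, 0, 1, 1], pi, A, B))   # -> [0, 0, 1, 1]
```

    In ASR, the observations are acoustic feature vectors and the states are sub-phone units, but the decoding recursion is the same.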

  19. Tornadoes and related damage costs: statistical modelling with a semi-Markov approach

    Directory of Open Access Journals (Sweden)

    Guglielmo D’Amico

    2016-09-01

    Full Text Available We propose a statistical approach for predicting and simulating tornado occurrences and accumulated cost distributions over a time interval. This is achieved by modelling the tornado intensity, measured on the Fujita scale, as a stochastic process. Since the Fujita scale divides tornado intensity into six states, it is possible to model the tornado intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reproduce the duration effect that is detected in tornado occurrence. The superiority of the semi-Markov model over the Markov chain model is also affirmed by means of a statistical hypothesis test. As an application, we compute the expected value and the variance of the costs generated by tornadoes over a given time interval in a given area. The paper contributes to the literature by demonstrating that semi-Markov models represent an effective tool for the physical analysis of tornadoes as well as for the estimation of the economic damage they cause.
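    The accumulated cost over a time interval can be approximated by Monte Carlo simulation of a semi-Markov process; all transition probabilities, sojourn distributions and per-event costs below are hypothetical placeholders, not estimates from tornado data:

```python
import random

# Hypothetical embedded transition kernel over three grouped intensity
# classes, with state-dependent sojourn times; numbers are illustrative only.
TRANS = {"weak":    [("weak", 0.70), ("strong", 0.25), ("violent", 0.05)],
         "strong":  [("weak", 0.60), ("strong", 0.30), ("violent", 0.10)],
         "violent": [("weak", 0.80), ("strong", 0.15), ("violent", 0.05)]}
COST  = {"weak": 1.0, "strong": 10.0, "violent": 100.0}

def simulate_cost(horizon, seed=0):
    """Accumulated cost of a semi-Markov process up to a time horizon:
    unlike a Markov chain, sojourn times depend on the current state
    and need not be geometric/exponential."""
    rng = random.Random(seed)
    t, state, total = 0.0, "weak", 0.0
    while t < horizon:
        # state-dependent sojourn time (uniform here, for illustration)
        t += rng.uniform(0.5, 1.5) if state == "weak" else rng.uniform(0.1, 0.5)
        total += COST[state]
        states, probs = zip(*TRANS[state])
        state = rng.choices(states, weights=probs)[0]
    return total

print(simulate_cost(100.0))
```

    Averaging over many seeds gives Monte Carlo estimates of the expected cost and its variance, the quantities computed analytically in the paper.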

  20. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    Science.gov (United States)

    Abler, Daniel; Kanellopoulos, Vassiliki; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. PMID:23824126

  1. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    International Nuclear Information System (INIS)

    Abler, Daniel; Kanellopoulos, Vassiliki; Dosanjh, Manjit; Davies, Jim; Peach, Ken; Jena, Raj; Kirkby, Norman

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of 'general Markov models', providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. (author)

  2. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.

  3. Branching trajectory continual integral

    International Nuclear Information System (INIS)

    Maslov, V.P.; Chebotarev, A.M.

    1980-01-01

A heuristic definition of the Feynman continual integral over branching trajectories is suggested, which makes it possible to obtain in closed form the solution of the Cauchy problem for the model Hartree equation. A number of properties of the solution are derived from an integral representation. In particular, the quasiclassical asymptotics, the exact solution in the Gaussian case, and the perturbation theory series are described. The existence theorem for the simplest continual integral over branching trajectories is proved. [ru]

  4. Branches of the landscape

    International Nuclear Information System (INIS)

    Dine, Michael; O'Neil, Deva; Sun Zheng

    2005-01-01

With respect to the question of supersymmetry breaking, there are three branches of the flux landscape. On one of these, if one requires a small cosmological constant, supersymmetry breaking is predominantly at the fundamental scale; on another, the distribution is roughly flat on a logarithmic scale; on the third, the preponderance of vacua are at very low scale. A priori, as we will explain, one can say little about the first branch. The vast majority of these states are not accessible even to crude, approximate analysis. On the other two branches one can hope to do better. But as a result of the lack of access to branch one, and our poor understanding of cosmology, we can at best conjecture about whether string theory predicts low energy supersymmetry or not. If we hypothesize that we are on branch two or three, distinctive predictions may be possible. We comment on the status of naturalness within the landscape, deriving, for example, the statistics of the first branch from simple effective field theory reasoning.

  5. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
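The central claim — an irreversible linear chain of n equal-rate states has a completion time with squared coefficient of variation 1/n, so reliability grows with the number of states — can be checked numerically. This is a toy verification of that one fact, not the paper's topology-optimization procedure:

```python
import random

# Toy check: the time for an irreversible linear chain of n equal-rate states
# to step through all states is Erlang(n), whose squared coefficient of
# variation (variance / mean^2) is 1/n.
def cv2_linear_chain(n, samples, rng):
    # Per-state rate n keeps the mean completion time at 1 for every n.
    times = [sum(rng.expovariate(n) for _ in range(n)) for _ in range(samples)]
    m = sum(times) / samples
    v = sum((t - m) ** 2 for t in times) / (samples - 1)
    return v / m ** 2

rng = random.Random(0)
results = {n: cv2_linear_chain(n, 5000, rng) for n in (1, 4, 16)}
for n, est in results.items():
    print(f"n={n:2d}: simulated CV^2 = {est:.3f}, theory 1/n = {1 / n:.3f}")
```

The monotone drop of CV² with n is exactly the "more states, more reliability" trade-off that the paper's energy constraints then cap.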

  6. Zipf exponent of trajectory distribution in the hidden Markov model

    Science.gov (United States)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

This paper is the first step of generalization of the previously obtained full classification of the asymptotic behavior of the probability for Markov chain trajectories for the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
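The object under study — the rank-ordered list of trajectory probabilities — is easy to build explicitly for a small chain. The two-state chain below is an arbitrary illustration, not an example from the paper:

```python
from itertools import product

# Rank-ordered probabilities of all length-L trajectories of a two-state
# Markov chain: the sorted list is the "frequency list" whose power-law
# (Zipf) or non-power decay the paper classifies. Chain values are arbitrary.
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
init = {0: 0.5, 1: 0.5}
L = 10

probs = []
for traj in product((0, 1), repeat=L):
    p = init[traj[0]]
    for a, b in zip(traj, traj[1:]):   # multiply transition probabilities
        p *= P[a][b]
    probs.append(p)
probs.sort(reverse=True)

print(f"{len(probs)} trajectories, top probability {probs[0]:.4f}")
```

Plotting `probs` against rank on log-log axes is then the standard way to read off a Zipf exponent; for a hidden Markov model one would sum these probabilities over the hidden trajectories producing each observed sequence, which is where the asymptotics can differ.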

  7. Zipf exponent of trajectory distribution in the hidden Markov model

    International Nuclear Information System (INIS)

    Bochkarev, V V; Lerner, E Yu

    2014-01-01

This paper is the first step of generalization of the previously obtained full classification of the asymptotic behavior of the probability for Markov chain trajectories for the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.

  8. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce the basic ideas of Markov chain modeling: the Markov property, discrete-time Markov chains (DTMC), and continuous-time Markov chains (CTMC). We also discuss how to find the steady-state distributions from these Markov chains and how they can be used to compute system performance metrics. The solution methodologies include a balance equation technique and limiting probabilities.
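The steady-state computation mentioned above amounts to solving the balance equations π = πP together with Σπᵢ = 1; iterating π ← πP (the power method) is the simplest way to do it. The 3-state chain below is an arbitrary illustration, not an example from the book:

```python
# Power-method sketch for the steady-state distribution of a discrete-time
# Markov chain: iterate pi <- pi * P until the balance equations pi = pi * P
# hold. The 3-state transition matrix is an arbitrary illustration.
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]
n = len(P)

pi = [1.0 / n] * n          # start from the uniform distribution
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Residual of the balance equations: should be ~0 at convergence.
residual = max(abs(pi[j] - sum(pi[i] * P[i][j] for i in range(n)))
               for j in range(n))
print("steady state:", [round(x, 4) for x in pi], "residual:", residual)
```

For this matrix the exact answer is π = (5/21, 3/7, 1/3), which the iteration reproduces; performance metrics (utilization, mean queue length, etc.) are then weighted sums over π.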

  9. Branches of the Facial Artery.

    Science.gov (United States)

    Hwang, Kun; Lee, Geun In; Park, Hye Jin

    2015-06-01

The aim of this study is to review the names of the branches, to review the classification of the branching patterns, and to clarify the presence percentage of each branch of the facial artery, systematically. In a PubMed search, the search terms "facial," AND "artery," AND "classification OR variant OR pattern" were used. The IBM SPSS Statistics 20 system was used for statistical analysis. Among the 500 titles, 18 articles were selected and reviewed systematically. Most of the articles focused on "classification" according to the "terminal branch." Several authors classified the facial artery according to its terminal branches. Most of them, however, did not describe the definition of "terminal branch." There were confusions within the classifications. When the inferior labial artery was absent, 3 different types were used. The "alar branch" or "nasal branch" was used instead of the "lateral nasal branch." The angular branch was used to refer to several different branches. The presence percentage of each branch, according to the branches in Gray's Anatomy (premasseteric, inferior labial, superior labial, lateral nasal, and angular), varied. No branch was used with 100% consistency. The superior labial branch was most frequently cited (95.7%, 382 arteries in 399 hemifaces). The angular branch (53.9%, 219 arteries in 406 hemifaces) and the premasseteric branch were least frequently cited (53.8%, 43 arteries in 80 hemifaces). There were significant differences among each of the 5 branches (P < 0.05) except between the angular branch and the premasseteric branch and between the superior labial branch and the inferior labial branch. The authors believe identifying the presence percentage of each branch will be helpful for surgical procedures.

  10. BDC 500 branch driver controller

    CERN Document Server

    Dijksman, A

    1981-01-01

This processor has been designed for very fast data acquisition and data pre-processing. The dataway and branch highway speeds have been optimized for approximately 1.5 mu sec. The internal processor cycle is approximately 0.8 mu sec. The standard version contains the following functions (slots): crate controller type A1; branch highway driver including terminator; serial I/O port (TTY, VDU); 24 bit ALU and 24 bit program counter; 16 bit memory address counter and 4 word stack; 4k bit memory for program and/or data; battery backup for the memory; CNAFD and crate LAM display; request/grant logic for time-sharing operation of several BDCs. The free slots can be equipped with e.g. extra RAM, computer interfaces, hardware multiplier/dividers, etc. (0 refs).

  11. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general...... 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given....... The PHMM structure and the conditions of the convergence proof allows for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme is given. The results indicate that the PHMM can adapt...

  12. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

    . Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines....
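The modelling idea — transition probabilities that vary with the time of day — can be sketched with a two-state (parked/driving) chain. The smooth commuting-peak rate curve below stands in for the paper's B-spline-parameterized probabilities; all numbers are invented:

```python
import math
import random

# Sketch of an inhomogeneous two-state Markov model of vehicle use:
# state 0 = parked, 1 = driving, with an hour-of-day-dependent probability
# of starting a trip. All numbers are invented placeholders.
def p_start(hour):
    # Higher chance of starting a trip around 8h and 17h (commuting peaks).
    return 0.02 + 0.25 * (math.exp(-((hour - 8) ** 2) / 2)
                          + math.exp(-((hour - 17) ** 2) / 2))

P_END = 0.5  # probability of ending a trip in any hour, kept constant here

rng = random.Random(3)
state, driving_hours = 0, [0] * 24
for day in range(200):
    for hour in range(24):
        if state == 0 and rng.random() < p_start(hour):
            state = 1
        elif state == 1 and rng.random() < P_END:
            state = 0
        driving_hours[hour] += state

busiest = max(range(24), key=lambda h: driving_hours[h])
print("busiest hour of the day:", busiest)
```

Fitting such a model to observed trip data means estimating `p_start(hour)` and the trip-ending probability from the data; the B-spline representation keeps that parameter count manageable, as the abstract notes.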

  13. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel

. Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines....

  14. Detecting Structural Breaks using Hidden Markov Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another....... The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), that permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...

  15. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....

  16. Estimation with Right-Censored Observations Under A Semi-Markov Model.

    Science.gov (United States)

    Zhao, Lihui; Hu, X Joan

    2013-06-01

    The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study.

  17. VD-411 branch driver

    International Nuclear Information System (INIS)

    Gorbunov, N.V.; Karev, A.G.; Mal'tsev, Eh.I.; Morozov, B.A.

    1985-01-01

The VD-411 branch driver for CAMAC module control by the SM-4 computer is described. The driver handles data exchange with modules located in 28 crates grouped into 4 branches. Data exchange can be carried out either in the program mode or in the direct memory access mode. Eleven block modes and one program mode are provided. Users retain the possibility of individually programming the exchange methods in the block modes, in order to organize faster and more flexible data readout from the CAMAC modules. In the direct-access mode the driver provides transmission of data blocks of up to 64 Kwords, placing them in a computer memory of up to 2 Mbytes. The high data transmission rate and the developed interrupt system ensure efficient use of the VD-411 branch driver for data readout from facilities in high-energy physics experiments.

  18. Prediction of Annual Rainfall Pattern Using Hidden Markov Model ...

    African Journals Online (AJOL)

    ADOWIE PERE

The hidden Markov model is very influential in the stochastic world because of its ... the earth from the clouds. The usual ... Rainfall modelling and ... Markov models have become popular tools ... environmental sciences, University of Jos, Plateau State.

  19. Extending Markov Automata with State and Action Rewards

    NARCIS (Netherlands)

    Guck, Dennis; Timmer, Mark; Blom, Stefan; Bertrand, N.; Bortolussi, L.

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are

  20. AGB [asymptotic giant branch]: Star evolution

    International Nuclear Information System (INIS)

    Becker, S.A.

    1987-01-01

    Asymptotic giant branch stars are red supergiant stars of low-to-intermediate mass. This class of stars is of particular interest because many of these stars can have nuclear processed material brought up repeatedly from the deep interior to the surface where it can be observed. A review of recent theoretical and observational work on stars undergoing the asymptotic giant branch phase is presented. 41 refs

  1. Assessing type I error and power of multistate Markov models for panel data-A simulation study.

    Science.gov (United States)

    Cassarly, Christy; Martin, Renee' H; Chimowitz, Marc; Peña, Edsel A; Ramakrishnan, Viswanathan; Palesch, Yuko Y

    2017-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that the multistate Markov models preserve the type I error and adequate power is achieved with modest sample sizes for panel data with limited non-adjacent state transitions.
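A minimal version of such a simulation — panel data generated from a tridiagonal transition matrix so that only adjacent-state moves occur between visits — might look like this (the matrix, visit count, and sample size are arbitrary choices, not the paper's design):

```python
import random

# Simulate panel data under a multistate Markov model in which only
# adjacent-state transitions occur between assessments (tridiagonal matrix).
# The matrix, visit schedule, and sample size are illustrative.
P = [
    [0.7, 0.3, 0.0, 0.0],   # state 0 can stay or move to 1
    [0.2, 0.6, 0.2, 0.0],
    [0.0, 0.2, 0.6, 0.2],
    [0.0, 0.0, 0.0, 1.0],   # state 3 absorbing (e.g. worst outcome)
]

def simulate_patient(visits, rng):
    """One patient's ordinal state observed at each follow-up visit."""
    s, path = 0, [0]
    for _ in range(visits - 1):
        s = rng.choices(range(4), weights=P[s])[0]
        path.append(s)
    return path

rng = random.Random(7)
panel = [simulate_patient(5, rng) for _ in range(200)]
jumps = [abs(b - a) for path in panel for a, b in zip(path, path[1:])]
print("max single-visit state change:", max(jumps))
```

A type I error study of the kind described would repeat this generation under the null (e.g. equal treatment arms), fit the multistate model to each replicate, and record the rejection rate.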

  2. Hidden-Markov-Model Analysis Of Telemanipulator Data

    Science.gov (United States)

    Hannaford, Blake; Lee, Paul

    1991-01-01

    Mathematical model and procedure based on hidden-Markov-model concept undergoing development for use in analysis and prediction of outputs of force and torque sensors of telerobotic manipulators. In model, overall task broken down into subgoals, and transition probabilities encode ease with which operator completes each subgoal. Process portion of model encodes task-sequence/subgoal structure, and probability-density functions for forces and torques associated with each state of manipulation encode sensor signals that one expects to observe at subgoal. Parameters of model constructed from engineering knowledge of task.

  3. Perturbation theory for Markov chains via Wasserstein distance

    NARCIS (Netherlands)

    Rudolf, Daniel; Schweizer, Nikolaus

    2017-01-01

    Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. We prove powerful and flexible bounds on the distance of the nth step distributions of two Markov chains
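The flavour of such a result can be seen numerically: nudging one row of a transition matrix by 0.02 shifts the stationary distribution only slightly in total variation. The two-state chains below are arbitrary examples, and this is an empirical illustration of the theme, not the paper's Wasserstein bound:

```python
# Empirical look at Markov chain perturbation: a small change in transition
# probabilities yields a small total-variation shift in the stationary
# distribution. Both chains are arbitrary two-state examples.
def step(pi, P):
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[0.88, 0.12], [0.2, 0.8]]   # P with row 0 perturbed by 0.02

pi_p = pi_q = [0.5, 0.5]
for _ in range(500):             # iterate to the stationary distributions
    pi_p, pi_q = step(pi_p, P), step(pi_q, Q)

tv = 0.5 * sum(abs(a - b) for a, b in zip(pi_p, pi_q))
print(f"total variation distance between stationary distributions: {tv:.4f}")
```

Here the exact stationary distributions are (2/3, 1/3) and (0.625, 0.375), so the distance is 1/24 ≈ 0.042; perturbation bounds of the kind the paper proves control this quantity in terms of the size of the perturbation.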

  4. Quantum Enhanced Inference in Markov Logic Networks.

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
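The classical baseline the paper compares against — Gibbs sampling in the Markov network an MLN generates — can be illustrated on the smallest possible network: two binary variables with one coupling weight. This toy is unrelated to the quantum protocols themselves:

```python
import math
import random

# Classical Gibbs sampling on a tiny pairwise Markov network: two binary
# variables x, y whose agreement carries weight w, mimicking in miniature the
# ground Markov network an MLN generates. Toy example only.
w = 1.0  # configurations with x == y receive potential e^w, others e^0

def p_one(other):
    """P(variable = 1 | the other variable's current value)."""
    num = math.exp(w if other == 1 else 0.0)        # weight of value 1
    den = num + math.exp(w if other == 0 else 0.0)  # plus weight of value 0
    return num / den

rng = random.Random(1)
x = y = 0
agree = 0
N = 20000
for _ in range(N):                     # one Gibbs sweep per iteration
    x = 1 if rng.random() < p_one(y) else 0
    y = 1 if rng.random() < p_one(x) else 0
    agree += (x == y)

exact = math.exp(w) / (math.exp(w) + 1)  # exact P(x == y) by enumeration
print(f"Gibbs estimate of P(x = y): {agree / N:.3f} (exact {exact:.3f})")
```

In a real MLN the same conditional-resampling loop runs over the grounded network; the quantum speedups discussed in the paper target exactly this sampling step.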

  5. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…

  6. Bayesian analysis for reversible Markov chains

    NARCIS (Netherlands)

    Diaconis, P.; Rolles, S.W.W.

    2006-01-01

    We introduce a natural conjugate prior for the transition matrix of a reversible Markov chain. This allows estimation and testing. The prior arises from random walk with reinforcement in the same way the Dirichlet prior arises from Pólya’s urn. We give closed form normalizing constants, a simple

  7. Discounted Markov games : generalized policy iteration method

    NARCIS (Netherlands)

    Wal, van der J.

    1978-01-01

In this paper, we consider two-person zero-sum discounted Markov games with finite state and action spaces. We show that the Newton-Raphson or policy iteration method as presented by Pollatschek and Avi-Itzhak does not necessarily converge, contradicting a proof of Rao, Chandrasekaran, and Nair.

  8. Hidden Markov Models for Human Genes

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren; Chauvin, Yves

    1997-01-01

    We analyse the sequential structure of human genomic DNA by hidden Markov models. We apply models of widely different design: conventional left-right constructs and models with a built-in periodic architecture. The models are trained on segments of DNA sequences extracted such that they cover com...

  9. Markov Trends in Macroeconomic Time Series

    NARCIS (Netherlands)

    R. Paap (Richard)

    1997-01-01

    textabstractMany macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the

  10. Optimal dividend distribution under Markov regime switching

    NARCIS (Netherlands)

    Jiang, Z.; Pistorius, M.

    2012-01-01

    We investigate the problem of optimal dividend distribution for a company in the presence of regime shifts. We consider a company whose cumulative net revenues evolve as a Brownian motion with positive drift that is modulated by a finite state Markov chain, and model the discount rate as a

  11. Revisiting Weak Simulation for Substochastic Markov Chains

    DEFF Research Database (Denmark)

    Jansen, David N.; Song, Lei; Zhang, Lijun

    2013-01-01

of the logic PCTL\X, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...

  12. Multi-dimensional quasitoeplitz Markov chains

    Directory of Open Access Journals (Sweden)

    Alexander N. Dudin

    1999-01-01

Full Text Available This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP-input, which operate in a synchronous random environment.

  13. Markov chains with quasitoeplitz transition matrix

    Directory of Open Access Journals (Sweden)

    Alexander M. Dukhovny

    1989-01-01

Full Text Available This paper investigates a class of Markov chains which are frequently encountered in various applications (e.g. queueing systems, dams, and inventories with feedback). Generating functions of transient and steady state probabilities are found by solving a special Riemann boundary value problem on the unit circle. A criterion of ergodicity is established.

  14. Markov Chain Estimation of Avian Seasonal Fecundity

    Science.gov (United States)

    To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...

  15. Model Checking Infinite-State Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.

    2004-01-01

    In this paper algorithms for model checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains of so-called quasi birth-death type are developed. In doing so we extend the applicability of CSL model checking beyond the recently proposed case for finite-state

  16. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  17. Quantum Enhanced Inference in Markov Logic Networks

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  18. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

In the past, probabilistic model checking has mostly been restricted to finite-state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special

  19. Hidden Markov models for labeled sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1994-01-01

    A hidden Markov model for labeled observations, called a class HMM, is introduced and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences it is trained to optimize recognition. It resembles MMI...

  20. Tracheobronchial Branching Anomalies

    International Nuclear Information System (INIS)

    Hong, Min Ji; Kim, Young Tong; Jou, Sung Shick; Park, A Young

    2010-01-01

    There are various congenital anomalies with respect to the number, length, diameter, and location of tracheobronchial branching patterns. The tracheobronchial anomalies are classified into two groups. The first one, anomalies of division, includes tracheal bronchus, cardiac bronchus, tracheal diverticulum, pulmonary isomerism, and minor variations. The second one, dysmorphic lung, includes lung agenesis-hypoplasia complex and lobar agenesis-aplasia complex