WorldWideScience

Sample records for continuous time markov

  1. STATISTICAL ANALYSIS OF NOTATIONAL AFL DATA USING CONTINUOUS TIME MARKOV CHAINS

    Directory of Open Access Journals (Sweden)

    Denny Meyer

    2006-12-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for these data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated.

  2. Model checking conditional CSL for continuous-time Markov chains

    DEFF Research Database (Denmark)

    Gao, Yang; Xu, Ming; Zhan, Naijun

    2013-01-01

    In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To this end, we extend Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional...

  3. Continuous-time Markov decision processes theory and applications

    CERN Document Server

    Guo, Xianping

    2009-01-01

    This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.

  4. Computing continuous-time Markov chains as transformers of unbounded observables

    DEFF Research Database (Denmark)

    Danos, Vincent; Heindel, Tobias; Garnier, Ilias

    2017-01-01

    The paper studies continuous-time Markov chains (CTMCs) as transformers of real-valued functions on their state space, considered as generalised predicates and called observables. Markov chains are assumed to take values in a countable state space S; observables f: S → ℝ may be unbounded...

  5. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    Science.gov (United States)

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods, particularly if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
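
    The key quantity in the record above, the expected time spent in each state of a continuous-time Markov model, has a compact matrix-exponential form. The following minimal sketch (not the authors' implementation; the generator, horizon and discount rate are hypothetical) computes discounted state occupancy times for a finite state space via the Van Loan block-matrix identity.

```python
import numpy as np
from scipy.linalg import expm

def expected_state_times(Q, p0, T, r=0.0):
    """Expected (discounted) time spent in each state of a CTMC over [0, T].

    Q  : generator matrix (rows sum to zero), shape (n, n)
    p0 : initial state distribution, shape (n,)
    T  : time horizon
    r  : continuous discount rate (r = 0 gives plain occupancy times)
    """
    n = Q.shape[0]
    F = Q - r * np.eye(n)            # discounting simply shifts the generator
    # Van Loan block trick: top-right block of expm(M*T) is \int_0^T e^{Ft} dt
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = F
    M[:n, n:] = np.eye(n)
    occupancy = expm(M * T)[:n, n:]
    return p0 @ occupancy            # expected time per state

# Hypothetical 3-state disease model: well -> ill -> dead (rates are made up)
Q = np.array([[-0.20,  0.15, 0.05],
              [ 0.10, -0.40, 0.30],
              [ 0.00,  0.00, 0.00]])
p0 = np.array([1.0, 0.0, 0.0])
print(expected_state_times(Q, p0, T=10.0, r=0.03))
```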

  6. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
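
    One of the three methods named above, uniformization, evaluates transition probabilities as a Poisson mixture of powers of a discrete-time matrix. A minimal sketch, assuming a finite conservative generator (the example chain is hypothetical):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

def transition_probs_uniformization(Q, t, tol=1e-12):
    """P(t) = sum_k Poisson(k; L*t) R^k with R = I + Q/L, L >= max exit rate."""
    L = max(-np.diag(Q))                        # uniformization rate
    R = np.eye(Q.shape[0]) + Q / L
    K = int(poisson.ppf(1.0 - tol, L * t)) + 1  # truncate the Poisson series
    P = np.zeros_like(Q, dtype=float)
    Rk = np.eye(Q.shape[0])
    for k in range(K + 1):
        P += poisson.pmf(k, L * t) * Rk
        Rk = Rk @ R
    return P

# Hypothetical 2-state chain: the series agrees with the matrix exponential
Q = np.array([[-1.0, 1.0],
              [ 0.5, -0.5]])
print(np.max(np.abs(transition_probs_uniformization(Q, 2.0) - expm(Q * 2.0))))
```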

  7. Subgeometric Ergodicity Analysis of Continuous-Time Markov Chains under Random-Time State-Dependent Lyapunov Drift Conditions

    Directory of Open Access Journals (Sweden)

    Mokaedi V. Lekgari

    2014-01-01

    We investigate random-time state-dependent Foster-Lyapunov analysis on subgeometric rate ergodicity of continuous-time Markov chains (CTMCs). We are mainly concerned with making use of the available results on deterministic state-dependent drift conditions for CTMCs and on random-time state-dependent drift conditions for discrete-time Markov chains and transferring them to CTMCs.

  8. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, P.; van Doorn, E.A.

    2001-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\Pi$ is the matrix $D \equiv \int_0^{\infty} (P(t)-\Pi)\,dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a

  9. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, Pauline; van Doorn, Erik A.

    2002-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\Pi$ is the matrix $D \equiv \int_0^{\infty} (P(t)-\Pi)\,dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
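
    For a finite ergodic chain, the defining integral above can be checked numerically against the closed-form representation $D = (\Pi - Q)^{-1} - \Pi$, a standard fundamental-matrix identity (stated here as an assumption consistent with, but not quoted from, the records above). A sketch with a hypothetical two-state generator:

```python
import numpy as np
from scipy.linalg import expm, null_space

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])            # hypothetical ergodic generator

pi = null_space(Q.T)[:, 0]              # stationary distribution: pi Q = 0
pi /= pi.sum()
Pi = np.tile(pi, (2, 1))                # ergodic matrix: every row equals pi

# Closed-form deviation matrix for a finite ergodic chain
D = np.linalg.inv(Pi - Q) - Pi

# Check against the defining integral D = int_0^inf (P(t) - Pi) dt
ts = np.linspace(0.0, 50.0, 20001)
vals = np.array([expm(Q * t) - Pi for t in ts])
D_num = np.trapz(vals, ts, axis=0)
print(np.max(np.abs(D - D_num)))        # agreement to integration accuracy
```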

  10. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    Science.gov (United States)

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  11. Parallel algorithms for simulating continuous time Markov chains

    Science.gov (United States)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
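
    The uniformization construction that underlies the synchronization scheme described above replaces state-dependent exponential clocks with a single Poisson clock of rate L plus self-loops. A serial sketch of that construction only (the paper's parallel machinery is not reproduced; the generator is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_uniformized(Q, x0, T):
    """Simulate a CTMC path on [0, T] via uniformization: one Poisson clock of
    rate L drives all (pseudo-)events, and states move according to the DTMC
    R = I + Q/L, whose diagonal entries act as self-loops. The shared clock is
    what makes the construction a natural synchronization basis."""
    L = max(-np.diag(Q))
    R = np.eye(Q.shape[0]) + Q / L
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.exponential(1.0 / L)       # next Poisson event
        if t > T:
            return path
        x = rng.choice(R.shape[0], p=R[x])  # may be a self-loop (pseudo-event)
        path.append((t, x))

Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])               # hypothetical generator
print(simulate_uniformized(Q, 0, 5.0)[:5])
```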

  12. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
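
    In rejection-free kinetic Monte Carlo, the physical time increment attached to each step is exponential with the total event rate, which is exactly what gives the simulation a physical time scale rather than a count of Monte Carlo steps. A minimal sketch (the rate vector is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates):
    """One rejection-free (BKL-type) kinetic Monte Carlo step: pick an event
    with probability proportional to its rate, and advance physical time by an
    exponential increment with the *total* rate. The time increment, not the
    step count, carries the physical time scale."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)
    return event, dt

rates = np.array([0.3, 1.2, 0.05])   # hypothetical event rates
t = 0.0
for _ in range(3):
    event, dt = kmc_step(rates)
    t += dt
    print(f"event {event} fired, physical time now {t:.3f}")
```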

  13. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time reaching the absorbing state. We assume that the system is described by methods of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
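
    In this model the failure time is the absorption time of a CTMC, i.e. a phase-type random variable, and the likelihood terms that an EM algorithm needs are built from exponentials of the m×m sub-generator. A sketch of the density evaluation only (the paper's E- and M-steps are omitted; the two-phase parameters are hypothetical):

```python
import numpy as np
from scipy.linalg import expm

def phase_type_density(alpha, S, t):
    """Density of the absorption (failure) time of a CTMC:
    f(t) = alpha @ expm(S t) @ s, where S is the sub-generator over the m
    transient states and s = -S @ 1 holds the rates into the absorbing state."""
    s = -S.sum(axis=1)
    return float(alpha @ expm(S * t) @ s)

# Hypothetical 2-phase model (gives a non-constant failure rate)
alpha = np.array([1.0, 0.0])          # initial distribution over phases
S = np.array([[-1.5,  1.0],
              [ 0.0, -0.5]])
print(phase_type_density(alpha, S, 2.0))
```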

  14. Nonequilibrium thermodynamic potentials for continuous-time Markov chains.

    Science.gov (United States)

    Verley, Gatien

    2016-01-01

    We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes, ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials: they reach their maximum in the NE stationary state, their first derivatives produce the NE equations of state, and their second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.

  15. Fitting timeseries by continuous-time Markov chains: A quadratic programming approach

    International Nuclear Information System (INIS)

    Crommelin, D.T.; Vanden-Eijnden, E.

    2006-01-01

    Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the objective function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows
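
    For contrast with the quadratic-programming fit described above, the following sketch shows the naive route: estimate the discrete-time transition matrix from the series and take a matrix logarithm. The result often fails to be a valid generator (negative off-diagonal entries), which is one motivation for a constrained fit; this code is an assumption-laden illustration, not the authors' method.

```python
import numpy as np
from scipy.linalg import logm

def empirical_transition_matrix(series, n_states):
    """Row-normalised one-step transition counts (assumes each state occurs)."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        C[a, b] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def naive_generator(P, dt):
    """Crude estimate Q = logm(P)/dt. Off-diagonal entries can come out
    negative, i.e. no valid generator reproduces P exactly; the QP fit of the
    record above is designed to handle that case gracefully."""
    return logm(P).real / dt

series = [0, 0, 1, 2, 1, 0, 1, 1, 2, 2, 1, 0]   # toy symbol sequence
P = empirical_transition_matrix(series, 3)
print(naive_generator(P, dt=0.1))
```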

  16. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  17. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Assuming the disease process progresses accordingly, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Nuclide transport of decay chain in the fractured rock medium: a model using continuous time Markov process

    International Nuclear Information System (INIS)

    Younmyoung Lee; Kunjai Lee

    1995-01-01

    A model using a continuous time Markov process for nuclide transport of a decay chain of arbitrary length in the fractured rock medium has been developed. Considering the fracture in the rock matrix as a finite number of compartments, the transition probabilities for the nuclide are obtained from the transition intensities between and out of the compartments via the Chapman-Kolmogorov equation, from which the expectation and the variance of the nuclide distribution in the fractured rock medium can be obtained. A comparison between the continuous time Markov process model and available analytical solutions for the nuclide transport of three-member decay chains without rock matrix diffusion has been made, showing comparatively good agreement. Fittings with experimental breakthrough curves obtained with nonsorbing materials such as NaLS and uranine in the artificial fractured rock are also made. (author)

  19. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions depend not only on the present state (Markov assumption) but also on the past through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.

  20. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    In this paper, the NDVI time series forecasting model has been developed based on the use of a discrete time, continuous state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space, which undergoes transitions from one state to another within that space with some probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous state Markov chain model is analytically described. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.

  1. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Continuous-Time Semi-Markov Models in Health Economic Decision Making : An Illustrative Example in Heart Failure Disease Management

    NARCIS (Netherlands)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease

  3. A study on the stochastic model for nuclide transport in the fractured porous rock using continuous time Markov process

    International Nuclear Information System (INIS)

    Lee, Youn Myoung

    1995-02-01

    As a newly approaching model, a stochastic model using continuous time Markov process for nuclide decay chain transport of arbitrary length in the fractured porous rock medium has been proposed, by which the need for solving a set of partial differential equations corresponding to various sets of side conditions can be avoided. Once the single planar fracture in the rock matrix is represented by a series of finite number of compartments having region wise constant parameter values in them, the medium is continuous in view of various processes associated with nuclide transport but discrete in medium space and such geologic system is assumed to have Markov property, since the Markov process requires that only the present value of the time dependent random variable be known to determine the future value of random variable, nuclide transport in the medium can then be modeled as a continuous time Markov process. Processes that are involved in nuclide transport are advective transport due to groundwater flow, diffusion into the rock matrix, adsorption onto the wall of the fracture and within the pores in the rock matrix, and radioactive decay chain. The transition probabilities for nuclide from the transition intensities between and out of the compartments are represented utilizing Chapman-Kolmogorov equation, through which the expectation and the variance of nuclide distribution for each compartment or the fractured rock medium can be obtained. Some comparisons between Markov process model developed in this work and available analytical solutions for one-dimensional layered porous medium, fractured medium with rock matrix diffusion, and porous medium considering three member nuclide decay chain without rock matrix diffusion have been made showing comparatively good agreement for all cases. To verify the model developed in this work another comparative study was also made by fitting the experimental data obtained with NaLS and uranine running in the artificial fractured

  4. A toolbox for safety instrumented system evaluation based on improved continuous-time Markov chain

    Science.gov (United States)

    Wardana, Awang N. I.; Kurniady, Rahman; Pambudi, Galih; Purnama, Jaka; Suryopratomo, Kutut

    2017-08-01

    A safety instrumented system (SIS) is designed to restore a plant to a safe condition when a pre-hazardous event occurs. It has a vital role, especially in the process industries. A SIS must meet its safety requirement specification, and to confirm this the SIS must be evaluated. Typically, the evaluation is calculated by hand. This paper presents a toolbox for SIS evaluation, developed based on an improved continuous-time Markov chain, that supports a detailed evaluation approach. The paper also illustrates an industrial application of the toolbox to evaluate the arch burner safety system of a primary reformer. The results of the case study demonstrate that the toolbox can be used to evaluate an industrial SIS in detail and to plan the maintenance strategy.

  5. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  6. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    DEFF Research Database (Denmark)

    Tataru, Paula Cristina; Hobolth, Asger

    2011-01-01

    BACKGROUND: Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. RESULTS: We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. An implementation of the algorithms is available at www.birc.au.dk/~paula/. CONCLUSIONS: We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.

  7. Mission reliability of semi-Markov systems under generalized operational time requirements

    International Nuclear Information System (INIS)

    Wu, Xiaoyue; Hillston, Jane

    2015-01-01

    Mission reliability of a system depends on specific criteria for mission success. To evaluate the mission reliability of some mission systems that do not need to work normally for the whole mission time, two types of mission reliability for such systems are studied. The first type corresponds to the mission requirement that the system must remain operational continuously for a minimum time within the given mission time interval, while the second corresponds to the mission requirement that the total operational time of the system within the mission time window must be greater than a given value. Based on Markov renewal properties, matrix integral equations are derived for semi-Markov systems. Numerical algorithms and a simulation procedure are provided for both types of mission reliability. Two examples are used for illustration purposes. One is a one-unit repairable Markov system, and the other is a cold standby semi-Markov system consisting of two components. By the proposed approaches, the mission reliability of systems with time redundancy can be more precisely estimated to avoid possible unnecessary redundancy of system resources. - Highlights: • Two types of mission reliability under generalized requirements are defined. • Equations for both types of reliability are derived for semi-Markov systems. • Numerical methods are given for solving both types of reliability. • Simulation procedure is given for estimating both types of reliability. • Verification of the numerical methods is given by the results of simulation

  8. The Green-Kubo formula, autocorrelation function and fluctuation spectrum for finite Markov chains with continuous time

    International Nuclear Information System (INIS)

    Chen Yong; Chen Xi; Qian Minping

    2006-01-01

    A general form of the Green-Kubo formula, which describes the fluctuations pertaining to all the steady states whether equilibrium or non-equilibrium, for a system driven by a finite Markov chain with continuous time (briefly, MC) $\{\xi_t\}$, is shown. The equivalence of different forms of the Green-Kubo formula is exploited. We also look at the differences in terms of the autocorrelation function and the fluctuation spectrum between the equilibrium state and the non-equilibrium steady state. Also, if the MC is in the non-equilibrium steady state, we can always find a complex function $\psi$, such that the fluctuation spectrum of $\{\phi(\xi_t)\}$ is non-monotonous in $[0,+\infty)$

  9. Continuous strong Markov processes in dimension one a stochastic calculus approach

    CERN Document Server

    Assing, Sigurd

    1998-01-01

    The book presents an in-depth study of arbitrary one-dimensional continuous strong Markov processes using methods of stochastic calculus. Departing from the classical approaches, a unified investigation of regular as well as arbitrary non-regular diffusions is provided. A general construction method for such processes, based on a generalization of the concept of a perfect additive functional, is developed. The intrinsic decomposition of a continuous strong Markov semimartingale is discovered. The book also investigates relations to stochastic differential equations and fundamental examples of irregular diffusions.

  10. Markov chains and mixing times

    CERN Document Server

    Levin, David A; Wilmer, Elizabeth L

    2009-01-01

    This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r

  11. The Green-Kubo formula, autocorrelation function and fluctuation spectrum for finite Markov chains with continuous time

    Energy Technology Data Exchange (ETDEWEB)

    Chen Yong; Chen Xi; Qian Minping [School of Mathematical Sciences, Peking University, Beijing 100871 (China)

    2006-03-17

    A general form of the Green-Kubo formula, which describes the fluctuations pertaining to all the steady states whether equilibrium or non-equilibrium, for a system driven by a finite Markov chain with continuous time (briefly, MC) $\{\xi_t\}$, is shown. The equivalence of different forms of the Green-Kubo formula is exploited. We also look at the differences in terms of the autocorrelation function and the fluctuation spectrum between the equilibrium state and the non-equilibrium steady state. Also, if the MC is in the non-equilibrium steady state, we can always find a complex function $\psi$, such that the fluctuation spectrum of $\{\phi(\xi_t)\}$ is non-monotonous in $[0,+\infty)$

  12. Timed Comparisons of Semi-Markov Processes

    DEFF Research Database (Denmark)

    Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio

    2018-01-01

    We consider semi-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being "faster than" between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...

  13. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    Science.gov (United States)

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
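
    The EXPM approach mentioned above reduces conditional expectations to integrals of matrix exponentials, which can themselves be evaluated with a single matrix exponential of a doubled system (the Van Loan identity). A sketch assuming a finite state space; function and variable names are illustrative, not the authors' R code:

```python
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, a, b, c, T):
    """E[time spent in state c on [0, T] | X_0 = a, X_T = b], computed with one
    matrix exponential of a doubled system (Van Loan): the top-right block of
    expm([[Q, E_cc], [0, Q]] * T) equals int_0^T e^{Qs} E_cc e^{Q(T-s)} ds."""
    n = Q.shape[0]
    E = np.zeros((n, n))
    E[c, c] = 1.0
    M = np.block([[Q, E], [np.zeros((n, n)), Q]])
    integral = expm(M * T)[:n, n:]
    return integral[a, b] / expm(Q * T)[a, b]

# Hypothetical 3-state rate matrix (a toy nucleotide-like model)
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  1.0, -2.0]])
print(expected_time_in_state(Q, a=0, b=2, c=1, T=1.0))
```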

  14. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and

  15. Optimal Time-Abstract Schedulers for CTMDPs and Markov Games

    Directory of Open Access Journals (Sweden)

    Markus Rabe

    2010-06-01

    We study time-bounded reachability in continuous-time Markov decision processes for time-abstract scheduler classes. Such reachability problems play a paramount role in dependability analysis and the modelling of manufacturing and queueing systems. Consequently, their analysis has been studied intensively, and techniques for the approximation of optimal control are well understood. From a mathematical point of view, however, the question of approximation is secondary compared to the fundamental question whether or not optimal control exists. We demonstrate the existence of optimal schedulers for the time-abstract scheduler classes for all CTMDPs. Our proof is constructive: we show how to compute optimal time-abstract strategies with finite memory. It turns out that these optimal schedulers have an amazingly simple structure: they converge to an easy-to-compute memoryless scheduling policy after a finite number of steps. Finally, we show that our argument can easily be lifted to Markov games: we show that both players have a likewise simple optimal strategy in these more general structures.

  16. Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

    International Nuclear Information System (INIS)

    Costa, O. L. V.; Dufour, F.

    2011-01-01

    This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space $\mathbb{R}^n$. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter ε>0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.

  17. Relative entropy and waiting time for continuous-time Markov processes

    NARCIS (Netherlands)

    Chazottes, J.R.; Giardinà, C.; Redig, F.H.J.

    2006-01-01

    For discrete-time stochastic processes, there is a close connection between return (resp. waiting) times and entropy (resp. relative entropy). Such a connection cannot be straightforwardly extended to the continuous-time setting. Contrarily to the discrete-time case one needs a reference measure on

  18. Process Algebra and Markov Chains

    NARCIS (Netherlands)

    Brinksma, Hendrik; Hermanns, H.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  19. Process algebra and Markov chains

    NARCIS (Netherlands)

    Brinksma, E.; Hermanns, H.; Brinksma, E.; Hermanns, H.; Katoen, J.P.

    2001-01-01

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  20. Harmonic spectral components in time sequences of Markov correlated events

    Science.gov (United States)

    Mazzetti, Piero; Carbone, Anna

    2017-07-01

    The paper concerns the analysis of the conditions allowing time sequences of Markov correlated events to give rise to a line power spectrum of relevant physical interest. It is found that by specializing the Markov matrix to represent closed loop sequences of events with arbitrary distribution, generated in a steady physical condition, a large set of line spectra covering all possible frequency values is obtained. The amplitude of the spectral lines is given by a matrix equation based on a generalized Markov matrix involving the Fourier transform of the distribution functions representing the time intervals between successive events of the sequence. The paper complements a previous work in which a general expression for the continuous power spectrum was given. In that case the Markov matrix was left in a more general form, thus preventing the possibility of finding line spectra of physical interest. The present extension is also suggested by the interest in explaining the emergence of a broad set of waves found in the electro- and magneto-encephalograms, whose frequency ranges from 0.5 to about 40 Hz, in terms of the effects produced by chains of firing neurons within the complex neural network of the brain. An original model based on synchronized closed loop sequences of firing neurons is proposed, and a few numerical simulations are reported as an application of the above cited equation.

  1. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    Directory of Open Access Journals (Sweden)

    Mindaugas Snipas

    2015-01-01

    The primary goal of this work was to study advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. All of that allowed us to develop efficient numerical methods for a steady-state solution of CTMC models and to reduce the CPU time necessary to solve such models by a factor of ∼20.

  2. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    Science.gov (United States)

    Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Bukauskas, Feliksas F.

    2015-01-01

    The primary goal of this work was to study advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. All of that allowed us to develop efficient numerical methods for a steady-state solution of CTMC models and to reduce the CPU time necessary to solve such models by a factor of ∼20. PMID:25705700
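
    The block structure exploited above comes from the Kronecker (tensor) form of a SAN generator: independent automata contribute a Kronecker sum, and synchronizing events add Kronecker products. A sketch of the independent part only, with hypothetical two-state gates (not the channel model of the paper):

```python
import numpy as np

def san_generator(local_generators):
    """Kronecker-sum generator Q = Q1 (+) Q2 (+) ... for a network of
    *independent* stochastic automata. Synchronizing events (not shown) enter
    as additional Kronecker products of local event matrices; either way the
    global generator never has to be assembled state by state."""
    Q = np.array([[0.0]])
    for Qi in local_generators:
        Q = np.kron(Q, np.eye(Qi.shape[0])) + np.kron(np.eye(Q.shape[0]), Qi)
    return Q

# Two hypothetical two-state gates of a channel model
Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q2 = np.array([[-0.5, 0.5], [0.3, -0.3]])
Q = san_generator([Q1, Q2])
print(Q.shape, np.allclose(Q.sum(axis=1), 0.0))   # (4, 4) True
```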

  3. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background Continuous time Markov chains (CTMCs is a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes are unaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD, the second on uniformization (UNI, and the third on integrals of matrix exponentials (EXPM. The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.

  4. A Hybrid Secure Scheme for Wireless Sensor Networks against Timing Attacks Using Continuous-Time Markov Chain and Queueing Model.

    Science.gov (United States)

    Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin

    2016-09-28

    Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing between security and the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs while maintaining the requirement of the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model is put forward, and the tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attribute failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and the optimal rekeying rate of the performance and security tradeoff can be obtained.
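
    The "mean time to security attribute failure" evaluated above is, in CTMC terms, a mean time to absorption, which solves a linear system in the sub-generator over the non-failed states. A sketch with a hypothetical compromise/rekeying model; the states and rates are illustrative, not taken from the paper:

```python
import numpy as np

def mean_time_to_failure(S):
    """Mean time to absorption from each transient state of a CTMC: solve
    S tau = -1 for the sub-generator S over the non-failed states."""
    return np.linalg.solve(S, -np.ones(S.shape[0]))

# Hypothetical model: healthy <-> compromised -> failed, where rekeying (rate
# mu) repairs a compromised node; all rates are illustrative only
lam, mu, gamma = 0.2, 1.0, 0.5
S = np.array([[-lam,           lam],
              [  mu, -(mu + gamma)]])
print(mean_time_to_failure(S))   # MTTF from 'healthy' and from 'compromised'
```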

  5. Markov chains and semi-Markov models in time-to-event analysis.

    Science.gov (United States)

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
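
    One of the features noted above, competing risks, is handled naturally by a CTMC with several absorbing states: the probability of each outcome solves a linear system in the transient sub-generator. A minimal sketch with hypothetical rates:

```python
import numpy as np

def absorption_probabilities(S, R):
    """Competing-risks outcome probabilities for a CTMC.
    S : sub-generator over the transient states
    R : rates from each transient state into each absorbing outcome
    Returns B[i, k] = P(absorbed in outcome k | start in transient state i)."""
    return np.linalg.solve(-S, R)

# Hypothetical two transient disease states and two competing outcomes
S = np.array([[-0.7,  0.3],
              [ 0.2, -0.6]])
R = np.array([[0.3, 0.1],
              [0.1, 0.3]])
B = absorption_probabilities(S, R)
print(B, B.sum(axis=1))   # each row sums to 1
```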

  6. Regeneration and general Markov chains

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kalashnikov

    1994-01-01

    Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations, and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide-sense regenerative process with the help of the splitting construction.

  7. Markov chains and mixing times

    CERN Document Server

    Levin, David A

    2017-01-01

    Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...

  8. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: Markov property, discrete time Markov chain (DTMC) and continuous time Markov chain (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
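
    The steady-state distributions mentioned above are obtained from the global balance equations. A minimal sketch for a CTMC, illustrated on a hypothetical M/M/1 queue truncated at 3 customers (a birth-death chain):

```python
import numpy as np

def steady_state(Q):
    """Solve the global balance equations pi Q = 0, sum(pi) = 1, by replacing
    one redundant balance equation with the normalisation constraint."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Hypothetical M/M/1 queue truncated at 3 customers
lam, mu = 1.0, 1.5
Q = np.array([[-lam,         lam,         0.0,  0.0],
              [  mu, -(lam + mu),         lam,  0.0],
              [ 0.0,          mu, -(lam + mu),  lam],
              [ 0.0,         0.0,          mu,  -mu]])
print(steady_state(Q))
```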

  9. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
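
    The model class above runs a homogeneous chain on a transformed clock, so transition probabilities over a calendar interval need only the generator and the operational-time increment. A sketch in which the power-law transform g and all rates are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np
from scipy.linalg import expm

def transition_matrix(Q, t0, t1, g):
    """Transition probabilities of a nonhomogeneous process obtained by running
    a homogeneous chain with generator Q on an operational clock g:
    P(t0, t1) = expm(Q * (g(t1) - g(t0)))."""
    return expm(Q * (g(t1) - g(t0)))

# Hypothetical power-law transform (increasing, g(0)=0): with a < 1 transition
# rates decelerate in calendar time, as after an acute event
g = lambda t, a=0.5, b=2.0: (t / b) ** a
Q = np.array([[-0.4,  0.4],
              [ 0.1, -0.1]])
print(transition_matrix(Q, 1.0, 3.0, g))
```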

  10. Markov chains theory and applications

    CERN Document Server

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest. The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the

  11. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
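
    For a discrete-time chain, large-deviation rate functions of additive observables can also be computed by "tilting" the transition matrix and Legendre-transforming the resulting scaled cumulant generating function; this is a standard route consistent with, but not identical to, the sampling method of the record above. A sketch with a hypothetical two-state chain:

```python
import numpy as np

def scgf(P, f, s):
    """Scaled cumulant generating function of A_n = sum_k f(x_k) for a
    discrete-time chain: lambda(s) = log of the largest eigenvalue of the
    tilted matrix P_tilde[x, y] = P[x, y] * exp(s * f(y))."""
    tilted = P * np.exp(s * np.asarray(f))[None, :]
    return np.log(np.max(np.linalg.eigvals(tilted).real))

# Rate function by numerical Legendre transform: I(a) = sup_s (s*a - lambda(s))
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])          # hypothetical two-state chain
f = [0.0, 1.0]                      # observable: fraction of time in state 1
ss = np.linspace(-5.0, 5.0, 201)
lams = np.array([scgf(P, f, s) for s in ss])
a = 0.3
print(np.max(ss * a - lams))        # I(0.3)
```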

  12. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...

  13. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    Science.gov (United States)

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
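
    Of the three approaches unified above, rejection sampling is the easiest to sketch; the naive (unmodified) version below simply simulates forward and discards paths with the wrong endpoint, which is exactly the regime where the paper's analysis shows the other samplers prevail. The generator and endpoints here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def endpoint_conditioned_path(Q, a, b, T, max_tries=100_000):
    """Naive rejection sampler: simulate the CTMC forward from state a on
    [0, T] and accept only paths that end in b. Efficient when P(X_T=b|X_0=a)
    is large; otherwise modified rejection sampling, direct sampling or
    uniformization should be preferred."""
    n = Q.shape[0]
    for _ in range(max_tries):
        t, x, path = 0.0, a, [(0.0, a)]
        while True:
            rate = -Q[x, x]
            if rate == 0.0:                      # absorbing: stay until T
                break
            dt = rng.exponential(1.0 / rate)
            if t + dt > T:
                break
            t += dt
            x = rng.choice(n, p=Q[x].clip(min=0.0) / rate)
            path.append((t, x))
        if x == b:
            return path
    raise RuntimeError("acceptance too low; use a smarter sampler")

Q = np.array([[-1.0, 1.0],
              [ 0.5, -0.5]])                     # hypothetical generator
print(endpoint_conditioned_path(Q, a=0, b=1, T=1.0))
```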

  14. Verification of Open Interactive Markov Chains

    OpenAIRE

    Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech

    2012-01-01

    Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far however, IMC verification did not consider compositionality properties, but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....

  15. Model Checking Markov Reward Models with Impulse Rewards

    NARCIS (Netherlands)

    Cloth, Lucia; Katoen, Joost-Pieter; Khattri, Maneesh; Pulungan, Reza; Bondavalli, Andrea; Haverkort, Boudewijn; Tang, Dong

    This paper considers model checking of Markov reward models (MRMs), continuous-time Markov chains with state rewards as well as impulse rewards. The reward extension of the logic CSL (Continuous Stochastic Logic) is interpreted over such MRMs, and two numerical algorithms are provided to check the

  16. Recursive smoothers for hidden discrete-time Markov chains

    Directory of Open Access Journals (Sweden)

    Lakhdar Aggoun

    2005-01-01

    Full Text Available We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation maximization (EM) algorithm.

  17. Mapping of uncertainty relations between continuous and discrete time.

    Science.gov (United States)

    Chiuchiù, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  18. Markov processes

    CERN Document Server

    Kirkwood, James R

    2015-01-01

    Review of Probability: Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises. Discrete-Time, Finite-State Markov Chains: Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler's Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises. Discrete-Time, Infinite-State Markov Chains: Renewal Processes; Delayed Renewal Processes; Equilibrium State f...

  19. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized...... facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared...... daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions....

  20. a Continuous-Time Positive Linear System

    Directory of Open Access Journals (Sweden)

    Kyungsup Kim

    2013-01-01

    Full Text Available This paper discusses a computational method to construct positive realizations with sparse matrices for continuous-time positive linear systems with multiple complex poles. To construct a positive realization of a continuous-time system, we use a Markov sequence similar to the impulse response sequence that is used in the discrete-time case. The existence of the proposed positive realization can be analyzed with the concept of a polyhedral convex cone. We provide a constructive algorithm to compute positive realizations with sparse matrices of some positive systems under certain conditions. A sufficient condition for the existence of a positive realization, under which the proposed constructive algorithm works well, is analyzed.

  1. Markov Trends in Macroeconomic Time Series

    NARCIS (Netherlands)

    R. Paap (Richard)

    1997-01-01

    textabstractMany macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the

  2. Utilization of two web-based continuing education courses evaluated by Markov chain model.

    Science.gov (United States)

    Tian, Hao; Lin, Jin-Mann S; Reeves, William C

    2012-01-01

    To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists.
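
    The core computation behind such a model is a row-normalised count of observed page-to-page transitions. A minimal sketch, with invented page names standing in for the courses' actual web pages:

        from collections import defaultdict

        def transition_probabilities(paths):
            # First-order transition probabilities estimated from click streams.
            counts = defaultdict(lambda: defaultdict(int))
            for path in paths:
                for src, dst in zip(path, path[1:]):
                    counts[src][dst] += 1
            return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
                    for src, dsts in counts.items()}

        # Hypothetical navigation paths through course pages
        paths = [["intro", "module1", "module2", "quiz"],
                 ["intro", "module1", "exit"],
                 ["intro", "module1", "module2", "exit"]]
        P = transition_probabilities(paths)
        print(P["module2"])    # {'quiz': 0.5, 'exit': 0.5}

    Rarely used skip transitions and high-probability paths into an exit page, of the kind discussed above, can be read directly off such a matrix.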

  3. Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model referred to as the two-dimensional continuous 3 × 3 order hidden Markov model is put forward to avoid the disadvantages of the classical hypothesis of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies on not only immediate horizontal and vertical states but also the immediate diagonal state, and in which the probability density of the observation relies on not only the current state but also immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are theoretically derived, by exploiting the idea that the sequences of states on rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because there are more statistical characteristics in the structure of the proposed new model, it can more accurately describe some practical problems, as compared to the two-dimensional continuous hidden Markov model.

  4. Extending Markov Automata with State and Action Rewards

    NARCIS (Netherlands)

    Guck, Dennis; Timmer, Mark; Blom, Stefan; Bertrand, N.; Bortolussi, L.

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are

  5. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    Science.gov (United States)

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models, in which the observations are coming from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month for an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High' with the mean counts of 1.4, 6.6 and 20.2 and the estimated average duration of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
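
    For readers who want to reproduce the decoding step, a compact Viterbi routine for a Poisson hidden Markov model could look as follows. The three Poisson means are those reported above (1.4, 6.6 and 20.2); the initial distribution, transition matrix and count series are invented, since the article's estimates are not reproduced here.

        import numpy as np
        from scipy.stats import poisson

        def viterbi_poisson(counts, pi, A, lambdas):
            # Most likely hidden state path under a Poisson HMM.
            n, m = len(counts), len(lambdas)
            logB = poisson.logpmf(np.asarray(counts)[:, None], lambdas)   # (n, m)
            delta = np.log(pi) + logB[0]
            psi = np.zeros((n, m), dtype=int)
            for t in range(1, n):
                scores = delta[:, None] + np.log(A)    # scores[i, j]: i -> j
                psi[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + logB[t]
            states = [int(delta.argmax())]
            for t in range(n - 1, 0, -1):              # backtrack
                states.append(int(psi[t, states[-1]]))
            return states[::-1]

        lambdas = np.array([1.4, 6.6, 20.2])    # 'Low', 'Moderate', 'High'
        pi = np.array([0.6, 0.3, 0.1])          # assumed initial distribution
        A = np.array([[0.7, 0.2, 0.1],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.6]])         # assumed transition matrix
        counts = [0, 2, 1, 7, 5, 8, 22, 19, 25, 6, 3]
        print(viterbi_poisson(counts, pi, A, lambdas))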

  6. Markov Chains and Markov Processes

    OpenAIRE

    Ogunbayo, Segun

    2016-01-01

    A Markov chain, named after Andrey Markov, is a mathematical system that transitions from one state to another. Many real-world systems contain uncertainty. This study helps us to understand the basic idea of a Markov chain and how it is useful in our daily lives. Predictions of distinct future outcomes have long been uncertain, and different games involve different expectations or results. That is the reason why we need Markov chains to predict o...

  7. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  8. Nonlinearly perturbed semi-Markov processes

    CERN Document Server

    Silvestrov, Dmitrii

    2017-01-01

    The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...

  9. Rate Reduction for State-labelled Markov Chains with Upper Time-bounded CSL Requirements

    Directory of Open Access Journals (Sweden)

    Bharath Siva Kumar Tati

    2016-07-01

    Full Text Available This paper presents algorithms for identifying and reducing a dedicated set of controllable transition rates of a state-labelled continuous-time Markov chain model. The purpose of the reduction is to make states satisfy a given requirement, specified as a CSL upper time-bounded Until formula. We distinguish two different cases, depending on the type of probability bound. A natural partitioning of the state space allows us to develop possible solutions, leading to simple algorithms for both cases.

  10. Adiabatic condition and the quantum hitting time of Markov chains

    International Nuclear Information System (INIS)

    Krovi, Hari; Ozols, Maris; Roland, Jeremie

    2010-01-01

    We present an adiabatic quantum algorithm for the abstract problem of searching marked vertices in a graph, or spatial search. Given a random walk (or Markov chain) P on a graph with a set of unknown marked vertices, one can define a related absorbing walk P′ where outgoing transitions from marked vertices are replaced by self-loops. We build a Hamiltonian H(s) from the interpolated Markov chain P(s) = (1-s)P + sP′ and use it in an adiabatic quantum algorithm to drive an initial superposition over all vertices to a superposition over marked vertices. The adiabatic condition implies that, for any reversible Markov chain and any set of marked vertices, the running time of the adiabatic algorithm is given by the square root of the classical hitting time. This algorithm therefore demonstrates a novel connection between the adiabatic condition and the classical notion of hitting time of a random walk. It also significantly extends the scope of previous quantum algorithms for this problem, which could only obtain a full quadratic speedup for state-transitive reversible Markov chains with a unique marked vertex.

  11. Road maintenance optimization through a discrete-time semi-Markov decision process

    International Nuclear Information System (INIS)

    Zhang Xueqing; Gao Hui

    2012-01-01

    Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on the renewal theory. However, the discrete-time Markov process cannot capture the real time at which the state transits while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies to minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that they are convenient to incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and the randomness in selecting actions occurs only for a few states.

  12. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive...... to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations....

  13. Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions

    Directory of Open Access Journals (Sweden)

    Stepchenko Arthur

    2015-12-01

    Full Text Available Time series of earth observation based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. In this paper, we make a one-step-ahead prediction of 7-daily time series of the NDVI index using Markov chains. The choice of a Markov chain is due to the fact that a Markov chain is a sequence of random variables where each variable is located in some state, and a Markov chain contains the probabilities of moving from one state to another.
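
    In its simplest form, such a one-step-ahead forecast amounts to estimating a transition matrix from the discretised NDVI history and reading off the most probable next state. A toy sketch with an invented, already-binned series (real NDVI values would first be quantised into states):

        import numpy as np

        def fit_transitions(states, m):
            # Maximum-likelihood transition matrix from a discretised series;
            # assumes every state occurs at least once as a source.
            C = np.zeros((m, m))
            for i, j in zip(states, states[1:]):
                C[i, j] += 1
            return C / C.sum(axis=1, keepdims=True)

        # Hypothetical 7-day NDVI series binned into 4 states
        ndvi_states = [0, 0, 1, 2, 2, 3, 3, 2, 1, 1, 0, 1, 2, 3, 3, 2]
        P = fit_transitions(ndvi_states, m=4)
        print(int(P[2].argmax()))    # most probable state after state 2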

  14. A Markov reward model checker

    NARCIS (Netherlands)

    Katoen, Joost P.; Maneesh Khattri, M.; Zapreev, I.S.; Zapreev, I.S.

    2005-01-01

    This short tool paper introduces MRMC, a model checker for discrete-time and continuous-time Markov reward models. It supports reward extensions of PCTL and CSL, and allows for the automated verification of properties concerning long-run and instantaneous rewards as well as cumulative rewards. In

  15. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  16. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  17. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and easier to compute in general than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
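
    Both quantities in this trade-off are easy to evaluate for a small chain: the KSE is an entropy rate weighted by the stationary distribution, and a standard proxy for the mixing time is the relaxation time 1/(1 - |lambda_2|). The transition matrix below is invented for illustration.

        import numpy as np

        def stationary(P):
            # Left eigenvector of P for eigenvalue 1, normalised to a distribution.
            w, v = np.linalg.eig(P.T)
            pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
            return pi / pi.sum()

        def ks_entropy(P):
            # Kolmogorov-Sinai entropy rate: -sum_i pi_i sum_j P_ij log P_ij.
            pi = stationary(P)
            logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
            return float(-(pi[:, None] * P * logP).sum())

        def relaxation_time(P):
            # 1 / spectral gap, a standard proxy for the mixing time.
            mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
            return float(1.0 / (1.0 - mags[1]))

        P = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.15, 0.80]])
        print(ks_entropy(P), relaxation_time(P))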

  18. Model Checking Infinite-State Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.

    2004-01-01

    In this paper algorithms for model checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains of so-called quasi birth-death type are developed. In doing so we extend the applicability of CSL model checking beyond the recently proposed case for finite-state

  19. The Green-Kubo formula for general Markov processes with a continuous time parameter

    International Nuclear Information System (INIS)

    Yang Fengxia; Liu Yong; Chen Yong

    2010-01-01

    For general Markov processes, the Green-Kubo formula is shown to be valid under a mild condition. A class of stochastic evolution equations on a separable Hilbert space and three typical infinite systems of locally interacting diffusions on Z^d (irreversible in most cases) are shown to satisfy the Green-Kubo formula, and the Einstein relations for these stochastic evolution equations are shown explicitly as a corollary.

  20. Absolute continuity of the distribution of some Markov geometric series

    Institute of Scientific and Technical Information of China (English)

    Ai-hua FAN; Ji-hong ZHANG

    2007-01-01

    Let (ε_n), n ≥ 0, be the Markov chain of two states with respect to the probability measure of maximal entropy on the subshift space Σ_A defined by the Fibonacci incidence matrix A. We consider the measure μ_λ of the probability distribution of the random series ∑_{n=0}^∞ ε_n λ^n (0 < λ < 1). It is proved that μ_λ is singular if λ ∈ (0, (√5-1)/2) and that μ_λ is absolutely continuous for almost all λ ∈ ((√5-1)/2, 0.739).

  1. Introduction to the numerical solution of Markov chains

    CERN Document Server

    Stewart, William J

    1994-01-01

    A cornerstone of applied probability, Markov chains can be used to help model how plants grow, chemicals react, and atoms diffuse - and applications are increasingly being found in such areas as engineering, computer science, economics, and education. To apply the techniques to real problems, however, it is necessary to understand how Markov chains can be solved numerically. In this book, the first to offer a systematic and detailed treatment of the numerical solution of Markov chains, William Stewart provides scientists on many levels with the power to put this theory to use in the actual world, where it has applications in areas as diverse as engineering, economics, and education. His efforts make for essential reading in a rapidly growing field. Here, Stewart explores all aspects of numerically computing solutions of Markov chains, especially when the state space is huge. He provides extensive background to both discrete-time and continuous-time Markov chains and examines many different numerical computing metho...

  2. Clustering Multivariate Time Series Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Shima Ghassempour

    2014-03-01

    Full Text Available In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers.
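
    One common way to realise the HMM-distance idea (a sketch under simplifying assumptions, not the authors' exact implementation) is to fit one Gaussian HMM per trajectory and use symmetrised log-likelihood differences as the pairwise distance, here with the third-party hmmlearn and scipy packages and purely synthetic one-dimensional trajectories:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM                    # assumed dependency
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        def hmm_distance_matrix(trajs, n_states=2):
            # Fit one HMM per trajectory; distance = symmetrised likelihood loss.
            models = [GaussianHMM(n_components=n_states, n_iter=50).fit(x)
                      for x in trajs]
            n = len(trajs)
            D = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    d = (models[i].score(trajs[i]) - models[j].score(trajs[i])
                         + models[j].score(trajs[j]) - models[i].score(trajs[j]))
                    D[i, j] = D[j, i] = max(d, 0.0) / 2.0
            return D

        rng = np.random.default_rng(1)
        trajs = [rng.normal(loc=k % 2, size=(80, 1)) for k in range(6)]  # toy data
        Z = linkage(squareform(hmm_distance_matrix(trajs)), method="average")
        print(fcluster(Z, t=2, criterion="maxclust"))

    Categorical variables, the hard part emphasised above, would replace the Gaussian emissions with discrete ones; the distance-matrix step is unchanged.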

  3. Modeling Uncertainty of Directed Movement via Markov Chains

    Directory of Open Access Journals (Sweden)

    YIN Zhangcai

    2015-10-01

    Full Text Available Probabilistic time geography (PTG) is suggested as an extension of (classical) time geography, in order to present the uncertainty of an agent being located at an accessible position by probability. This may provide a quantitative basis for finding the most likely location of an agent. In recent years, PTG based on the normal distribution or the Brownian bridge has been proposed; however, its variance is either unrelated to the agent's speed or diverges as the speed increases, so such models struggle to combine application pertinence with stability. In this paper, a new method is proposed to model PTG based on a Markov chain. Firstly, a bidirectionally conditioned Markov chain is modeled, the limit of which, when the moving speed is large enough, can be regarded as the Brownian bridge, and which thus has the characteristic of numerical stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, the state space and the transition matrix of the Markov chain according to the space and time positions of the directed movement and the movement speed information, so that the Markov chain is related to the movement speed. Finally, by continuously calculating the probability distribution of the directed movement at any time with the Markov chains, the probability of an agent being located at an accessible position can be obtained. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.

  4. Analyzing the profit-loss sharing contracts with Markov model

    Directory of Open Access Journals (Sweden)

    Imam Wahyudi

    2016-12-01

    Full Text Available The purpose of this paper is to examine how a first-order Markov chain can be used to build a reliable monitoring system for profit-loss sharing based contracts (PLS), the mode of financing contracts in Islamic banks, with censored continuous-time observations. The paper adopts a longitudinal analysis within a first-order Markov chain framework. A Laplace transform was used, under the homogeneous continuous-time assumption, to generate the transition matrix from a discretized generator matrix. Various metrics, i.e., eigenvalues and eigenvectors, were used to test the first-order Markov chain assumption. A Cox semiparametric model was also used to analyze the momentum and waiting-time effects as non-Markov behavior. The results show that the first-order Markov chain is powerful as a monitoring tool for Islamic banks. We find that waiting time negatively affected present rating downgrades (upgrades) significantly. Likewise, the momentum covariate showed a negative effect. Finally, the results confirm that different origin ratings have different movement behavior. The paper explores the potential of the Markov chain framework as a risk management tool for Islamic banks. It provides valuable insight and an integrative model for banks to manage their borrower accounts. This model can be developed into a powerful early warning system to identify which borrowers need to be monitored intensively. Ultimately, this model could potentially increase the efficiency, productivity and competitiveness of Islamic banks in Indonesia. The analysis used only rating data. Further study should be able to give additional information about the determinant factors of rating movement of the borrowers by incorporating various factors such as contract-related factors, bank-related factors, borrower-related factors and macroeconomic factors.
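
    Under the homogeneous continuous-time assumption mentioned above, the step from a (discretized) generator matrix to a transition matrix over any horizon t is the matrix exponential P(t) = exp(Qt). A minimal sketch with an invented three-state scale, not the article's estimated ratings or rates:

        import numpy as np
        from scipy.linalg import expm

        # Illustrative generator over {Current, Doubtful, Default};
        # the last row makes Default absorbing.
        Q = np.array([[-0.20,  0.15,  0.05],
                      [ 0.10, -0.40,  0.30],
                      [ 0.00,  0.00,  0.00]])

        P_1y = expm(Q * 1.0)     # one-period transition matrix
        print(np.round(P_1y, 3))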

  5. Efficient Approximation of Optimal Control for Markov Games

    DEFF Research Database (Denmark)

    Fearnley, John; Rabe, Markus; Schewe, Sven

    2011-01-01

    We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately...

  6. Stylised facts of financial time series and hidden Markov models in continuous time

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2015-01-01

    presents an extension to continuous time where it is possible to increase the number of states with a linear rather than quadratic growth in the number of parameters. The possibility of increasing the number of states leads to a better fit to both the distributional and temporal properties of daily returns....

  7. Markov Switching Modeling with Time-Varying Transition Probability

    OpenAIRE

    Savitri, Anggita Puri; Warsito, Budi; Rahmawati, Rita

    2016-01-01

    The exchange rate is an economic variable that reflects a country's state of economy. It fluctuates over time because of its ability to switch between conditions or regimes, driven by economic and political factors. The changes in the exchange rate are depreciation and appreciation. Therefore, it can be modeled using Markov Switching with Time-Varying Transition Probability, which observes the conditional changes and uses an information variable. From this model, time-varying transition probabili...

  8. Continuity Properties of Distances for Markov Processes

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand

    2014-01-01

    In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...

  9. Expectation propagation for continuous time stochastic processes

    International Nuclear Information System (INIS)

    Cseke, Botond; Schnoerr, David; Sanguinetti, Guido; Opper, Manfred

    2016-01-01

    We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference, giving rise to an expectation propagation type algorithm. For non-linear diffusion processes, this is achieved by leveraging moment closure approximations. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems. (paper)

  10. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications Their Use in Reliability and DNA Analysis

    CERN Document Server

    Barbu, Vlad

    2008-01-01

    Semi-Markov processes are much more general and better adapted to applications than Markov processes because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn times in the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes

  11. Bisimulation and Simulation Relations for Markov Chains

    NARCIS (Netherlands)

    Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.

    2006-01-01

    Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.

  12. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    Science.gov (United States)

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011)] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011)], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.

  13. Markov processes an introduction for physical scientists

    CERN Document Server

    Gillespie, Daniel T

    1991-01-01

    Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level. Key features: a self-contained, pragmatic exposition of the needed elements of random variable theory; logically integrated derivations of the Chapman-Kolmogorov e...

  14. Markov stochasticity coordinates

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  15. Markov stochasticity coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2017-01-15

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  16. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
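
    The object this machinery reasons about is simple to construct: given a trajectory of mesostate labels and a chosen mesoscopic time step τ, count transitions at lag τ and row-normalise. The CMMM requirement that every diagonal element of T be at least 0.5 then becomes a one-line check. The labels below are synthetic, not simulation output.

        import numpy as np

        def transition_matrix_at_lag(labels, n_states, tau):
            # Row-normalised counts of transitions over a lag of tau steps.
            C = np.zeros((n_states, n_states))
            for i, j in zip(labels[:-tau], labels[tau:]):
                C[i, j] += 1
            return C / C.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(2)
        labels = rng.choice(3, size=2000)          # synthetic mesostate labels
        T = transition_matrix_at_lag(labels, n_states=3, tau=10)
        print(np.round(T, 2))
        print("consistent:", bool((np.diag(T) >= 0.5).all()))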

  17. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  18. Fast-slow asymptotics for a Markov chain model of fast sodium current

    Science.gov (United States)

    Starý, Tomáš; Biktashev, Vadim N.

    2017-09-01

    We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.

  19. On the Total Variation Distance of Semi-Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2015-01-01

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...

  20. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of t and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).

  1. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    OpenAIRE

    Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...

  2. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  3. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
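
    The forward (analog) estimator that the weighted schemes improve on takes only a few lines: make the failed states absorbing, simulate to the mission time, and count the runs that end failed. Rates and structure below are invented; a two-state working system with one absorbing system-failed state.

        import numpy as np

        def p_fail_by(Q, failed, t_miss, n_runs, rng):
            # Analog Monte Carlo estimate of P(absorbed in 'failed' by t_miss).
            hits = 0
            for _ in range(n_runs):
                t, s = 0.0, 0
                while t < t_miss and s not in failed:
                    rate = -Q[s, s]
                    t += rng.exponential(1.0 / rate)
                    if t >= t_miss:
                        break
                    p = Q[s].copy(); p[s] = 0.0; p /= rate
                    s = int(rng.choice(len(Q), p=p))
                hits += s in failed
            return hits / n_runs

        Q = np.array([[-0.011,  0.010, 0.001],    # nominal -> degraded/failed
                      [ 1.000, -1.002, 0.002],    # degraded -> repaired/failed
                      [ 0.000,  0.000, 0.000]])   # failed state is absorbing
        print(p_fail_by(Q, failed={2}, t_miss=100.0, n_runs=20_000,
                        rng=np.random.default_rng(3)))

    The adjoint and weighted variants described above reduce the variance of exactly this estimator.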

  4. Book Review: "Hidden Markov Models for Time Series: An ...

    African Journals Online (AJOL)

    Hidden Markov Models for Time Series: An Introduction using R, by Walter Zucchini and Iain L. MacDonald. Chapman & Hall (CRC Press), 2009. http://dx.doi.org/10.4314/saaj.v10i1.61717

  5. A Markov Model for Common-Cause Failures

    DEFF Research Database (Denmark)

    Platz, Ole

    1984-01-01

    A continuous time four-state Markov chain is shown to cover several of the models that have been used for describing dependencies between failures of components in redundant systems. Among these are the models derived by Marshall and Olkin and by Freund and models for one-out-of-three and two...

  6. Quasi-stationary distributions for reducible absorbing Markov chains in discrete time

    NARCIS (Netherlands)

    van Doorn, Erik A.; Pollett, P.K.

    2009-01-01

    We consider discrete-time Markov chains with one coffin state and a finite set S of transient states, and are interested in the limiting behaviour of such a chain as time n → ∞, conditional on survival up to n. It is known that, when S is irreducible, the limiting conditional

  7. Semi-Markov Arnason-Schwarz models.

    Science.gov (United States)

    King, Ruth; Langrock, Roland

    2016-06-01

    We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
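
    The dwell-time point is easy to see numerically: a time-independent first-order Markov state forces a geometric dwell-time distribution whose mode is always one, whereas a shifted Poisson lets the mode sit elsewhere. A two-line comparison (parameters invented):

        import numpy as np
        from scipy.stats import geom, poisson

        d = np.arange(1, 10)                            # dwell times in periods
        print(np.round(geom.pmf(d, p=0.4), 3))          # geometric: mode at d = 1
        print(np.round(poisson.pmf(d - 1, mu=3.0), 3))  # shifted Poisson: mode near 4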

  8. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...

  9. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    Science.gov (United States)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
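
    The description suggests a block-joining construction. The sketch below is one plausible reading of that idea, not the authors' algorithm: repeatedly append a random-length block of the historical record that begins right after a position whose weather state matches the current end of the synthetic series. The data and block-length law are synthetic.

        import numpy as np

        def piecewise_markov_series(data, n_out, mean_block, rng):
            # Chain random-length pieces of 'data', joined at matching states;
            # assumes each state occurs somewhere before the final index.
            out = [int(data[rng.integers(len(data))])]
            while len(out) < n_out:
                matches = np.flatnonzero(data[:-1] == out[-1])
                start = int(rng.choice(matches)) + 1
                length = int(rng.geometric(1.0 / mean_block))
                out.extend(int(x) for x in data[start:start + length])
            return np.array(out[:n_out])

        # Synthetic binary record: 1 = workable weather window, 0 = not
        data = np.random.default_rng(6).choice([0, 1], size=500, p=[0.4, 0.6])
        series = piecewise_markov_series(data, n_out=300, mean_block=24,
                                         rng=np.random.default_rng(7))
        print(series[:40], series.mean())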

  10. Neural Network Based Finite-Time Stabilization for Discrete-Time Markov Jump Nonlinear Systems with Time Delays

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2013-01-01

    Full Text Available This paper deals with the finite-time stabilization problem for discrete-time Markov jump nonlinear systems with time delays and norm-bounded exogenous disturbance. The nonlinearities in different jump modes are parameterized by neural networks. Subsequently, a linear difference inclusion state space representation for a class of neural networks is established. Based on this, sufficient conditions are derived in terms of linear matrix inequalities to guarantee stochastic finite-time boundedness and stochastic finite-time stabilization of the closed-loop system. A numerical example is illustrated to verify the efficiency of the proposed technique.

  11. Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets

    Science.gov (United States)

    Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola

    2017-11-01

    This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset is described as a function of a homogeneous discrete time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated and the mean time to the default is expressed in a closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model on real data and their analysis is discussed.
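
    For a homogeneous chain of this kind, the closed form for the mean time to absorption (default) comes from the fundamental matrix of the transient block Q: t = (I - Q)^(-1) 1. A worked toy example with an invented rating matrix, not the article's estimates:

        import numpy as np

        # Transition matrix over ratings {A, B, C} plus an absorbing default D.
        P = np.array([[0.90, 0.07, 0.02, 0.01],
                      [0.05, 0.85, 0.07, 0.03],
                      [0.02, 0.08, 0.80, 0.10],
                      [0.00, 0.00, 0.00, 1.00]])

        Q = P[:3, :3]                        # transitions among non-default states
        N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix
        print(N @ np.ones(3))                # mean number of periods to default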

  12. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qingda, E-mail: weiqd@hqu.edu.cn [Huaqiao University, School of Economics and Finance (China); Chen, Xian, E-mail: chenxian@amss.ac.cn [Peking University, School of Mathematical Sciences (China)

    2016-10-15

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with the randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under the suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then by constructing the approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  13. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    International Nuclear Information System (INIS)

    Wei, Qingda; Chen, Xian

    2016-01-01

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with the randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under the suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then by constructing the approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  14. Markov chain modelling of pitting corrosion in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)]; Valor, A. [Facultad de Física, Universidad de La Habana, San Lázaro y L, Vedado, 10400 La Habana (Cuba)]; Hallen, J.M. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)]

    2009-09-15

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.
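
    For a linear-growth (pure birth) process started in the first depth state, the closed-form solution of Kolmogorov's forward equations is geometric: p_n(t) = e^(-rho(t)) * (1 - e^(-rho(t)))^(n-1), with rho(t) the integrated birth rate. A short numerical check with an invented rho:

        import numpy as np

        def pure_birth_pmf(n, rho):
            # P(state = n | started in state 1) for a linear pure-birth process.
            p = np.exp(-rho)
            return p * (1.0 - p) ** (n - 1)

        rho_t = 1.2                          # hypothetical integrated pitting rate
        states = np.arange(1, 8)
        pmf = pure_birth_pmf(states, rho_t)
        print(np.round(pmf, 4), pmf.sum())   # tail mass beyond state 7 is missing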

  15. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.

  16. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    Hanson, J.D.; Cary, J.R.; Meiss, J.D.

    1984-10-01

    A continuous time Markov chain is used to model motion in the neighborhood of a critical noble invariant circle in an area-preserving map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. The nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to Hamiltonian systems the decay proceeds as t^(-4.05).

  17. Markov chains with quasitoeplitz transition matrix: first zero hitting

    Directory of Open Access Journals (Sweden)

    Alexander M. Dukhovny

    1989-01-01

    Full Text Available This paper continues the investigation of Markov Chains with a quasitoeplitz transition matrix. Generating functions of first zero hitting probabilities and mean times are found by the solution of special Riemann boundary value problems on the unit circle. Duality is discussed.

  18. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
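
    A minimal sketch of the additive construction described above: the conditional probability of the high-wind state depends additively on the past through a memory kernel. Taking the kernel proportional to the target autocorrelation, as done here, is a crude assumption; the paper derives the memory function from the empirical autocorrelation function.

        import numpy as np

        def additive_binary_chain(corr, p_mean, n_steps, rng=None):
            # P(x_t = 1 | past) = p_mean + sum_k K[k] * (x_{t-k} - p_mean),
            # with memory kernel K scaled so the probability stays in (0, 1).
            rng = np.random.default_rng(rng)
            K = 0.5 * np.asarray(corr) / np.sum(np.abs(corr))
            N = len(K)
            out = list((rng.random(N) < p_mean).astype(int))  # random warm-up
            for _ in range(n_steps):
                past = np.array(out[-N:][::-1])               # most recent first
                p = p_mean + np.dot(K, past - p_mean)
                out.append(int(rng.random() < np.clip(p, 0.0, 1.0)))
            return np.array(out[N:])

        # Exponentially decaying memory over 24 lags, 30% high-wind probability
        series = additive_binary_chain(np.exp(-np.arange(1, 25) / 6.0), 0.3, 10000)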

  19. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.

  20. On the record process of time-reversible spectrally-negative Markov additive processes

    NARCIS (Netherlands)

    J. Ivanovs; M.R.H. Mandjes (Michel)

    2009-01-01

    We study the record process of a spectrally-negative Markov additive process (MAP). Assuming time-reversibility, a number of key quantities can be given explicitly. It is shown how these key quantities can be used when analyzing the distribution of the all-time maximum attained by MAPs.

  1. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
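
    A rough sketch of such a pipeline, assuming the hmmlearn package: a four-state left-right HMM with three-component Gaussian-mixture emissions is trained by Baum-Welch and decoded by Viterbi. The random array below merely stands in for the instep gyro signal, and all hyperparameters are placeholders rather than the paper's settings.

        import numpy as np
        from hmmlearn.hmm import GMMHMM

        # Left-right topology: start in state 0; only self-loops and forward
        # moves are allowed (the last state cycles back to close the gait).
        model = GMMHMM(n_components=4, n_mix=3, covariance_type="diag",
                       n_iter=50, init_params="mcw", params="stmcw")
        model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0])
        model.transmat_ = np.array([[0.5, 0.5, 0.0, 0.0],
                                    [0.0, 0.5, 0.5, 0.0],
                                    [0.0, 0.0, 0.5, 0.5],
                                    [0.5, 0.0, 0.0, 0.5]])

        gyro = np.random.randn(5000, 1)   # stand-in for the angular rate data
        model.fit(gyro)                   # Baum-Welch training
        states = model.predict(gyro)      # Viterbi decoding of gait phases

    Because re-estimation cannot create transitions whose probability is initialized to zero, the left-right topology survives training.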

  2. Stochastic modeling of pitting corrosion in underground pipelines using Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, J.C.; Caleyo, F.; Hallen, J.M.; Araujo, J.E. [Instituto Politecnico Nacional (IPN), Mexico D.F. (Mexico). Escuela Superior de Ingenieria Quimica e Industrias Extractivas (ESIQIE); Valor, A. [Universidad de La Habana, La Habana (Cuba)

    2009-07-01

    A non-homogeneous, linear growth (pure birth) Markov process, with discrete states in continuous time, has been used to model external pitting corrosion in underground pipelines. The transition probability function for the pit depth is obtained from the analytical solution of the forward Kolmogorov equations for this process. The parameters of the transition probability function between depth states can be identified from the observed time evolution of the mean of the pit depth distribution. Monte Carlo simulations were used to predict the time evolution of the mean value of the pit depth distribution in soils with different physicochemical characteristics. The simulated distributions have been used to create an empirical Markov-chain-based stochastic model for predicting the evolution of pitting corrosion from the observed properties of the soil in contact with the pipeline. Real-life case studies, involving simulated and measured pit depth distributions, are presented to illustrate the application of the proposed Markov chain model. (author)

  3. A new Markov-chain-related statistical approach for modelling synthetic wind power time series

    International Nuclear Information System (INIS)

    Pesch, T; Hake, J F; Schröders, S; Allelein, H J

    2015-01-01

    The integration of rising shares of volatile wind power in the generation mix is a major challenge for the future energy system. To address the uncertainties involved in wind power generation, models analysing and simulating the stochastic nature of this energy source are becoming increasingly important. One statistical approach that has been frequently used in the literature is the Markov chain approach. Recently, the method was identified as being of limited use for generating wind time series with time steps shorter than 15–40 min as it is not capable of reproducing the autocorrelation characteristics accurately. This paper presents a new Markov-chain-related statistical approach that is capable of solving this problem by introducing a variable second lag. Furthermore, additional features are presented that allow for the further adjustment of the generated synthetic time series. The influences of the model parameter settings are examined by meaningful parameter variations. The suitability of the approach is demonstrated by an application analysis with the example of the wind feed-in in Germany. It shows that—in contrast to conventional Markov chain approaches—the generated synthetic time series do not systematically underestimate the required storage capacity to balance wind power fluctuation. (paper)

  4. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    Directory of Open Access Journals (Sweden)

    A. Valor

    2013-01-01

    Full Text Available The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled after the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
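
    The pit-initiation component lends itself to a short sketch: arrival times of a non-homogeneous Poisson process with power-law (Weibull) intensity are obtained by inverting the cumulative intensity at unit-rate exponential arrivals. The parameters and the toy power-law depth growth attached to each pit are invented for illustration.

        import numpy as np

        def weibull_process_arrivals(theta, beta, t_end, rng=None):
            # NHPP with intensity lambda(t) = (beta/theta) * (t/theta)**(beta-1):
            # arrivals satisfy Lambda(T_k) = E_1 + ... + E_k, E_i ~ Exp(1), so
            # T_k is recovered through the inverse of Lambda(t) = (t/theta)**beta.
            rng = np.random.default_rng(rng)
            times, cum = [], 0.0
            while True:
                cum += rng.exponential(1.0)
                t = theta * cum ** (1.0 / beta)
                if t > t_end:
                    return np.array(times)
                times.append(t)

        # Pits initiating over 20 years, each growing as d = k * age**n (toy values)
        init = weibull_process_arrivals(theta=5.0, beta=1.5, t_end=20.0, rng=1)
        depths = 0.8 * (20.0 - init) ** 0.4
        max_depth = depths.max() if len(depths) else 0.0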

  5. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    Energy Technology Data Exchange (ETDEWEB)

    Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)

    2008-07-18

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  6. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    International Nuclear Information System (INIS)

    Frank, T D

    2008-01-01

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  7. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  8. Stability Analysis of Networked Control Systems with Random Time Delays and Packet Dropouts Modeled by Markov Chains

    Directory of Open Access Journals (Sweden)

    Li Qiu

    2013-01-01

    unified Markov jump model. The random time delays and packet dropouts existing in the feedback communication link are modeled by two independent Markov chains; the resulting closed-loop system is described by a new Markovian jump linear system (MJLS) with Markov delays. Sufficient conditions for the stochastic stability of NCSs are obtained by constructing a novel Lyapunov functional, and the mode-dependent output feedback controller design method is presented based on the linear matrix inequality (LMI) technique. A numerical example is given to illustrate the effectiveness of the proposed method.
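
    A minimal feasibility test for this kind of stochastic stability condition, assuming the cvxpy package: a discrete-time Markov jump linear system is mean-square stable if there exist P_i > 0 with A_i' (sum_j Pi_ij P_j) A_i - P_i < 0. The mode dynamics and transition probabilities below are invented for illustration and are unrelated to the paper's example.

        import numpy as np
        import cvxpy as cp

        def mjls_ms_stable(A_modes, Pi, eps=1e-6):
            # Search for coupled Lyapunov matrices certifying mean-square
            # stability of x_{k+1} = A_{r_k} x_k with mode chain Pi.
            n, m = A_modes[0].shape[0], len(A_modes)
            P = [cp.Variable((n, n), symmetric=True) for _ in range(m)]
            cons = []
            for i, Ai in enumerate(A_modes):
                Ei = sum(Pi[i, j] * P[j] for j in range(m))
                cons += [P[i] >> eps * np.eye(n),
                         Ai.T @ Ei @ Ai - P[i] << -eps * np.eye(n)]
            prob = cp.Problem(cp.Minimize(0), cons)
            prob.solve(solver=cp.SCS)
            return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

        A = [np.array([[0.5, 0.1], [0.0, 0.6]]),      # stable mode
             np.array([[1.1, 0.0], [0.2, 1.05]])]     # unstable mode
        Pi = np.array([[0.9, 0.1], [0.8, 0.2]])       # switches away quickly
        print(mjls_ms_stable(A, Pi))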

  9. Markov Chain Models for Stochastic Behavior in Resonance Overlap Regions

    Science.gov (United States)

    McCarthy, Morgan; Quillen, Alice

    2018-01-01

    We aim to predict lifetimes of particles in chaotic zones where resonances overlap. A continuous-time Markov chain model is constructed using mean motion resonance libration timescales to estimate transition times between resonances. The model is applied to diffusion in the co-rotation region of a planet. For particles begun at low eccentricity, the model is effective for early diffusion, but not at later times when particles experience close encounters with the planet.
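
    The expected lifetime in such a chain reduces to one linear solve: restricting the generator Q to the transient (resonance) states, the vector of mean absorption times satisfies Q_T tau = -1. The three-zone generator below is made up for illustration; it is not the libration-timescale matrix of the study.

        import numpy as np

        Q = np.array([
            [-1.0,  0.8,  0.1,  0.1],
            [ 0.5, -1.0,  0.3,  0.2],
            [ 0.1,  0.4, -1.0,  0.5],
            [ 0.0,  0.0,  0.0,  0.0],   # absorbing: ejection / close encounter
        ])
        QT = Q[:3, :3]                           # transient-to-transient block
        tau = np.linalg.solve(QT, -np.ones(3))   # mean lifetime per start zone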

  10. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
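
    In such a model the long-run state probabilities solve pi Q = 0 with sum(pi) = 1, and plant availability is the probability mass on the generating states. A sketch with an invented three-state generator (full power, derated, forced outage; rates per year):

        import numpy as np

        Q = np.array([[-2.0,  1.5,  0.5],
                      [ 8.0, -9.0,  1.0],
                      [20.0,  0.0, -20.0]])
        A = np.vstack([Q.T, np.ones(3)])         # append the normalization row
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi = np.linalg.lstsq(A, b, rcond=None)[0]
        availability = pi[0] + pi[1]             # any power-producing state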

  11. CSL Model Checking Algorithms for Infinite-state Structured Markov chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Raskin, J.-F.; Thiagarajan, P.S.

    2007-01-01

    Jackson queueing networks (JQNs) are a very general class of queueing networks that find their application in a variety of settings. The state space of the continuous-time Markov chain (CTMC) that underlies such a JQN, is highly structured, however, of infinite size in as many dimensions as there

  12. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    Science.gov (United States)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiological spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.

  13. Analysis of transtheoretical model of health behavioral changes in a nutrition intervention study--a continuous time Markov chain model with Bayesian approach.

    Science.gov (United States)

    Ma, Junsheng; Chan, Wenyaw; Tsai, Chu-Lin; Xiong, Momiao; Tilley, Barbara C

    2015-11-30

    Continuous time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model that often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention is given to the study of the relationships between a CTMC model and associated covariates under the framework of transtheoretical model. We developed a Bayesian approach to evaluate the covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected the transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantively less time in the precontemplation stage and were more/less likely to move from an unhealthy/healthy state to a healthy/unhealthy state. Copyright © 2015 John Wiley & Sons, Ltd.
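
    The computational core of such a fit is the interval likelihood P(dt) = expm(Q dt) for a stage observed only at visit times; covariates can then scale the off-diagonal rates through the log-linear link, e.g. q_ij(x) = q_ij exp(beta'x). A sketch with an invented three-stage generator:

        import numpy as np
        from scipy.linalg import expm

        def panel_loglik(Q, states, times):
            # Log-likelihood of a CTMC observed only at discrete visits:
            # each pair of consecutive visits contributes the matrix
            # exponential entry linking the two observed stages.
            ll = 0.0
            for k in range(len(states) - 1):
                P = expm(Q * (times[k + 1] - times[k]))
                ll += np.log(P[states[k], states[k + 1]])
            return ll

        Q = np.array([[-0.4,  0.3,  0.1],
                      [ 0.2, -0.5,  0.3],
                      [ 0.1,  0.2, -0.3]])
        print(panel_loglik(Q, states=[0, 0, 1, 2], times=[0.0, 1.0, 2.5, 4.0]))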

  14. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    Hanson, J.D.; Cary, J.R.; Meiss, J.D.

    1985-01-01

    A continuous-time Markov chain is used to model motion in the neighborhood of a critical invariant circle for a Hamiltonian map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. For the case of a noble frequency, the chain is self-similar and the nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to the critical noble circle, the decay proceeds as t^(-4.05).

  15. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    Science.gov (United States)

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can provide an appropriate basis for making health policy on disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The percentages of correct predictions for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the results of the three models indicated that, with respect to the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.

  16. Continuous Markovian Logics

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Cardelli, Luca; Larsen, Kim Guldstrand

    2012-01-01

    Continuous Markovian Logic (CML) is a multimodal logic that expresses quantitative and qualitative properties of continuous-time labelled Markov processes with arbitrary (analytic) state-spaces, henceforth called continuous Markov processes (CMPs). The modalities of CML evaluate the rates of the exponentially distributed random variables that characterize the duration of the labeled transitions of a CMP. In this paper we present weak and strong complete axiomatizations for CML and prove a series of metaproperties, including the finite model property and the construction of canonical models. CML characterizes stochastic bisimilarity and it supports the definition of a quantified extension of the satisfiability relation that measures the "compatibility" between a model and a property. In this context, the metaproperties allow us to prove two robustness theorems for the logic stating that one can…

  17. Markov models for digraph panel data : Monte Carlo-based derivative estimation

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A. B.

    2007-01-01

    A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix, that is, the

  18. Theoretical restrictions on longest implicit time scales in Markov state models of biomolecular dynamics

    Science.gov (United States)

    Sinitskiy, Anton V.; Pande, Vijay S.

    2018-01-01

    Markov state models (MSMs) have been widely used to analyze computer simulations of various biomolecular systems. They can capture conformational transitions much slower than an average or maximal length of a single molecular dynamics (MD) trajectory from the set of trajectories used to build the MSM. A rule of thumb claiming that the slowest implicit time scale captured by an MSM should be comparable in order of magnitude to the aggregate duration of all MD trajectories used to build this MSM has been known in the field. However, this rule has never been formally proved. In this work, we present analytical results for the slowest time scale in several types of MSMs, supporting the above rule. We conclude that the slowest implicit time scale equals the product of the aggregate sampling and four factors that quantify: (1) how much statistics on the conformational transitions corresponding to the longest implicit time scale is available, (2) how good the sampling of the destination Markov state is, (3) the gain in statistics from using a sliding window for counting transitions between Markov states, and (4) a bias in the estimate of the implicit time scale arising from finite sampling of the conformational transitions. We demonstrate that in many practically important cases all these four factors are on the order of unity, and we analyze possible scenarios that could lead to their significant deviation from unity. Overall, we provide for the first time analytical results on the slowest time scales captured by MSMs. These results can guide further practical applications of MSMs to biomolecular dynamics and allow for higher computational efficiency of simulations.
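
    The implicit time scales in question come straight from the transition-matrix spectrum: t_i = -tau / ln lambda_i(tau) for the eigenvalues below the stationary one. A toy two-state sketch:

        import numpy as np

        def implied_timescales(T, lag):
            # Relaxation timescales of an MSM estimated at lag time 'lag'.
            ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            return -lag / np.log(ev[1:])   # skip the stationary eigenvalue 1

        T = np.array([[0.98, 0.02],
                      [0.01, 0.99]])       # illustrative 2-state MSM
        print(implied_timescales(T, lag=10.0))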

  19. Clarification of the basic factorization identity for almost semi-continuous lattice Poisson processes on a Markov chain

    Directory of Open Access Journals (Sweden)

    Gerich M. S.

    2012-12-01

    Full Text Available Let $\{\xi(t), x(t)\}$ be a homogeneous semi-continuous lattice Poisson process on a Markov chain. The jumps of one sign are geometrically distributed, and jumps of the opposite sign follow an arbitrary lattice distribution. For such processes, the relations for the components of the two-sided matrix factorization are established. These relations define the moment generating functions for the extrema of the process and their complements.

  20. Markov transition probability-based network from time series for characterizing experimental two-phase flow

    International Nuclear Information System (INIS)

    Gao Zhong-Ke; Hu Li-Dan; Jin Ning-De

    2013-01-01

    We generate a directed weighted complex network by a method based on Markov transition probability to represent an experimental two-phase flow. We first systematically carry out gas-liquid two-phase flow experiments for measuring the time series of flow signals. Then we construct directed weighted complex networks from various time series in terms of a network generation method based on Markov transition probability. We find that the generated network inherits the main features of the time series in the network structure. In particular, the networks from time series with different dynamics exhibit distinct topological properties. Finally, we construct two-phase flow directed weighted networks from experimental signals and associate the dynamic behavior of gas-liquid two-phase flow with the topological statistics of the generated networks. The results suggest that the topological statistics of two-phase flow networks allow quantitative characterization of the dynamic flow behavior in the transitions among different gas-liquid flow patterns. (general)
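
    A bare-bones version of the construction, assuming the networkx package: the signal is symbolized by amplitude binning, symbols become nodes, and empirical Markov transition probabilities become directed edge weights. The sine-plus-noise series stands in for the measured flow signals.

        import numpy as np
        import networkx as nx

        def transition_network(series, n_bins=8):
            # Bin the signal into symbols and weight edge (a, b) by the
            # empirical probability of moving from symbol a to symbol b.
            edges = np.histogram_bin_edges(series, n_bins)
            symbols = np.digitize(series, edges[1:-1])
            counts = {}
            for a, b in zip(symbols[:-1], symbols[1:]):
                counts[(a, b)] = counts.get((a, b), 0) + 1
            G = nx.DiGraph()
            for (a, b), c in counts.items():
                out = sum(v for (x, _), v in counts.items() if x == a)
                G.add_edge(int(a), int(b), weight=c / out)
            return G

        t = np.linspace(0, 60, 3000)
        G = transition_network(np.sin(t) + 0.2 * np.random.randn(t.size))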

  1. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    Science.gov (United States)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with a fault tree potentially existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.

  2. Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.

    Science.gov (United States)

    Reich, W; Scheuermann, G

    2012-12-01

    Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.

  3. Derivation of Markov processes that violate detailed balance

    Science.gov (United States)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.

  4. Markov set-chains

    CERN Document Server

    Hartfiel, Darald J

    1998-01-01

    In this study extending classical Markov chain theory to handle fluctuating transition matrices, the author develops a theory of Markov set-chains and provides numerous examples showing how that theory can be applied. Chapters are concluded with a discussion of related research. Readers who can benefit from this monograph are those interested in, or involved with, systems whose data is imprecise or that fluctuate with time. A background equivalent to a course in linear algebra and one in probability theory should be sufficient.

  5. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed…

  6. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    Science.gov (United States)

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.

  7. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    Science.gov (United States)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
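
    For orientation, the elementary discrete analogue is worth stating: a stationary finite-state Markov chain has entropy rate h = -sum_i pi_i sum_j T_ij ln T_ij. The semi-Markov processes treated in the paper need its heavier machinery, but the sketch below covers the simplest unifilar case.

        import numpy as np

        def entropy_rate(T):
            # Entropy rate (nats/step) of a stationary Markov chain: weight
            # each row's transition entropy by the stationary probability.
            ev, vec = np.linalg.eig(T.T)
            pi = np.real(vec[:, np.argmax(np.real(ev))])
            pi = pi / pi.sum()
            safe = np.where(T > 0, T, 1.0)     # log(1) = 0 for impossible moves
            return float(-np.sum(pi[:, None] * T * np.log(safe)))

        print(entropy_rate(np.array([[0.9, 0.1], [0.5, 0.5]])))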

  8. A Multilayer Hidden Markov Models-Based Method for Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Chongben Tao

    2013-01-01

    Full Text Available To achieve Human-Robot Interaction (HRI) by using gestures, a continuous gesture recognition approach based on Multilayer Hidden Markov Models (MHMMs) is proposed, which consists of two parts: a gesture spotting and segmentation module and a continuous gesture recognition module. Firstly, a Kinect sensor is used to capture the 3D acceleration and 3D angular velocity data of hand gestures. Then, a Feed-forward Neural Network (FNN) and a threshold criterion are used for gesture spotting and segmentation, respectively. Afterwards, the segmented gesture signals are preprocessed and vector-symbolized by a sliding window and a K-means clustering method. Finally, the symbolized data are sent into Lower Hidden Markov Models (LHMMs) to identify individual gestures, and a Bayesian filter with sequential constraints among gestures in Upper Hidden Markov Models (UHMMs) is used to correct recognition errors created in the LHMMs. Five predefined gestures are used to interact with a Kinect mobile robot in experiments. The experimental results show that the proposed method not only has good effectiveness and accuracy, but also has favorable real-time performance.

  9. Reviving Markov processes and applications

    International Nuclear Information System (INIS)

    Cai, H.

    1988-01-01

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process and the first entrance decomposition (taboo probability).

  10. Mapping absorption processes onto a Markov chain, conserving the mean first passage time

    International Nuclear Information System (INIS)

    Biswas, Katja

    2013-01-01

    The dynamics of a multidimensional system is projected onto a discrete state master equation using the transition rates W(k → k′; t, t + dt) between a set of states {k} represented by the regions {ζ_k} in phase or discrete state space. Depending on the dynamics Γ_i(t) of the original process and the choice of ζ_k, the discretized process can be Markovian or non-Markovian. For absorption processes, it is shown that irrespective of these properties of the projection, a master equation with time-independent transition rates W̄(k → k′) can be obtained, which conserves the total occupation time of the partitions of the phase or discrete state space of the original process. An expression for the transition probabilities p̄(k′|k) is derived, based on either time-discrete measurements {t_i} with variable time stepping Δ_{(i+1)i} = t_{i+1} − t_i or the theoretical knowledge at continuous times t. This allows computational methods for absorbing Markov chains to be used to obtain the mean first passage time (MFPT) of the system. To illustrate this approach, the procedure is applied to obtain the MFPT for the overdamped Brownian motion of particles subject to a system with dichotomous noise and the escape from an entropic barrier. The high accuracy of the simulation results confirms the theory. (paper)
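
    Once the process is mapped onto a discrete-time absorbing chain with step dt, the MFPT follows from the fundamental matrix N = (I - Q)^(-1) of the transient block Q, whose entries count expected visits before absorption. The numbers below are illustrative:

        import numpy as np

        dt = 0.01
        Q = np.array([[0.90, 0.08],        # two transient partitions; the
                      [0.05, 0.93]])       # missing row mass is absorbed
        N = np.linalg.inv(np.eye(2) - Q)   # expected visits per start state
        mfpt = dt * N.sum(axis=1)          # occupation time before absorption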

  11. Discrete-time semi-Markov modeling of human papillomavirus persistence

    Science.gov (United States)

    Mitchell, C. E.; Hudgens, M. G.; King, C. C.; Cu-Uvin, S.; Lo, Y.; Rompalo, A.; Sobel, J.; Smith, J. S.

    2011-01-01

    Multi-state modeling is often employed to describe the progression of a disease process. In epidemiological studies of certain diseases, the disease state is typically only observed at periodic clinical visits, producing incomplete longitudinal data. In this paper we consider fitting semi-Markov models to estimate the persistence of human papillomavirus (HPV) type-specific infection in studies where the status of HPV type(s) is assessed periodically. Simulation study results are presented indicating the semi-Markov estimator is more accurate than an estimator currently used in the HPV literature. The methods are illustrated using data from the HIV Epidemiology Research Study (HERS). PMID:21538985

  12. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    Science.gov (United States)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling of the tool wear process in micro-milling, and estimation of the tool wear state given the cutting force features. For noise robustness, the HMM outputs are passed through a median filter so that the high noise level does not trigger spurious transitions of the estimated tool state. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on the tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.

  13. The exit-time problem for a Markov jump process

    Science.gov (United States)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.

  14. Prognostics for Steam Generator Tube Rupture using Markov Chain model

    International Nuclear Information System (INIS)

    Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin

    2016-01-01

    This paper describes a prognostics method for evaluating and forecasting the ageing effect and demonstrates the procedure of prognostics for the Steam Generator Tube Rupture (SGTR) accident. The authors propose a data-driven method, Markov Chain Monte Carlo (MCMC), which is preferred to physical-model methods in terms of flexibility and availability. Degradation data are represented as the growth of burst probability over time. The Markov chain model is built on the transition probabilities between states, and the states must be discrete variables. Therefore, the burst probability, which is a continuous variable, has to be discretized in order to apply the Markov chain model to the degradation data. The Markov chain model, which is one of several prognostics methods, is described and a pilot demonstration for an SGTR accident is performed as a case study. The Markov chain model is strong since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there is necessarily a loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be reflected sufficiently in the prediction. This should be noted as a limitation of discrete models. The authors are now studying other prognostics methods, such as the General Path Model (GPM), which is also a data-driven method, as well as the particle filter, which belongs to the physical-model methods, and conducting a comparative analysis.

  15. Spectral methods for quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Szehr, Oleg

    2014-05-08

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  16. Spectral methods for quantum Markov chains

    International Nuclear Information System (INIS)

    Szehr, Oleg

    2014-01-01

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  17. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    2012-01-01

    If $(C_n)$ is a Markov chain on a discrete state space $S$, a Markov chain $(C_n, M_n)$ on the product space $S \times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both

  18. Discrete time Markov chains (DTMC) susceptible infected susceptible (SIS) epidemic model with two pathogens in two patches

    Science.gov (United States)

    Lismawati, Eka; Respatiwulan; Widyaningsih, Purnami

    2017-06-01

    The SIS epidemic model describes the pattern of disease spread with the characteristic that recovered individuals can be infected more than once. The numbers of susceptible and infected individuals at each time follow a discrete time Markov process, which can be represented by the discrete time Markov chain (DTMC) SIS model. The DTMC SIS epidemic model can be developed for two pathogens in two patches. The aims of this paper are to reconstruct and to apply the DTMC SIS epidemic model with two pathogens in two patches. The model is presented as transition probabilities. Application of the model shows that the number of susceptible individuals decreases while the number of infected individuals increases for each pathogen in each patch.
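
    For one pathogen in one patch, the chain moves up by one infected with probability beta*I*(N-I)/N*dt and down by one with probability gamma*I*dt in each step; the two-pathogen, two-patch model couples two such chains. A sample-path sketch with invented parameters (dt kept small so the two probabilities sum below one):

        import numpy as np

        def dtmc_sis(N, beta, gamma, i0, steps, dt=0.01, rng=None):
            # One sample path of the single-patch, single-pathogen DTMC SIS model.
            rng = np.random.default_rng(rng)
            I, path = i0, [i0]
            for _ in range(steps):
                b = beta * I * (N - I) / N * dt   # new infection
                d = gamma * I * dt                # recovery back to susceptible
                u = rng.random()
                I += 1 if u < b else (-1 if u < b + d else 0)
                path.append(I)
            return np.array(path)

        path = dtmc_sis(N=100, beta=0.8, gamma=0.5, i0=3, steps=2000, rng=0)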

  19. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of the drug development in the pharmaceutical industry and continued research in statistical methodology within … or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug are a broader area since drugs may affect … for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models

  20. Continuous Change Detection and Classification Using Hidden Markov Model: A Case Study for Monitoring Urban Encroachment onto Farmland in Beijing

    Directory of Open Access Journals (Sweden)

    Yuan Yuan

    2015-11-01

    Full Text Available In this paper, we propose a novel method to continuously monitor land cover change using satellite image time series, which can extract comprehensive change information including change time, location, and “from-to” information. This method is based on a hidden Markov model (HMM) trained for each land cover class. Assuming a pixel’s initial class has been obtained, likelihoods of the corresponding model are calculated on incoming time series extracted with a temporal sliding window. By observing the likelihood change over the windows, land cover change can be precisely detected from the dramatic drop of likelihood. The established HMMs are then used for identifying the land cover class after the change. As a case study, the proposed method is applied to monitoring urban encroachment onto farmland in Beijing using 10-year MODIS time series from 2001 to 2010. The performance is evaluated on a validation set for different model structures and thresholds. Compared with other change detection methods, the proposed method shows superior change detection accuracy. In addition, it is also more computationally efficient.

  1. Multi-state Markov models for disease progression in the presence of informative examination times: an application to hepatitis C.

    Science.gov (United States)

    Sweeting, M J; Farewell, V T; De Angelis, D

    2010-05-20

    In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform current disease state and to assume an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and a Markov model that assumes the examination process is non-ignorable. Copyright 2010 John Wiley & Sons, Ltd.

  2. Approximating Markov Chains: What and why

    International Nuclear Information System (INIS)

    Pincus, S.

    1996-01-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics
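
    The approximation sketched in this record is essentially Ulam's method: partition the space into cells, estimate how much of each cell the map sends into every other cell, and read the steady state off the resulting stochastic matrix. A sketch for the chaotic logistic map, with arbitrary cell and sample counts:

        import numpy as np

        def ulam_matrix(f, n_bins=100, n_samples=200):
            # P[i, j] ~ fraction of cell i that the map f sends into cell j.
            P = np.zeros((n_bins, n_bins))
            edges = np.linspace(0.0, 1.0, n_bins + 1)
            for i in range(n_bins):
                x = np.linspace(edges[i], edges[i + 1], n_samples)
                img = np.clip(f(x), 0.0, 1.0 - 1e-12)
                for j in np.digitize(img, edges) - 1:
                    P[i, j] += 1.0 / n_samples
            return P

        P = ulam_matrix(lambda x: 4.0 * x * (1.0 - x))
        ev, vec = np.linalg.eig(P.T)                  # left Perron eigenvector
        pi = np.real(vec[:, np.argmax(np.real(ev))])
        pi = pi / pi.sum()                            # approximate invariant density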

  3. Decisive Markov Chains

    OpenAIRE

    Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard

    2007-01-01

    We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...

  4. H2-control and the separation principle for discrete-time jump systems with the Markov chain in a general state space

    Science.gov (United States)

    Figueiredo, Danilo Zucolli; Costa, Oswaldo Luiz do Valle

    2017-10-01

    This paper deals with the H2 optimal control problem of discrete-time Markov jump linear systems (MJLS) considering the case in which the Markov chain takes values in a general Borel space. It is assumed that the controller has access only to an output variable and to the jump parameter. The goal, in this case, is to design a dynamic Markov jump controller such that the H2-norm of the closed-loop system is minimised. It is shown that the H2-norm can be written as the sum of two H2-norms, such that one of them does not depend on the control, and the other one is obtained from the optimal filter for an infinite-horizon filtering problem. This result can be seen as a separation principle for MJLS with the Markov chain in a Borel space, considering the infinite time horizon case.

  5. Finite-Time Nonfragile Synchronization of Stochastic Complex Dynamical Networks with Semi-Markov Switching Outer Coupling

    Directory of Open Access Journals (Sweden)

    Rathinasamy Sakthivel

    2018-01-01

    Full Text Available The problem of robust nonfragile synchronization is investigated in this paper for a class of complex dynamical networks subject to semi-Markov jumping outer coupling, time-varying coupling delay, randomly occurring gain variation, and stochastic noise over a desired finite-time interval. In particular, the network topology is assumed to follow a semi-Markov process such that it may switch from one to another at different instants. In this paper, the random gain variation is represented by a stochastic variable that is assumed to satisfy the Bernoulli distribution with white sequences. Based on these hypotheses and the Lyapunov-Krasovskii stability theory, a new finite-time stochastic synchronization criterion is established for the considered network in terms of linear matrix inequalities. Moreover, the control design parameters that guarantee the required criterion are computed by solving a set of linear matrix inequality constraints. An illustrative example is finally given to show the effectiveness and advantages of the developed analytical results.

  6. Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains

    Science.gov (United States)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2018-03-01

    We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.

  7. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion on both structured and unstructured networks.
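
    To make the general idea of a temporal Markov model for a discrete velocity process concrete (this is not the stencil construction itself), the sketch below evolves particle velocity classes through an invented three-class transition matrix and accumulates displacement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized velocity classes and a temporal transition matrix.
v_classes = np.array([0.1, 1.0, 2.5])      # representative velocity per class
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])         # row-stochastic, invented values
dt, n_steps, n_particles = 1.0, 500, 2000

state = rng.integers(0, 3, size=n_particles)   # initial velocity classes
x = np.zeros(n_particles)                      # particle positions
for _ in range(n_steps):
    x += v_classes[state] * dt
    # sample each particle's next class from its current row of P
    u = rng.random(n_particles)
    state = (u[:, None] > np.cumsum(P[state], axis=1)).sum(axis=1)

print("mean displacement: %.1f, longitudinal spread (std): %.1f" % (x.mean(), x.std()))
```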

  8. Automatic earthquake detection and classification with continuous hidden Markov models: a possible tool for monitoring Las Canadas caldera in Tenerife

    Energy Technology Data Exchange (ETDEWEB)

    Beyreuther, Moritz; Wassermann, Joachim [Department of Earth and Environmental Sciences (Geophys. Observatory), Ludwig Maximilians Universitaet Muenchen, D-80333 (Germany); Carniel, Roberto [Dipartimento di Georisorse e Territorio Universitat Degli Studi di Udine, I-33100 (Italy)], E-mail: roberto.carniel@uniud.it

    2008-10-01

    A possible interaction of (volcano-)tectonic earthquakes with the continuous seismic noise recorded on the volcanic island of Tenerife was recently suggested, but existing catalogues seem to be far from self-consistent, calling for the development of automatic detection and classification algorithms. In this work we propose the adoption of a methodology based on Hidden Markov Models (HMMs), already widely used in other fields, such as speech classification.

  9. A relation between non-Markov and Markov processes

    International Nuclear Information System (INIS)

    Hara, H.

    1980-01-01

    With the aid of a transformation technique, it is shown that some memory effects in non-Markov processes can be eliminated. In other words, some non-Markov processes can be rewritten in the form of a random walk process, i.e. a Markov process. To this end, two model processes which have some memory or correlation in the random walk process are introduced. An explanation of the memory in the processes is given. (orig.)

  10. Stochastic demand patterns for Markov service facilities with neutral and active periods

    International Nuclear Information System (INIS)

    Csenki, Attila

    2009-01-01

    In an earlier paper, a closed-form expression was obtained for the joint interval reliability of a Markov system with a partitioned state space S = U ∪ D, i.e. for the probability that the system will reside in the set of up states U throughout the union of some specific disjoint time intervals I_l = [θ_l, θ_l + ζ_l], l = 1,...,k. The deterministic time intervals I_l formed a demand pattern specifying the desired active periods. In the present paper, we admit stochastic demand patterns by assuming that the lengths of the active periods, ζ_l, as well as the lengths of the neutral periods, θ_l − (θ_{l−1} + ζ_{l−1}), are random. We explore two mechanisms for modelling random demand: (1) by alternating renewal processes; (2) by sojourn times of some continuous-time Markov chain with a partitioned state space. The first construction results in an expression in terms of a revised version of the moment generating functions of the sojourns of the alternating renewal process. The second construction involves the probability that a Markov chain follows certain patterns of visits to some groups of states and yields an expression using Kronecker matrix operations. The model of a small computer system is analysed to exemplify the ideas.
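
    The building block of such interval-reliability computations is the probability of remaining in the up set U throughout a single period, which is obtained from the sub-generator restricted to U. A minimal sketch for a hypothetical three-state repairable system:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical CTMC generator: states 0 and 1 are up (U), state 2 is down (D);
# all rates are invented.
Q = np.array([[-0.3, 0.2, 0.1],
              [0.4, -0.9, 0.5],
              [1.0, 0.0, -1.0]])

U = [0, 1]
QUU = Q[np.ix_(U, U)]          # sub-generator restricted to the up states
alpha = np.array([1.0, 0.0])   # initial distribution over U

zeta = 2.0                     # length of one active period
# P(system stays in U throughout [0, zeta]) = alpha * expm(QUU * zeta) * 1
reliability = alpha @ expm(QUU * zeta) @ np.ones(len(U))
print("interval reliability:", reliability)
```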

  11. On almost-periodic points of a topological Markov chain

    International Nuclear Information System (INIS)

    Bogatyi, Semeon A; Redkozubov, Vadim V

    2012-01-01

    We prove that a transitive topological Markov chain has almost-periodic points of all D-periods. Moreover, every D-period is realized by continuously many distinct minimal sets. We give a simple constructive proof of the result which asserts that any transitive topological Markov chain has periodic points of almost all periods, and study the structure of the finite set of positive integers that are not periods.

  12. Solution of the Markov chain for the dead time problem

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    A method for solving the equation for the Markov chain, describing the effect of a non-extendible dead time on the statistics of time correlated pulses, is discussed. The equation, which was derived in an earlier paper, describes a non-linear process and is not amenable to exact solution. The present method consists of representing the probability generating function as a factorial cumulant expansion and neglecting factorial cumulants beyond the second. This results in a closed set of non-linear equations for the factorial moments. Stationary solutions of these equations, which are of interest for calculating the count rate, are obtained iteratively. The method is applied to the variable dead time counter technique for estimation of system parameters in passive neutron assay of Pu and reactor noise analysis. Comparisons of results by this method with Monte Carlo calculations are presented. (author)
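
    Such results can be sanity-checked by direct simulation. The sketch below reproduces the effect of a non-extendible (non-paralyzable) dead time on an uncorrelated Poisson pulse train (the time-correlated case treated in the paper would need a correlated source instead); rates and duration are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

rate, tau, T = 100.0, 1e-3, 1000.0   # pulse rate (1/s), dead time (s), duration (s)
arrivals = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))

# Non-extendible dead time: a pulse is recorded only if it arrives at least
# tau after the previously *recorded* pulse.
recorded, last = 0, -np.inf
for t in arrivals:
    if t - last >= tau:
        recorded += 1
        last = t

print("measured rate :", recorded / T)
# classical non-paralyzable correction: m = r / (1 + r * tau)
print("predicted rate:", rate / (1.0 + rate * tau))
```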

  13. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  14. Hidden Markov model tracking of continuous gravitational waves from young supernova remnants

    Science.gov (United States)

    Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.

    2018-02-01

    Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few upon the first implementation, but the cost increases by 2 to 3 orders of magnitude.
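
    The heart of such a scheme is the Viterbi algorithm: given per-bin log-likelihoods at each time step and a transition rule that lets the signal frequency move by at most one bin per step, it recovers the most probable frequency path. A toy sketch in which random scores stand in for the F-statistic values (bin count and allowed shifts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_bins = 50, 64
loglike = rng.normal(size=(n_steps, n_bins))   # stand-in for F-statistic scores

# Viterbi recursion: the tracked frequency may move by at most one bin per step.
score = loglike[0].copy()
back = np.zeros((n_steps, n_bins), dtype=int)
for t in range(1, n_steps):
    best = np.full(n_bins, -np.inf)
    arg = np.zeros(n_bins, dtype=int)
    for d in (-1, 0, 1):                       # allowed bin shifts
        shifted = np.roll(score, d)
        if d == 1:
            shifted[0] = -np.inf               # kill wrap-around entries
        if d == -1:
            shifted[-1] = -np.inf
        better = shifted > best
        best[better] = shifted[better]
        arg[better] = (np.arange(n_bins) - d)[better]
    back[t] = arg
    score = best + loglike[t]

# Backtrack the most probable frequency path.
path = np.empty(n_steps, dtype=int)
path[-1] = int(np.argmax(score))
for t in range(n_steps - 1, 0, -1):
    path[t - 1] = back[t, path[t]]
print("recovered path (first 10 steps):", path[:10])
```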

  15. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

    In the past, probabilistic model checking has mostly been restricted to finite-state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special...

  16. Observation uncertainty in reversible Markov chains.

    Science.gov (United States)

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
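
    As a simplified illustration of quantifying the uncertainty of an estimated Markov model, the sketch below samples transition matrices from the conjugate Dirichlet posterior given observed transition counts; note it omits the reversibility and sparsity constraints that the paper's Gibbs sampler handles, and the counts are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented transition counts from an observed 3-state time series.
C = np.array([[90, 8, 2],
              [10, 70, 20],
              [3, 25, 72]])

# Conjugate posterior: each row of the transition matrix is Dirichlet(C[i] + 1)
# under a flat prior; draw posterior samples of the whole matrix.
n_samples = 1000
samples = np.stack([
    np.stack([rng.dirichlet(C[i] + 1.0) for i in range(3)])
    for _ in range(n_samples)
])

# Posterior uncertainty of an observable, e.g. the self-transition of state 0.
p00 = samples[:, 0, 0]
print("P[0->0]: mean %.3f, 95%% interval (%.3f, %.3f)"
      % (p00.mean(), np.quantile(p00, 0.025), np.quantile(p00, 0.975)))
```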

  17. Perturbed Markov chains

    OpenAIRE

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix-analysis techniques previously used.
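
    A quick numerical way to observe this sensitivity is to compare stationary distributions before and after a small perturbation of the transition matrix (a hypothetical 3-state example; the paper's graph-theoretic machinery is not reproduced here):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible row-stochastic matrix."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.05, 0.90]])
E = np.array([[-0.05, 0.05, 0.0],    # perturbation; rows sum to zero
              [0.00, 0.00, 0.0],
              [0.00, 0.00, 0.0]])

for eps in (0.0, 0.5, 1.0):
    print("eps = %.1f ->" % eps, np.round(stationary(P + eps * E), 4))
```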

  18. The Markov process admits a consistent steady-state thermodynamic formalism

    Science.gov (United States)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism is established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation can be rigorously derived. To be concrete, we prove that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence can be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jumps, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also reduces to that for master equations as the discretization step becomes smaller and smaller. Our analysis indicates that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of the underlying detailed models.

  19. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    Energy Technology Data Exchange (ETDEWEB)

    Balan, I.

    2005-05-01

    This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for continuous-time, discrete-space Markov chains (CTMC), as an alternative to other computationally expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled fault-tree/Markov-chain technique, i.e. the abstraction of the physical system is performed using the fault tree as the high-level interface, which is then automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses using ASAP applied to the CTMC equations are performed to study the influence of uncertainties in input data on the reliability measures and to gain confidence in the final reliability results. The methods to generate the Markov chain and the ASAP for the Markov chain equations have been implemented in the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using a Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using Markov chains have been concentrated on the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have been...

  20. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec...

  1. An introduction to continuous-time stochastic processes theory, models, and applications to finance, biology, and medicine

    CERN Document Server

    Capasso, Vincenzo

    2015-01-01

    This textbook, now in its third edition, offers a rigorous and self-contained introduction to the theory of continuous-time stochastic processes, stochastic integrals, and stochastic differential equations. Expertly balancing theory and applications, the work features concrete examples of modeling real-world problems from biology, medicine, industrial applications, finance, and insurance using stochastic methods. No previous knowledge of stochastic processes is required. Key topics include: * Markov processes * Stochastic differential equations * Arbitrage-free markets and financial derivatives * Insurance risk * Population dynamics and epidemics * Agent-based models New to the Third Edition: * Infinitely divisible distributions * Random measures * Lévy processes * Fractional Brownian motion * Ergodic theory * Karhunen-Loève expansion * Additional applications * Additional exercises * Smoluchowski approximation of Langevin systems An Introduction to Continuous-Time Stochastic Processes, Third Editio...

  2. Quantum Markov Chain Mixing and Dissipative Engineering

    DEFF Research Database (Denmark)

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state of the system at the present point in time, but not on the history of events. Very many important processes in nature are of this type, therefore a good understanding of their behaviour has turned out to be very fruitful for science. Markov chains always have a non-empty set of limiting distributions (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical...

  3. First and second order Markov chain models for synthetic generation of wind speed time series

    International Nuclear Information System (INIS)

    Shamshad, A.; Bawadi, M.A.; Wan Hussin, W.M.A.; Majid, T.A.; Sanusi, S.A.M.

    2005-01-01

    Hourly wind speed time series data of two meteorological stations in Malaysia have been used for stochastic generation of wind speed data using the transition matrix approach of the Markov chain process. The transition probability matrices have been formed using two different approaches: the first approach involves the use of the first-order transition probability matrix of a Markov chain, and the second involves the use of a second-order transition probability matrix that uses the current and preceding values to describe the next wind speed value. The algorithm to generate the wind speed time series from the transition probability matrices is described. Uniform random number generators have been used for transitions between successive time states and for within-state wind speed values. The statistical properties of the generated series are compared with those of the observed data to assess how well each approach retains them. The main statistical properties used for this purpose are the mean, standard deviation, median, percentiles, Weibull distribution parameters, autocorrelations and spectral density of wind speed values. The comparison of the observed wind speeds and the synthetically generated ones shows that the statistical characteristics are satisfactorily preserved.
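
    A minimal first-order version of this procedure might look as follows: bin speeds into states, count transitions to form the transition matrix, then walk the chain and draw a speed uniformly within each visited bin. The "observed" series here is a synthetic stand-in, and the 1 m/s binning is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for an observed hourly wind speed series (m/s).
obs = rng.gamma(2.0, 2.0, size=5000)

# Discretize into 1 m/s states and count first-order transitions.
edges = np.arange(0.0, obs.max() + 1.0, 1.0)
states = np.digitize(obs, edges) - 1
n = states.max() + 1
P = np.zeros((n, n))
np.add.at(P, (states[:-1], states[1:]), 1.0)
empty = P.sum(axis=1) == 0
P[empty, empty] = 1.0                     # self-loop for states never left
P /= P.sum(axis=1, keepdims=True)

# Walk the chain; draw a uniform speed within each visited 1 m/s bin.
s, synth = states[0], np.empty(5000)
for t in range(5000):
    s = rng.choice(n, p=P[s])
    synth[t] = rng.uniform(edges[s], edges[s] + 1.0)

print("mean obs %.2f / synth %.2f, std obs %.2f / synth %.2f"
      % (obs.mean(), synth.mean(), obs.std(), synth.std()))
```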

  4. Markov-modulated infinite-server queues driven by a common background process

    OpenAIRE

    Mandjes , Michel; De Turck , Koen

    2016-01-01

    This paper studies a system with multiple infinite-server queues which are modulated by a common background process. If this background process, being modeled as a finite-state continuous-time Markov chain, is in state j, then the arrival rate into the i-th queue is λ_{i,j}, whereas the service times of customers present in this queue are exponentially distributed with mean μ_{i,j}^{−1}; at each of the individual queues all customers present are served in parallel (thus refl...

  5. A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains

    Science.gov (United States)

    Gan, Tingyue; Cameron, Maria

    2017-06-01

    Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.

  6. Pitch angle scattering of relativistic electrons from stationary magnetic waves: Continuous Markov process and quasilinear theory

    International Nuclear Information System (INIS)

    Lemons, Don S.

    2012-01-01

    We develop a Markov process theory of charged particle scattering from stationary, transverse, magnetic waves. We examine approximations that lead to quasilinear theory, in particular the resonant diffusion approximation. We find that, when appropriate, the resonant diffusion approximation simplifies the result of the weak turbulence approximation without significant further restricting the regime of applicability. We also explore a theory generated by expanding drift and diffusion rates in terms of a presumed small correlation time. This small correlation time expansion leads to results valid for relatively small pitch angle and large wave energy density - a regime that may govern pitch angle scattering of high-energy electrons into the geomagnetic loss cone.

  7. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effect or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...
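
    The particle-filter half of PMCMC is straightforward to sketch. Below, a bootstrap filter estimates the log-likelihood of a simple local-level (random walk plus noise) model, the quantity PMCMC would feed into a Metropolis-Hastings acceptance ratio; the model and parameter values are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a local-level model: x_t = x_{t-1} + q*eta_t,  y_t = x_t + r*eps_t.
T, q, r = 200, 0.1, 0.5
x = np.cumsum(q * rng.normal(size=T))
y = x + r * rng.normal(size=T)

def pf_loglike(y, q, r, n_part=500):
    """Bootstrap particle filter estimate of log p(y | q, r)."""
    particles = np.zeros(n_part)
    ll = 0.0
    for obs in y:
        particles = particles + q * rng.normal(size=n_part)        # propagate
        w = np.exp(-0.5 * ((obs - particles) / r) ** 2) / (r * np.sqrt(2 * np.pi))
        ll += np.log(w.mean())                                     # likelihood term
        particles = particles[rng.choice(n_part, n_part, p=w / w.sum())]  # resample
    return ll

print("log-likelihood estimate:", pf_loglike(y, q, r))
```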

  8. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and the calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
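
    Under the common assumption that each centre recruits according to a Poisson process, the time to reach the recruitment target is easy to estimate by simulation, since the pooled process is Poisson with the summed rate. A sketch with invented centre rates and target:

```python
import numpy as np

rng = np.random.default_rng(6)

rates = np.array([0.8, 1.2, 0.5, 2.0])   # hypothetical patients/week per centre
target = 200                              # required number of patients
n_sim = 10_000

# Superposed Poisson recruitment: the time to the target-th arrival of a
# Poisson process with rate sum(rates) is Gamma(target, 1/sum(rates)).
durations = rng.gamma(shape=target, scale=1.0 / rates.sum(), size=n_sim)

print("expected recruitment time: %.1f weeks" % durations.mean())
print("95th percentile          : %.1f weeks" % np.quantile(durations, 0.95))
```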

  9. Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction

    Science.gov (United States)

    Bui, Lam Thu; Barlow, Michael

    We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, either at a population or individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information on when it will do local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.

  10. A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Directory of Open Access Journals (Sweden)

    Kemin Wang

    2014-01-01

    Full Text Available Model checking of infinite-state continuous-time Markov chains will inevitably encounter the state explosion problem when constructing the CTMC model; our method is to work with a truncated model of the infinite one. To obtain a truncated model sufficient for model checking system properties expressed in Continuous Stochastic Logic, we propose a multistep extending advanced truncation method towards model construction of CTMCs and implement it in the INFAMY model checker; the experimental results show that our method is effective.

  11. Distinguishing patterns in the dynamics of long-term medication use by Markov analysis: beyond persistence

    Directory of Open Access Journals (Sweden)

    Lammers Jan-Willem J

    2007-07-01

    Full Text Available Abstract Background In order to accurately distinguish gaps of varying length in drug treatment for chronic conditions from discontinuation without resuming therapy, short-term observation does not suffice. Thus, the long-term use of inhalation corticosteroids (ICS) during a ten-year period is investigated. To describe medication use as a continuum, taking into account the timeliness and consistency of refilling, a Markov model is proposed. Methods Patients who filled at least one prescription in 1993 were selected from the PHARMO medical record linkage system (RLS), containing >95% of prescription dispensings per patient, originating from community pharmacy records of 6 medium-sized cities in the Netherlands. The probabilities of continuous use (the refilling of at least one ICS prescription in each year of follow-up) and of medication-free periods were assessed by Markov analysis. Stratified analysis according to new use was performed. Results The transition probabilities of the refilling of at least one ICS prescription in the subsequent year of follow-up were assessed for each year of follow-up and for the total study period. The change of transition probabilities in time was evaluated; e.g. the probability of continuing ICS use by starters in the first two years (51%) of follow-up increased to more than 70% in the following years. The probabilities of different patterns of medication use were assessed: continuous use (7.7%), cumulative medication gaps (1-8 years; 69.1%) and discontinuing (23.2%) during ten-year follow-up for new users. New users had a lower probability of continuous use (7.7%) and more variability in ICS refill patterns than previous users (56%). Conclusion In addition to well-established methods in epidemiology to ascertain compliance and persistence, a Markov model could be useful to further specify the variety of possible patterns of medication use within the continuum of adherence. This Markov model describes variation in...

  12. Perturbation approach to scaled type Markov renewal processes with infinite mean

    OpenAIRE

    Pajor-Gyulai, Zsolt; Szász, Domokos

    2010-01-01

    Scaled type Markov renewal processes generalize classical renewal processes: renewal times come from a one-parameter family of probability laws and the sequence of the parameters is the trajectory of an ergodic Markov chain. Our primary interest here is the asymptotic distribution of the Markovian parameter at time t → ∞. The limit, of course, depends on the stationary distribution of the Markov chain. The results, however, are essentially different depending on whether the expectation...

  13. A Novel Generation Method for the PV Power Time Series Combining the Decomposition Technique and Markov Chain Theory

    DEFF Research Database (Denmark)

    Xu, Shenzhi; Ai, Xiaomeng; Fang, Jiakun

    2017-01-01

    Photovoltaic (PV) power generation has developed considerably in recent years, but the intermittency and volatility of its output seriously affect the secure operation of the power system. In order to better understand PV generation and provide sufficient data support for analysing its impacts, a novel generation method for PV power time series combining the decomposition technique and Markov chain theory is presented in this paper. It extracts important factors from historical data of existing PV plants and then reproduces new data with similar patterns. In detail, the proposed method first decomposes the PV power time series into three parts: an ideal output curve, an amplitude parameter series and a random fluctuating component. Then the daily ideal output curve is generated by the extraction of typical daily data, and the amplitude parameter series is based on the Markov chain Monte Carlo (MCMC...

  14. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM model was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct predictions against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting.

  15. Markov's theorem and algorithmically non-recognizable combinatorial manifolds

    International Nuclear Information System (INIS)

    Shtan'ko, M A

    2004-01-01

    We prove the theorem of Markov on the existence of an algorithmically non-recognizable combinatorial n-dimensional manifold for every n≥4. We construct for the first time a concrete manifold which is algorithmically non-recognizable. A strengthened form of Markov's theorem is proved using the combinatorial methods of regular neighbourhoods and handle theory. The proofs coincide for all n≥4. We use Borisov's group with insoluble word problem. It has two generators and twelve relations. The use of this group forms the base for proving the strengthened form of Markov's theorem

  16. Prediction of inspection intervals using the Markov analysis; Prediccion de intervalos de inspeccion utilizando analisis de Markov

    Energy Technology Data Exchange (ETDEWEB)

    Rea, R.; Arellano, J. [IIE, Calle Reforma 113, Col. Palmira, Cuernavaca, Morelos (Mexico)]. e-mail: rrea@iie.org.mx

    2005-07-01

    To cope with the unmanageable number of Markov states in systems with a large number of components, a modification of the Markov method, termed truncated Markov analysis, is proposed, in which dependence among component faults is assumed to be negligible. With it, the number of states grows linearly (not exponentially) with the number of components of the system, vastly simplifying the analysis. As an example, the proposed method was applied to the HPCS system of the CLV, considering its 18 main components. Each component is assumed to take three states: operational, with hidden fault, and with revealed fault. Additionally, the configuration of the HPCS system is taken into account by means of a dependability block diagram to estimate its unavailability at the system level. The results of the model proposed here are compared with other methods and approaches used to simplify Markov analysis. A modification of the inspection intervals of three components of the HPCS system is also proposed, based on the Markov model developed and on the maximum time allowed by the ASME code (NUREG-1482) for inspecting components of systems held in reserve in nuclear power plants. (Author)

  17. Belief Bisimulation for Hidden Markov Models Logical Characterisation and Decision Algorithm

    DEFF Research Database (Denmark)

    Jansen, David N.; Nielson, Flemming; Zhang, Lijun

    2012-01-01

    This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual partition refinement algorithm is enough. Belief bisimilarity, being a relation on the continuous space of belief states, cannot be described directly. Instead, we show how to generate a linear equation system in time cubic in the number of states.

  18. Hidden Markov Item Response Theory Models for Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.

  19. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  20. MARKOV GRAPHS OF ONE-DIMENSIONAL DYNAMICAL SYSTEMS AND THEIR DISCRETE ANALOGUES

    Directory of Open Access Journals (Sweden)

    SERGIY KOZERENKO

    2016-04-01

    Full Text Available One feature of the famous Sharkovsky's theorem is that it can be proved using digraphs of a special type (the so-called Markov graphs). The most general definition assigns a Markov graph to every continuous map from a topological graph to itself. We show that this definition is too broad, i.e. every finite digraph can be viewed as a Markov graph of some one-dimensional dynamical system on a tree. We therefore consider discrete analogues of Markov graphs for vertex maps on combinatorial trees and characterize all maps on trees whose discrete Markov graphs are of the following types: complete, complete bipartite, disjoint unions of cycles, or with every arc being a loop.

  1. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
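
    The standard solution machinery behind such models is value iteration on the Bellman optimality equation. A compact sketch for a generic finite MDP (the transition tensor and rewards are random placeholders, not the transplant model discussed in the tutorial):

```python
import numpy as np

rng = np.random.default_rng(7)
n_states, n_actions, gamma = 5, 3, 0.95

# P[a, s, s']: transition probabilities; R[s, a]: expected immediate rewards.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(10_000):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)   # action values
    V_new = Q.max(axis=1)                          # Bellman backup
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal values:", np.round(V, 3), "policy:", policy)
```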

  2. Renewal characterization of Markov modulated Poisson processes

    Directory of Open Access Journals (Sweden)

    Marcel F. Neuts

    1989-01-01

    Full Text Available A Markov-modulated Poisson process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λ_i whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P = {i : λ_i > 0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
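
    Simulating an MMPP makes the definition concrete: run the modulating chain J(t), and within each sojourn draw Poisson events at that state's intensity. A sketch with an invented two-state modulator:

```python
import numpy as np

rng = np.random.default_rng(8)

Q = np.array([[-0.5, 0.5],    # generator of the modulating chain J(t)
              [1.0, -1.0]])
lam = np.array([2.0, 10.0])   # Poisson intensity in each state (invented)

t, T, j, events = 0.0, 100.0, 0, []
while t < T:
    sojourn = rng.exponential(1.0 / -Q[j, j])   # exponential holding time
    end = min(t + sojourn, T)
    k = rng.poisson(lam[j] * (end - t))         # events during this sojourn
    events.extend(np.sort(rng.uniform(t, end, k)))
    t, j = end, 1 - j                           # two states: jump to the other

print("simulated event rate:", len(events) / T)
```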

  3. Travel Cost Inference from Sparse, Spatio-Temporally Correlated Time Series Using Markov Models

    DEFF Research Database (Denmark)

    Yang, Bin; Guo, Chenjuan; Jensen, Christian S.

    2013-01-01

    ...of such time series offers insight into the underlying system and enables prediction of system behavior. While the techniques presented in the paper apply more generally, we consider the case of transportation systems and aim to predict travel cost from GPS tracking data from probe vehicles. Specifically, each road segment has an associated travel-cost time series, which is derived from GPS data. We use spatio-temporal hidden Markov models (STHMM) to model correlations among different traffic time series. We provide algorithms that are able to learn the parameters of an STHMM while contending with the sparsity, spatio-temporal correlation, and heterogeneity of the time series. Using the resulting STHMM, near future travel costs in the transportation network, e.g., travel time or greenhouse gas emissions, can be inferred, enabling a variety of routing services, e.g., eco-routing. Empirical studies...

  4. A Markov Chain approach for deriving the statistics of time-correlated pulses in the presence of non-extendible dead time

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1996-01-01

    The problem of deriving the statistics of time-correlated detector pulses in the presence of a non-extendible dead time is studied by constructing a Markov Chain to describe the process. Expressions for the transition matrix are derived for problems in the passive neutron assay of Pu and (zero-power) reactor noise. Perturbative and numerical solutions of the master equations are discussed for a simple problem in the passive neutron assay of Pu. Expressions for the mean count rate and variance in a given interval are derived. (Author)

  5. Growth of preferential attachment random graphs via continuous ...

    Indian Academy of Sciences (India)

    Preferential attachment processes have a long history dating back at least to Yule ... We remark that some connections to branching and continuous-time Markov ... convenience, we provide a short proof of Lemma 2.1 in the general form in ...

  6. Estimation with Right-Censored Observations Under A Semi-Markov Model.

    Science.gov (United States)

    Zhao, Lihui; Hu, X Joan

    2013-06-01

    The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study.

  7. Markov Chain Ontology Analysis (MCOA).

    Science.gov (United States)

    Frost, H Robert; McCray, Alexa T

    2012-02-03

    Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.

  8. Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains

    Directory of Open Access Journals (Sweden)

    Volker Turau

    2018-05-01

    Full Text Available The analysis of self-stabilizing algorithms is often limited to the worst-case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering the fact that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault. This event has a much higher probability than multiple concurrent faults. Therefore, the worst-case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms based on Markov chains in combination with lumping. To illustrate the applicability of the techniques they are applied to a new self-stabilizing coloring algorithm.
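
    Once recovery from a single fault is modelled as an absorbing Markov chain (with the legitimate states lumped into one absorbing state), the mean time to recover follows from the fundamental matrix N = (I - T)^(-1), whose row sums give the expected number of steps to absorption. A small hypothetical example:

```python
import numpy as np

# Hypothetical absorbing chain: states 0-2 are illegitimate (transient),
# state 3 is the legitimate state the algorithm converges back to.
P = np.array([[0.2, 0.5, 0.2, 0.1],
              [0.1, 0.3, 0.4, 0.2],
              [0.0, 0.1, 0.3, 0.6],
              [0.0, 0.0, 0.0, 1.0]])

T = P[:3, :3]                        # transient-to-transient block
N = np.linalg.inv(np.eye(3) - T)     # fundamental matrix
steps = N @ np.ones(3)               # expected steps until absorption
print("mean recovery time from each faulty state:", np.round(steps, 2))
```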

  9. The algebra of the general Markov model on phylogenetic trees and networks.

    Science.gov (United States)

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  10. Optimal State Estimation for Discrete-Time Markov Jump Systems with Missing Observations

    Directory of Open Access Journals (Sweden)

    Qing Sun

    2014-01-01

    Full Text Available This paper is concerned with the optimal linear estimation for a class of discrete-time Markov jump systems with missing observations. An observer-based approach of fault detection and isolation (FDI) is investigated as a detection mechanism for the fault case. For systems with known information, a conditional prediction of observations is applied and fault observations are replaced and isolated; then, an FDI linear minimum mean square error estimation (LMMSE) can be developed by comprehensively utilizing the correct information offered by the systems. A recursive filtering equation based on geometric arguments can be obtained. Meanwhile, the stability of the state estimator is guaranteed under an appropriate assumption.

  11. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    Science.gov (United States)

    Shi, Wei; Xia, Jun

    2017-02-01

    Water quality risk management is a global research hotspot linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model, with either time or a land cover index as explanatory variable, is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into consideration the non-stationarity in the marginal distribution and/or the time variation in the dependence structure between water quality series, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The larger first-order Markov joint transition probability indicates that water quality states Class V_w, Class IV and Class III will occur easily in the water body at Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the dependence structure changes, the time-varying copula has a better fitting performance than the copula with a constant or a time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to the Class V and Class IV water quality standards, respectively.

  12. Performability assessment by model checking of Markov reward models

    NARCIS (Netherlands)

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility

  13. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor...

  14. Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions

    Directory of Open Access Journals (Sweden)

    Gert Van Dijck

    2011-07-01

    Full Text Available For two decades, wavelet packet decompositions have been shown to be effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase of classification accuracy with the increase of the frequency resolution level of the energy features.

  15. Composable Markov Building Blocks

    NARCIS (Netherlands)

    Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.; Prade, H.; Subrahmanian, V.S.

    2007-01-01

    In situations where disjunct parts of the same process are described by their own first-order Markov models and only one model applies at a time (activity in one model coincides with non-activity in the other models), these models can be joined together into one. Under certain conditions, nearly all

  16. Phasic Triplet Markov Chains.

    Science.gov (United States)

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched for within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site and carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The phasic triplet Markov chain proposed in this paper overcomes this difficulty by means of an auxiliary underlying process, in accordance with triplet Markov chain theory. Related Bayesian restoration techniques and parameter estimation procedures for the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.

  17. QRS complex detection based on continuous density hidden Markov models using univariate observations

    Science.gov (United States)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous density hidden Markov models (HMMs) is proposed. The HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with a 2-state HMM trained on QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We conclude that these univariate sequences provide enough information to characterize the QRS complex dynamics with HMMs. Future work is directed at the use of multivariate observations to increase the detection performance.
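
    The detection scheme, training an HMM on QRS observations and then thresholding the log-likelihood of a sliding window, can be sketched with the hmmlearn package. This is a simplified illustration with placeholder data, not the authors' implementation; the window length and threshold would in practice be tuned by ROC analysis as the abstract describes:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency

rng = np.random.default_rng(1)
qrs_train = rng.standard_normal((500, 1))  # placeholder for stacked QRS samples
model = GaussianHMM(n_components=2, n_iter=50).fit(qrs_train)

def detect(signal, win=40, threshold=-60.0):
    """Flag window positions whose log-likelihood under the QRS model
    exceeds a fixed threshold."""
    hits = []
    for start in range(len(signal) - win):
        ll = model.score(signal[start:start + win].reshape(-1, 1))
        if ll > threshold:
            hits.append(start)
    return hits

ecg = rng.standard_normal(2000)  # placeholder ECG trace
print(len(detect(ecg)))
```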

  18. Automatic creation of Markov models for reliability assessment of safety instrumented systems

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2008-01-01

    After the release of new international functional safety standards such as IEC 61508, more attention is paid to the safety and availability of safety instrumented systems. Markov analysis is a powerful and flexible technique for assessing the reliability measures of safety instrumented systems, but it is error-prone and time-consuming to create Markov models manually. This paper presents a new technique to automatically create Markov models for the reliability assessment of safety instrumented systems. Many safety-related factors, such as failure modes, self-diagnostics, restorations, common cause and voting, are included in the Markov models. A framework is generated first based on voting, failure modes and self-diagnostics. Then, repairs and common-cause failures are incorporated into the framework to build a complete Markov model. Eventual simplification of the Markov models can be done by state merging. Examples given in this paper show how explosively the size of a Markov model increases as the system becomes only slightly more complicated, as well as the advantages of automatic creation of Markov models.
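
    The core of such automatic model creation, enumerating component states and assembling a generator matrix from failure and repair rates, can be sketched as below for a 1oo2 voting configuration. The rates and the independence assumption are illustrative and omit the paper's self-diagnostic and common-cause factors:

```python
import itertools
import numpy as np

LAM, MU = 1e-4, 1e-2   # illustrative per-hour failure and repair rates
N = 2                  # two redundant channels, 1oo2 voting

# Enumerate component states automatically: 0 = working, 1 = failed.
states = list(itertools.product((0, 1), repeat=N))
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))
for s in states:
    for k in range(N):
        t = list(s)
        t[k] ^= 1                        # toggle component k
        Q[idx[s], idx[tuple(t)]] += LAM if s[k] == 0 else MU
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# 1oo2 voting: the system is up while at least one channel works.
print(sum(p for s, p in zip(states, pi) if 0 in s))
```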

  19. Tornadoes and related damage costs: statistical modelling with a semi-Markov approach

    Directory of Open Access Journals (Sweden)

    Guglielmo D’Amico

    2016-09-01

    Full Text Available We propose a statistical modelling approach for predicting and simulating occurrences of tornadoes and accumulated cost distributions over a time interval. This is achieved by modelling the tornado intensity, measured with the Fujita scale, as a stochastic process. Since the Fujita scale divides tornado intensity into six states, it is possible to model the tornado intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reproduce the duration effect that is detected in tornado occurrence. The superiority of the semi-Markov model as compared to the Markov chain model is also affirmed by means of a statistical test of hypothesis. As an application, we compute the expected value and the variance of the costs generated by tornadoes over a given time interval in a given area. The paper contributes to the literature by demonstrating that semi-Markov models represent an effective tool for the physical analysis of tornadoes as well as for the estimation of the economic damage to property.
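
    A semi-Markov model of this kind separates the embedded chain of intensity transitions from the sojourn-time distributions. A minimal simulation sketch, where the uniform transition matrix and the Weibull parameters are placeholders rather than estimates from the tornado data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_states = 6                                       # Fujita classes F0..F5
P = np.full((n_states, n_states), 1.0 / n_states)  # embedded chain (illustrative)
shape, scale = 0.8, 30.0                           # illustrative Weibull sojourn parameters

def simulate(horizon_days=365.0, state=0):
    """Semi-Markov trajectory: Weibull holding times between jumps
    of the embedded Fujita-scale chain."""
    t, path = 0.0, []
    while t < horizon_days:
        t += scale * rng.weibull(shape)   # sojourn in the current state
        path.append((round(t, 1), state))
        state = rng.choice(n_states, p=P[state])
    return path

print(simulate()[:5])
```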

  20. Composable Markov Building Blocks

    NARCIS (Netherlands)

    Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.

    2007-01-01

    In situations where disjunct parts of the same process are described by their own first-order Markov models, these models can be joined together under the constraint that there can only be one activity at a time, i.e. the activities of one model coincide with non-activity in the other models. Under

  1. Non-stationary Markov chains

    OpenAIRE

    Mallak, Saed

    1996-01-01

    Ankara : Department of Mathematics and Institute of Engineering and Sciences of Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaf 29). In this work, we studied the ergodicity of non-stationary Markov chains. We give several examples with different cases. We proved that given a sequence of Markov chains such that the limit of this sequence is an ergodic Markov chain, then the limit of the combination ...

  2. NonMarkov Ito Processes with 1-State Memory

    Science.gov (United States)

    McCauley, Joseph L.

    2010-08-01

    A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward-time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
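
    For reference, the Chapman-Kolmogorov equation that the abstract repeatedly invokes reads, for a transition density p and any intermediate time t2:

```latex
% Chapman-Kolmogorov equation; it holds for the 1-state-memory Ito
% processes above even though the Markov property fails.
p(x_3, t_3 \mid x_1, t_1)
  = \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, \mathrm{d}x_2,
  \qquad t_1 < t_2 < t_3 .
```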

  3. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

    Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.

  4. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel

    Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.

  5. Event-Triggered Asynchronous Guaranteed Cost Control for Markov Jump Discrete-Time Neural Networks With Distributed Delay and Channel Fading.

    Science.gov (United States)

    Yan, Huaicheng; Zhang, Hao; Yang, Fuwen; Zhan, Xisheng; Peng, Chen

    2017-08-18

    This paper is concerned with the guaranteed cost control problem for a class of Markov jump discrete-time neural networks (NNs) with an event-triggered mechanism, asynchronous jumping, and fading channels. The Markov jump NNs are introduced to be close to reality, where the modes of the NNs and of the guaranteed cost controller are determined by two mutually independent Markov chains. The asynchronous phenomenon is considered, which increases the difficulty of designing the required mode-dependent controller. The event-triggered mechanism is designed by comparing the relative measurement error with the last triggered state during data transmission, which is used to eliminate dispensable transmissions and reduce networked energy consumption. In addition, signal fading is considered to account for the effects of signal reflection and shadowing in wireless networks, and is modeled by a novel Rice fading model. Some novel sufficient conditions are obtained to guarantee that the closed-loop system reaches a specified cost value under the designed jumping state feedback control law in terms of linear matrix inequalities. Finally, some simulation results are provided to illustrate the effectiveness of the proposed method.

  6. The application of Markov decision process in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    As the restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the aisle and customers coming and going, traditional path planning algorithms are not ideal. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm based on the traditional Markov decision process. First, MDR is used to plan a global path, and the robot then navigates along this path. When the sensor detects no obstruction ahead, the immediate reward of the current state is increased; when the sensor detects an obstacle ahead, a new global path that avoids the obstacle is planned with the current position as the starting point, and the immediate reward of that state is reduced. This continues until the target is reached. After the robot has learned for a period of time, it can avoid those places where obstacles are often present when planning a path. Analysis of the simulation experiments shows that the algorithm achieves good results for global path planning in a dynamic environment.
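
    The MDR algorithm is only described at a high level above. Its backbone, value iteration over state rewards that are adjusted when obstacles are sensed, can be sketched on a toy one-dimensional corridor; all rewards and the discount factor are illustrative:

```python
import numpy as np

n, gamma = 10, 0.9
reward = np.full(n, -1.0)
reward[-1] = 10.0                      # goal cell at the right end
V = np.zeros(n)

for _ in range(200):                   # value iteration to near convergence
    V = np.array([reward[s] + gamma * max(V[max(s - 1, 0)], V[min(s + 1, n - 1)])
                  for s in range(n)])

# When a sensor reports an obstacle in a cell, lower that cell's reward
# and re-run the iteration from the robot's current position (the MDR idea).
policy = ["left" if V[max(s - 1, 0)] > V[min(s + 1, n - 1)] else "right"
          for s in range(n)]
print(V.round(2), policy)
```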

  7. Elements of automata theory and the theory of Markov chains. [Self-organizing control systems

    Energy Technology Data Exchange (ETDEWEB)

    Lind, M

    1975-03-01

    Selected topics from automata theory and the theory of Markov chains are treated. In particular, finite-memory automata are discussed in detail, and the results are used to formulate an automaton model of a class of continuous systems. Stochastic automata are introduced as a natural generalization of the deterministic automaton. Markov chains are shown to be closely related to stochastic automata. Results from Markov chain theory are thereby directly applicable to the analysis of stochastic automata. This report provides the theoretical foundation for the investigation in Riso Report No. 315 of a class of self-organizing control systems. (25 figures) (auth)

  8. Bayesian Markov chain Monte Carlo Inversion of Time-Lapse Geophysical Data To Characterize the Vadose Zone

    DEFF Research Database (Denmark)

    Scholer, Marie; Irving, James; Zibar, Majken Caroline Looms

    Geophysical methods have the potential to provide valuable information on hydrological properties in the unsaturated zone. In particular, time-lapse geophysical data, when coupled with a hydrological model and inverted stochastically, may allow for the effective estimation of subsurface hydraulic parameters and their corresponding uncertainties. In this study, we use a Bayesian Markov-chain-Monte-Carlo (MCMC) inversion approach to investigate how much information regarding vadose zone hydraulic properties can be retrieved from time-lapse crosshole GPR data collected at the Arrenaes field site...
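
    Such a Bayesian MCMC inversion is typically driven by a random-walk Metropolis step. A minimal, self-contained sketch in which a toy Gaussian posterior stands in for the coupled hydrological-geophysical likelihood:

```python
import numpy as np

def metropolis(log_post, x0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler over a parameter vector."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy posterior: standard Gaussian in two "hydraulic parameters".
chain = metropolis(lambda x: -0.5 * np.sum(x ** 2), x0=[1.0, -1.0])
print(chain.mean(axis=0))   # should be near zero
```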

  9. The Markov chain method for solving dead time problems in the space dependent model of reactor noise

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    The discrete time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of space energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point model equations with suitably defined parameters. (author)

  10. Criterion of Semi-Markov Dependent Risk Model

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun MO; Xiang Qun YANG

    2014-01-01

    A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results make the relations between the elements of the semi-Markov dependent risk model clearer and are applicable to the Markov dependent risk model.

  11. Benchmarking of a Markov multizone model of contaminant transport.

    Science.gov (United States)

    Jones, Rachael M; Nicas, Mark

    2014-10-01

    A Markov chain model previously applied to the simulation of advection and diffusion processes of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with the grid length aspect ratio when turbulence is incorporated by way of the turbulent diffusion coefficient and advection is present. However, the magnitude of the prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  12. On Characterisation of Markov Processes Via Martingale Problems

    Indian Academy of Sciences (India)

    This extension is used to improve on a criterion for a probability measure to be invariant for the semigroup associated with the Markov process. We also give examples of martingale problems that are well-posed in the class of solutions which are continuous in probability but for which no r.c.l.l. solution exists.

  13. The spectral method and ergodic theorems for general Markov chains

    International Nuclear Information System (INIS)

    Nagaev, S V

    2015-01-01

    We study the ergodic properties of Markov chains with an arbitrary state space and prove a geometric ergodic theorem. The method of the proof is new: it may be described as an operator method. Our main result is an ergodic theorem for Harris-Markov chains in the case when the return time to some fixed set has finite expectation. Our conditions for the transition function are more general than those used by Athreya-Ney and Nummelin. Unlike them, we impose restrictions not on the original transition function but on the transition function of an embedded Markov chain constructed from the return times to the fixed set mentioned above. The proof uses the spectral theory of linear operators on a Banach space

  14. Gold price effect on stock market: A Markov switching vector error correction approach

    Science.gov (United States)

    Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok

    2014-06-01

    Gold is a popular precious metal whose demand is driven not only by practical use but also by its role as a popular investment commodity. Since the stock market reflects a country's growth, the effect of the gold price on stock market behavior is of interest in this study. Markov Switching Vector Error Correction Models are applied to analyse the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. Besides, there are numerous specifications of Markov Switching Vector Error Correction Models, and this paper compares the intercept-adjusted Markov Switching Vector Error Correction Model and the intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine the best model representation for capturing the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model provides more significant and reliable results than the intercept-adjusted Markov Switching Vector Error Correction Models.
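
    For experimentation, statsmodels ships Markov-switching regressions. The sketch below fits a two-regime switching regression with regime-dependent variance, a much-simplified relative of the MS-VECM specifications compared in the paper (the series are synthetic placeholders and there is no error-correction term):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
gold = rng.standard_normal(300)                # stand-in for gold returns
stock = 0.4 * gold + rng.standard_normal(300)  # stand-in for stock returns

mod = sm.tsa.MarkovRegression(stock, k_regimes=2, exog=gold,
                              switching_variance=True)
res = mod.fit()
print(res.summary())
```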

  15. Prediction of inspection intervals using the Markov analysis

    International Nuclear Information System (INIS)

    Rea, R.; Arellano, J.

    2005-01-01

    To deal with the unmanageable number of Markov states in systems with a large number of components, a modification of the Markov method, termed truncated Markov analysis, is proposed, in which the dependence among component faults is assumed to be negligible. With this, the number of states increases linearly (not exponentially) with the number of components of the system, greatly simplifying the analysis. As an example, the proposed method was applied to the HPCS system of the CLV, considering its 18 main components. Each component is assumed to take three states: operational, with hidden fault, and with revealed fault. Additionally, the configuration of the HPCS system is taken into account by means of a dependability block diagram to estimate its unavailability at the system level. The results of the model proposed here are compared with other methods and approaches used to simplify Markov analysis. A modification of the inspection intervals of three components of the HPCS system is also proposed. This is done on the basis of the developed Markov model and the maximum time allowed by the ASME code (NUREG-1482) for inspecting components of systems that are in standby in nuclear power plants. (Author)

  16. Descriptive and predictive evaluation of high resolution Markov chain precipitation models

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten

    2012-01-01

    A time series of tipping-bucket recordings of precipitation with very high temporal and volumetric resolution is modelled using Markov chain models. Both first- and second-order Markov models, as well as seasonal and diurnal models, are investigated and evaluated using likelihood-based techniques. The fi...

  17. Classification of customer lifetime value models using Markov chain

    Science.gov (United States)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm's potential future reward from a customer can be determined by the customer lifetime value (CLV). There are several mathematical methods to calculate it. One method uses a Markov chain stochastic model, in which a customer is assumed to pass through a number of states, with transitions between the states following the Markov property. Given the states of a customer and the relationships among the states, we can build Markov models that describe the properties of the customer. In these Markov models, the CLV is defined as a vector containing the CLV of a customer for each initial state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with many states, where the development of each model is based on weaknesses of the previous one. The final models can be expected to describe the real characteristics of the customers of a firm.
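
    In the Markov-chain formulation, the CLV vector solves a linear system: discounting each transition by a factor delta gives v = sum_t (delta P)^t r = (I - delta P)^(-1) r. A minimal sketch with an illustrative three-state customer model:

```python
import numpy as np

# Illustrative states: 0 = active, 1 = at risk, 2 = churned (absorbing).
P = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.50, 0.20],
              [0.0, 0.00, 1.00]])
r = np.array([100.0, 20.0, 0.0])   # expected profit per period in each state
delta = 0.9                        # one-period discount factor

v = np.linalg.solve(np.eye(3) - delta * P, r)
print(v)   # v[0] is the CLV of a customer currently in the active state
```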

  18. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6 months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  19. Exact goodness-of-fit tests for Markov chains.

    Science.gov (United States)

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.

  20. Musical Markov Chains

    Science.gov (United States)

    Volchenkov, Dima; Dawin, Jean René

    A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and to characterize a composer.
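
    Mean first passage times to a target note, mentioned at the end of the abstract, follow from a standard linear system: deleting the target row and column of the transition matrix P leaves a block Q, and the passage times solve (I - Q) m = 1. A minimal sketch with an illustrative three-note chain:

```python
import numpy as np

def mean_first_passage(P, target):
    """Mean first-passage times to `target` in an ergodic Markov chain."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return dict(zip(keep, m))

P = np.array([[0.1, 0.6, 0.3],     # illustrative 3-note transition matrix
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
print(mean_first_passage(P, target=0))
```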

  1. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) the ability to be applied to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.

  2. Flux through a Markov chain

    International Nuclear Information System (INIS)

    Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel

    2016-01-01

    Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, that we will call mass, which moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modelization of open systems whose dynamics has a Markov property.
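
    For an absorbing chain with transient-to-transient block Q and transient-to-absorbing block R, a unit of mass injected at the transient states is eventually absorbed according to N R, where N = (I - Q)^(-1) is the fundamental matrix. A minimal sketch with an illustrative chain (three transient, two absorbing states):

```python
import numpy as np

Q = np.array([[0.0, 0.5, 0.2],     # transient -> transient
              [0.1, 0.0, 0.6],
              [0.3, 0.2, 0.0]])
R = np.array([[0.3, 0.0],          # transient -> absorbing
              [0.0, 0.3],
              [0.1, 0.4]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
source = np.array([1.0, 0.0, 0.0]) # unit mass injected at transient state 0
print(source @ N @ R)              # mass ending in each absorbing state
```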

  3. Social security as Markov equilibrium in OLG models: A note

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin

    2011-01-01

    I refine and extend the Markov perfect equilibrium of the social security policy game in Forni (2005) for the special case of logarithmic utility. Under the restriction that the policy function be continuous, instead of differentiable, the equilibrium is globally well defined and its dynamics...

  4. Optimization of hospital ward resources with patient relocation using Markov chain modeling

    DEFF Research Database (Denmark)

    Andersen, Anders Reenberg; Nielsen, Bo Friis; Reinhardt, Line Blander

    2017-01-01

    ... available to the hospital. Patient flow is modeled using a homogeneous continuous-time Markov chain and optimization is conducted using a local search heuristic. Our model accounts for patient relocation, which has not been done analytically in the literature with similar scope. The study objective is to ensure ... are distributed. Furthermore, our heuristic is found to efficiently derive the optimal solution. Applying our model to the hospital case, we found that relocation of daily arrivals can be reduced by 11.7% by re-distributing beds that are already available to the hospital.

  5. The explicit form of the rate function for semi-Markov processes and its contractions

    Science.gov (United States)

    Sughiyama, Yuki; Kobayashi, Tetsuya J.

    2018-03-01

    We derive the explicit form of the rate function for semi-Markov processes. Here, the 'random time change trick' plays an essential role. Also, by applying the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level-2.5 rate function for Markov processes to semi-Markov cases.

  6. Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics

    International Nuclear Information System (INIS)

    Pate, E.B.

    1986-01-01

    This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and quantization which is applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst-case Markov-chain process, upon which a robust filter/predictor design is based.

  7. Monotone measures of ergodicity for Markov chains

    Directory of Open Access Journals (Sweden)

    J. Keilson

    1998-01-01

    Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clear for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.

  8. Markov chain model helps predict pitting corrosion depth and rate in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F.; Velazquez, J.C.; Hallen, J. M. [ESIQIE, Instituto Politecnico Nacional, Mexico D. F. (Mexico); Esquivel-Amezcua, A. [PEMEX PEP Region Sur, Villahermosa, Tabasco (Mexico); Valor, A. [Universidad de la Habana, Vedado, La Habana (Cuba)

    2010-07-01

    Recent reports place pipeline corrosion costs in North America at seven billion dollars per year. Pitting corrosion causes the highest percentage of failures among corrosion mechanisms. This has motivated multiple modelling studies focused on pitting corrosion of underground pipelines. In this study, a continuous-time, non-homogeneous pure birth Markov chain serves to model external pitting corrosion in buried pipelines. The analytical solution of Kolmogorov's forward equations for this type of Markov process gives the transition probability function in a discrete space of pit depths. The transition probability function can be completely identified by making a correlation between the stochastic pit depth mean and the deterministic mean obtained experimentally. The model proposed in this study can be applied to pitting corrosion data from repeated in-line pipeline inspections. Case studies presented in this work show how pipeline inspection and maintenance planning can be improved by using the proposed Markovian model for pitting corrosion.

  9. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    Science.gov (United States)

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.

  10. Markov chain aggregation and its applications to combinatorial reaction networks.

    Science.gov (United States)

    Ganguly, Arnab; Petrov, Tatjana; Koeppl, Heinz

    2014-09-01

    We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also characterizes that the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section is devoted to illustrate how the developed theory aids in reducing CTMC models of biochemical systems particularly in connection to protein-protein interactions. We assume that the model is written by a biologist in form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing sufficient reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.

  11. Tornadoes and related damage costs: statistical modeling with a semi-Markov approach

    OpenAIRE

    Corini, Chiara; D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio; Manca, Raimondo

    2015-01-01

    We propose a statistical approach to tornadoes modeling for predicting and simulating occurrences of tornadoes and accumulated cost distributions over a time interval. This is achieved by modeling the tornadoes intensity, measured with the Fujita scale, as a stochastic process. Since the Fujita scale divides tornadoes intensity into six states, it is possible to model the tornadoes intensity by using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reprod...

  12. Markov Tail Chains

    OpenAIRE

    janssen, Anja; Segers, Johan

    2013-01-01

    The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In ...

  13. Weighted-indexed semi-Markov models for modeling financial returns

    International Nuclear Information System (INIS)

    D’Amico, Guglielmo; Petroni, Filippo

    2012-01-01

    In this paper we propose a new stochastic model based on a generalization of semi-Markov chains for studying the high frequency price dynamics of traded stocks. We assume that the financial returns are described by a weighted-indexed semi-Markov chain model. We show, through Monte Carlo simulations, that the model is able to reproduce important stylized facts of financial time series such as the first-passage-time distributions and the persistence of volatility. The model is applied to data from the Italian and German stock markets from 1 January 2007 until the end of December 2010. (paper)

  14. An Application of Graph Theory in Markov Chains Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Pavel Skalny

    2014-01-01

    Full Text Available The paper presents a reliability analysis which was carried out for an industrial company. The aim of the paper is to present the usage of discrete-time Markov chains and the flow-in-network approach. Discrete-time Markov chains, a well-known method of stochastic modelling, describe the issue. The method is suitable for many systems occurring in practice where we can easily distinguish a number of distinct states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network. The maximal flow in the network corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected amount of manufactured products for the given time period.
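
    The combination described above can be sketched with networkx: compute the maximum flow (Ford-Fulkerson style) for the production network in each system state and weight the flows by the Markov chain's stationary distribution. The network, capacities and transition matrix are illustrative:

```python
import networkx as nx
import numpy as np

G = nx.DiGraph()                   # production network, illustrative capacities
G.add_edge("s", "m1", capacity=10); G.add_edge("s", "m2", capacity=8)
G.add_edge("m1", "t", capacity=7);  G.add_edge("m2", "t", capacity=8)
flow_ok, _ = nx.maximum_flow(G, "s", "t")

G_deg = G.copy()
G_deg["s"]["m1"]["capacity"] = 0   # system state with machine 1 down
flow_deg, _ = nx.maximum_flow(G_deg, "s", "t")

P = np.array([[0.95, 0.05],        # illustrative 2-state transition matrix
              [0.60, 0.40]])
evals, evecs = np.linalg.eig(P.T)  # stationary distribution: eigenvalue 1
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

print(pi @ np.array([flow_ok, flow_deg]))  # expected production per period
```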

  15. Using multi-state markov models to identify credit card risk

    Directory of Open Access Journals (Sweden)

    Daniel Evangelista Régis

    2016-06-01

    Full Text Available Abstract The main interest of this work is to analyze the application of multi-state Markov models to evaluate credit card risk by investigating the characteristics of different state transitions in client-institution relationships over time, thereby generating score models for various purposes. We also used logistic regression models to compare the results with those obtained using multi-state Markov models. The models were applied to an actual database of a Brazilian financial institution. In this application, multi-state Markov models performed better than logistic regression models in predicting default risk, and logistic regression models performed better in predicting cancellation risk.

  16. On using continuous Markov processes for unit service life evaluation taking as an example the RBMK-1000 gate-regulating valve

    International Nuclear Information System (INIS)

    Klemin, A.I.; Emel'yanov, V.S.; Rabchun, A.V.

    1984-01-01

    A technique is suggested for estimating service life indices of equipment, based on describing the process of equipment ageing by a continuous Markov diffusion process. It is noted that a number of problems of estimating durability indices of products reduce to problems of estimating characteristics of the time of the first attainment of a preset boundary (or boundaries) by a random process describing the ageing of a product. The methods of statistical estimation of the drift and diffusion coefficients in the continuous Markov diffusion process are considered, and formulae for their point and interval estimates are presented. A special description is given for the case of a stationary process, including determination of the mathematical expectation and dispersion of the time of the first attainment of a boundary (or boundaries). The method of numerical simulation of the diffusion process with constant drift and diffusion coefficients is also described; results obtained on the basis of such a simulation are discussed. An example of using the suggested technique for a quantitative estimate of the service life of the RBMK-1000 gate-regulating valve is given.
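
    The first-attainment characteristics discussed here can also be estimated by direct Monte Carlo simulation of the diffusion, as the abstract suggests. A minimal Euler-Maruyama sketch with illustrative drift, diffusion and wear-limit values:

```python
import numpy as np

def first_passage_times(mu, sigma, x0, barrier,
                        n_paths=2000, dt=0.02, t_max=100.0, seed=3):
    """Monte Carlo first-passage times of a drift-diffusion ageing process
    X <- X + mu*dt + sigma*sqrt(dt)*Z to a fixed wear barrier."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    hit = np.full(n_paths, np.nan)
    t = 0.0
    for _ in range(int(t_max / dt)):
        t += dt
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        hit[np.isnan(hit) & (x >= barrier)] = t
    return hit

tau = first_passage_times(mu=0.05, sigma=0.2, x0=0.0, barrier=1.0)
print(np.nanmean(tau), np.nanvar(tau))  # service-life mean and dispersion
```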

  17. On Probabilistic Automata in Continuous Time

    DEFF Research Database (Denmark)

    Eisentraut, Christian; Hermanns, Holger; Zhang, Lijun

    2010-01-01

    We develop a compositional behavioural model that integrates a variation of probabilistic automata into a conservative extension of interactive Markov chains. The model is rich enough to embody the semantics of generalised stochastic Petri nets. We define strong and weak bisimulations and discuss...

  18. Generalized Markov branching models

    OpenAIRE

    Li, Junping

    2005-01-01

    In this thesis, we first considered a modified Markov branching process incorporating both state-independent immigration and resurrection. After establishing the criteria for regularity and uniqueness, explicit expressions for the extinction probability and mean extinction time are presented. The criteria for recurrence and ergodicity are also established. In addition, an explicit expression for the equilibrium distribution is presented. We then moved on to investigate the basic proper...

  19. Martingales and Markov chains solved exercises and elements of theory

    CERN Document Server

    Baldi, Paolo; Priouret, Pierre

    2002-01-01

    CONDITIONAL EXPECTATIONS: Introduction; Definition and First Properties; Conditional Expectations and Conditional Laws; Exercises; Solutions. STOCHASTIC PROCESSES: General Facts; Stopping Times; Exercises; Solutions. MARTINGALES: First Definitions; First Properties; The Stopping Theorem; Maximal Inequalities; Square Integral Martingales; Convergence Theorems; Regular Martingales; Exercises; Problems; Solutions. MARKOV CHAINS: Transition Matrices, Markov Chains; Construction and Existence; Computations on the Canonical Chain; Potential Operators; Passage Problems; Recurrence, Transience; Recurrent Irreducible Chains; Periodicity; Exercises; Problems; Solutions.

  20. A reward semi-Markov process with memory for wind speed modeling

    Science.gov (United States)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

    ...-order Markov chains with different numbers of states, and the Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. In order to assess this issue we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and a backtesting procedure is used to compare results on first- and second-order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.

  1. Fields From Markov Chains

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.

  2. Probabilistic forecasting of wind power at the minute time-scale with Markov-switching autoregressive models

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2008-01-01

    Better modelling and forecasting of very short-term power fluctuations at large offshore wind farms may significantly enhance control and management strategies of their power output. The paper introduces a new methodology for modelling and forecasting such very short-term fluctuations. The proposed methodology is based on a Markov-switching autoregressive model with time-varying coefficients. An advantage of the method is that one can easily derive full predictive densities. The quality of this methodology is demonstrated from the test case of 2 large offshore wind farms in Denmark. The exercise consists in a 1-step ahead forecasting exercise on time series of wind generation with a time resolution of 10 minutes. The quality of the introduced forecasting methodology and its interest for better understanding power fluctuations are finally discussed.

  3. rEMM: Extensible Markov Model for Data Stream Clustering in R

    Directory of Open Access Journals (Sweden)

    Michael Hahsler

    2010-10-01

    Full Text Available Clustering streams of continuously arriving data has become an important application of data mining in recent years and efficient algorithms have been proposed by several researchers. However, clustering alone neglects the fact that data in a data stream is not only characterized by the proximity of data points which is used by clustering, but also by a temporal component. The extensible Markov model (EMM adds the temporal component to data stream clustering by superimposing a dynamically adapting Markov chain. In this paper we introduce the implementation of the R extension package rEMM which implements EMM and we discuss some examples and applications.

  4. Study of the seismic activity in central Ionian Islands via semi-Markov modelling

    Science.gov (United States)

    Pertsinidou, Christina Elisavet; Tsaklidis, George; Papadimitriou, Eleftheria

    2017-06-01

    The seismicity of the central Ionian Islands ( M ≥ 5.2, 1911-2014) is studied via a semi-Markov chain which is investigated in terms of the destination probabilities (occurrence probabilities). The interevent times are considered to follow geometric (in which case the semi-Markov model reduces to a Markov model) or Pareto distributions. The study of the destination probabilities is useful for forecasting purposes because they can provide the more probable earthquake magnitude and occurrence time. Using the first half of the data sample for the estimation procedure and the other half for forecasting purposes it is found that the time windows obtained by the destination probabilities include 72.9% of the observed earthquake occurrence times (for all magnitudes) and 71.4% for the larger ( M ≥ 6.0) ones.

  5. Two Person Zero-Sum Semi-Markov Games with Unknown Holding Times Distribution on One Side: A Discounted Payoff Criterion

    International Nuclear Information System (INIS)

    Minjarez-Sosa, J. Adolfo; Luque-Vasquez, Fernando

    2008-01-01

    This paper deals with two-person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown to one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount-optimal pair of strategies.

  6. Robust Estimation for Discrete Markov System with Time-Varying Delay and Missing Measurements

    Directory of Open Access Journals (Sweden)

    Jia You

    2013-01-01

    Full Text Available This paper addresses the ℋ∞ filtering problem for time-delayed Markov jump systems (MJSs with intermittent measurements. Within network environment, missing measurements are taken into account, since the communication channel is supposed to be imperfect. A Bernoulli process is utilized to describe the phenomenon of the missing measurements. The original system is transformed into an input-output form consisting of two interconnected subsystems. Based on scaled small gain (SSG theorem and proposed Lyapunov-Krasovskii functional (LKF, the scaled small gains of the subsystems are analyzed, respectively. New conditions for the existence of the ℋ∞ filters are established, and the corresponding ℋ∞ filter design scheme is proposed. Finally, a simulation example is provided to demonstrate the effectiveness of the proposed approach.

  7. Reliability analysis of Markov history-dependent repairable systems with neglected failures

    International Nuclear Information System (INIS)

    Du, Shijia; Zeng, Zhiguo; Cui, Lirong; Kang, Rui

    2017-01-01

    Markov history-dependent repairable systems refer to the Markov repairable systems in which some states are changeable and dependent on recent evolutional history of the system. In practice, many Markov history-dependent repairable systems are subjected to neglected failures, i.e., some failures do not affect system performances if they can be repaired promptly. In this paper, we develop a model based on the theory of aggregated stochastic processes to describe the history-dependent behavior and the effect of neglected failures on the Markov history-dependent repairable systems. Based on the developed model, instantaneous and steady-state availabilities are derived to characterize the reliability of the system. Four reliability-related time distributions, i.e., distribution for the k th working period, distribution for the k th failure period, distribution for the real working time in an effective working period, distribution for the neglected failure time in an effective working period, are also derived to provide a more comprehensive description of the system's reliability. Thanks to the power of the theory of aggregated stochastic processes, closed-form expressions are obtained for all the reliability indexes and time distributions. Finally, the developed indexes and analysis methods are demonstrated by a numerical example. - Highlights: • Markovian history-dependent repairable systems with neglected failures is modeled. • Aggregated stochastic processes are used to derive reliability indexes and time distributions. • Closed-form expressions are derived for the considered indexes and distributions.

  8. Quadratic Variation by Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume

    We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...

  9. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jaco; Stoelinga, Mariëlle Ida Antoinette

    2016-01-01

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. As expected, the state space explosion threatens the analysability of these models. We therefore introduce confluence reduction for Markov automata, a powerful reduction

  10. Markov Switching Autoregressive Modelling

    OpenAIRE

    Ariyani, Fiqria Devi; Warsito, Budi; Yasin, Hasbi

    2014-01-01

    The transition from depreciation to appreciation of an exchange rate is a regime switch that is ignored by classical time series models such as ARIMA, ARCH, or GARCH. Therefore, economic variables are modeled by Markov Switching Autoregressive (MSAR) models, which take regime switching into account. MLE is not directly applicable to parameter estimation because the regime is an unobservable variable, so filtering and smoothing processes are applied to obtain the regime probabilities of the observations. Using this model, tran...

  11. Developing a statistically powerful measure for quartet tree inference using phylogenetic identities and Markov invariants.

    Science.gov (United States)

    Sumner, Jeremy G; Taylor, Amelia; Holland, Barbara R; Jarvis, Peter D

    2017-12-01

    Recently there has been renewed interest in phylogenetic inference methods based on phylogenetic invariants, alongside the related Markov invariants. Broadly speaking, both these approaches give rise to polynomial functions of sequence site patterns that, in expectation value, either vanish for particular evolutionary trees (in the case of phylogenetic invariants) or have well understood transformation properties (in the case of Markov invariants). While both approaches have been valued for their intrinsic mathematical interest, it is not clear how they relate to each other, and to what extent they can be used as practical tools for inference of phylogenetic trees. In this paper, by focusing on the special case of binary sequence data and quartets of taxa, we are able to view these two different polynomial-based approaches within a common framework. To motivate the discussion, we present three desirable statistical properties that we argue any invariant-based phylogenetic method should satisfy: (1) sensible behaviour under reordering of input sequences; (2) stability as the taxa evolve independently according to a Markov process; and (3) explicit dependence on the assumption of a continuous-time process. Motivated by these statistical properties, we develop and explore several new phylogenetic inference methods. In particular, we develop a statistically bias-corrected version of the Markov invariants approach which satisfies all three properties. We also extend previous work by showing that the phylogenetic invariants can be implemented in such a way as to satisfy property (3). A simulation study shows that, in comparison to other methods, our new proposed approach based on bias-corrected Markov invariants is extremely powerful for phylogenetic inference. The binary case is of particular theoretical interest as-in this case only-the Markov invariants can be expressed as linear combinations of the phylogenetic invariants. A wider implication of this is that, for

  12. Classification Using Markov Blanket for Feature Selection

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Luo, Jian

    2009-01-01

    Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method of Markov blanket induction for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using Markov blanket induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance.

  13. Consistency and refinement for Interval Markov Chains

    DEFF Research Database (Denmark)

    Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel

    2012-01-01

    Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, form the basis of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...

  14. The semi-Markov process. Generalizations and calculation rules for application in the analysis of systems

    International Nuclear Information System (INIS)

    Hirschmann, H.

    1983-06-01

    The consequences of the basic assumptions of the semi-Markov process, as defined from a homogeneous renewal process with a stationary Markov condition, are reviewed. The notion of the semi-Markov process is generalized by redefining it from a nonstationary Markov renewal process. For both the nongeneralized and the generalized case, a representation of the first-order conditional state probabilities is derived in terms of the transition probabilities of the Markov renewal process. Some useful calculation rules (regeneration rules) are derived for the conditional state probabilities of the semi-Markov process. Compared to the semi-Markov process in its usual definition, the generalized process allows the analysis of a larger class of systems. For instance, systems with arbitrarily distributed lifetimes of their components can be described. It is also possible to describe systems which are modified over time by external forces or manipulations. (Auth.)
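
    As an illustration of the kind of process described above, the minimal Python sketch below simulates a semi-Markov trajectory in which sojourn times are Weibull distributed rather than exponential; the embedded transition matrix and the Weibull parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative embedded Markov chain over 3 states (rows sum to 1).
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

# Weibull sojourn-time parameters per state (shape, scale) -- the point
# of the semi-Markov generalization is that holding times need not be
# exponential. Values here are assumptions.
shape = np.array([1.5, 0.8, 2.0])
scale = np.array([1.0, 2.0, 0.5])

def simulate(t_max, state=0):
    """Return (times, states) of one semi-Markov trajectory up to t_max."""
    t, times, states = 0.0, [0.0], [state]
    while t < t_max:
        t += scale[state] * rng.weibull(shape[state])  # sojourn in `state`
        state = int(rng.choice(3, p=P[state]))         # embedded-chain jump
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

times, states = simulate(20.0)
print(list(zip(times.round(2), states))[:8])
```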

  15. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on the Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the entire analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted. It tests the model with K+1 states (where K is the state number of the best model) if its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  16. The Impact of Short-Sale Constraints on Asset Allocation Strategies via the Backward Markov Chain Approximation Method

    OpenAIRE

    Carl Chiarella; Chih-Ying Hsiao

    2005-01-01

    This paper considers an asset allocation strategy over a finite period under investment uncertainty and short-sale constraints as a continuous time stochastic control problem. Investment uncertainty is characterised by a stochastic interest rate and inflation risk. If there are no short-sale constraints, the optimal asset allocation strategy can be solved analytically. We consider several kinds of short-sale constraints and employ the backward Markov chain approximation method to explore the ...

  17. Lithofacies cyclicity determination in the Guaduas Formation (Colombia) using Markov chains

    Directory of Open Access Journals (Sweden)

    Jorge Eliecer Mariño Martinez

    2016-07-01

    Statistical embedded Markov chain processes were used to analyze facies transitions and to determine the stacking pattern of the lithofacies of the Guaduas Formation. Twelve lithofacies were found and characterized based on lithology and sedimentary structures in four stratigraphic sections. The findings were compared with a previous assemblage of lithofacies, interpretations of sedimentary environments, and depositional systems. As a result, four depositional systems were established. The statistical analyses of facies transitions show that tidal facies are prevalent in the Socota section, especially in the upper part, whereas in the Sogamoso, Umbita and Peñas de Sutatausa sections, fluvial facies are prevalent in the upper part of the sections and follow a regressive sequence with more continental deposits towards the top. For each of these sections the Markov chain transition matrices illustrate a strong interaction between tidal facies and fluvial facies, especially in the Peñas de Sutatausa matrix, where facies 6, made up of tidal deposits, appears several times. From the facies model and Markov chain analyses, it is evident that the Guaduas Formation is a cyclic sequence in which the Markov facies repetitions are consistent with the lithofacies analyses conducted in previous stratigraphic studies. The results reveal that the Markov chain statistical process can be used to predict stratigraphy in order to correlate contiguous geologically unexplored areas in the Guaduas Formation, where much work relating to correlation and the continuity of coal beds has yet to be done.
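
    For readers unfamiliar with the mechanics, the hedged Python sketch below shows the basic embedded Markov chain computation on an invented facies sequence (not the Guaduas data): transitions between distinct facies are counted, row-normalized into probabilities, and compared against a simplified independent-trials expectation.

```python
import numpy as np

# Hypothetical upward succession of facies codes logged in one section
# (purely illustrative -- not the Guaduas measurements).
section = [0, 1, 0, 2, 1, 0, 1, 2, 0, 1, 2, 1, 0, 2, 1, 0]
k = 3

counts = np.zeros((k, k))
for a, b in zip(section[:-1], section[1:]):
    if a != b:                       # embedded chain: self-transitions ignored
        counts[a, b] += 1

# Observed transition probabilities (row-normalized counts).
P = counts / counts.sum(axis=1, keepdims=True)
print("transition probabilities:\n", np.round(P, 2))

# Simplified independent-trials expectation for comparison; rigorous
# embedded-chain tests additionally correct for the forbidden diagonal.
expected = counts.sum(axis=1, keepdims=True) * counts.sum(axis=0) / counts.sum()
print("observed minus expected:\n", np.round(counts - expected, 2))
```

    Positive entries in the difference matrix flag transitions that occur more often than chance, which is the usual evidence for cyclicity in this kind of analysis.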

  18. Noise can speed convergence in Markov chains.

    Science.gov (United States)

    Franzke, Brandon; Kosko, Bart

    2011-10-01

    A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.

  19. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.

  20. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...

  1. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  2. Optimal Number of States in Hidden Markov Models and its ...

    African Journals Online (AJOL)

    In this paper, Hidden Markov Model is applied to model human movements as to .... emit either discrete information or a continuous data derived from a Probability .... For each hidden state in the test set, the probability = ... by applying the Kullback-Leibler distance (Juang & Rabiner, 1985) which ..... One Size Does Not Fit.

  3. Extracting Markov Models of Peptide Conformational Dynamics from Simulation Data.

    Science.gov (United States)

    Schultheis, Verena; Hirschberger, Thomas; Carstens, Heiko; Tavan, Paul

    2005-07-01

    A high-dimensional time series obtained by simulating a complex and stochastic dynamical system (like a peptide in solution) may code an underlying multiple-state Markov process. We present a computational approach to most plausibly identify and reconstruct this process from the simulated trajectory. Using a mixture of normal distributions we first construct a maximum likelihood estimate of the point density associated with this time series and thus obtain a density-oriented partition of the data space. This discretization allows us to estimate the transfer operator as a matrix of moderate dimension at sufficient statistics. A nonlinear dynamics involving that matrix and, alternatively, a deterministic coarse-graining procedure are employed to construct respective hierarchies of Markov models, from which the model most plausibly mapping the generating stochastic process is selected by consideration of certain observables. Within both procedures the data are classified in terms of prototypical points, the conformations, marking the various Markov states. As a typical example, the approach is applied to analyze the conformational dynamics of a tripeptide in solution. The corresponding high-dimensional time series has been obtained from an extended molecular dynamics simulation.

  4. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization.

    Science.gov (United States)

    Stifter, Cynthia A; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age, using hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to the two-month inoculation, whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high-intensity crying to no crying, with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad becomes more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.

  5. ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS

    Directory of Open Access Journals (Sweden)

    Mohammad Mirabi

    2012-01-01

    In this research, a Markov analysis of acceptance sampling plans in a single stage and in two stages is proposed, based on the quality of the items inspected. At each stage of this policy, if the number of defective items in a sample of inspected items is more than the upper threshold, the batch is rejected; the batch is accepted if the number of defective items is less than the lower threshold. When the number of defective items falls between the upper and lower thresholds, the decision-making process continues: further samples are collected and inspected. The primary objective is to determine the optimal values of the upper and lower thresholds using a Markov process, so as to minimise the total cost associated with the batch acceptance policy. A solution method is presented, along with a numerical demonstration of the application of the proposed methodology.

  6. Analysis and design of Markov jump systems with complex transition probabilities

    CERN Document Server

    Zhang, Lixian; Shi, Peng; Zhu, Yanzheng

    2016-01-01

    The book addresses control issues such as stability analysis, control synthesis and filter design of Markov jump systems with the above three types of TPs, and is thus divided into three parts. Part I studies Markov jump systems with partially unknown TPs. Different methodologies with different degrees of conservatism for the basic stability and stabilization problems are developed and compared. The problems of state estimation, the control of systems with time-varying delays, and the case involving both partially unknown TPs and uncertain TPs in a composite way are then also tackled. Part II deals with Markov jump systems with piecewise homogeneous TPs. Methodologies that can effectively handle control problems in this scenario are developed, including one coping with the asynchronous switching phenomenon between the currently activated system mode and the controller/filter to be designed. Part III focuses on Markov jump systems with memory TPs. The concept of σ-mean square stability is propo...

  7. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    Science.gov (United States)

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-08-29

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing the concentrations of the various chemical species by real numbers, mainly based on differential equations and the chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simpler and less detailed description of the biological system; it efficiently describes stable state identification but remains inconvenient for describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe the dynamical behavior of biological processes more accurately, as it follows the evolution of the concentrations or activities of chemical species as a function of time, but it requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential
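
    The core mechanism of such a framework can be sketched in a few lines: a Gillespie simulation of a continuous-time Markov jump process on a Boolean state space. The two-node network and its transition rates below are invented for illustration; this is not the authors' rule language or implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-node Boolean network: A activates B, B inhibits A.
# Each entry is (rate, next_state); the rates are illustrative assumptions.
def transitions(state):
    A, B = state
    moves = []
    if A == 0 and B == 0: moves.append((1.0, (1, B)))  # A switches on
    if A == 1 and B == 1: moves.append((1.0, (0, B)))  # B represses A
    if B == 0 and A == 1: moves.append((2.0, (A, 1)))  # A activates B
    if B == 1:            moves.append((0.5, (A, 0)))  # B decays
    return moves

def gillespie(state, t_max):
    t, traj = 0.0, [(0.0, state)]
    while t < t_max:
        moves = transitions(state)
        if not moves:
            break                              # absorbing Boolean state
        total = sum(rate for rate, _ in moves)
        t += rng.exponential(1.0 / total)      # exponential waiting time
        r = rng.random() * total               # pick a move proportionally
        for rate, nxt in moves:
            r -= rate
            if r <= 0:
                state = nxt
                break
        traj.append((t, state))
    return traj

for t, s in gillespie((0, 0), 5.0)[:10]:
    print(f"t={t:.3f}  state={s}")
```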

  8. Saddlepoint expansions for sums of Markov dependent variables on a continuous state space

    DEFF Research Database (Denmark)

    Jensen, J.L.

    1991-01-01

    Based on the conjugate kernel studied in Iscoe et al. (1985) we derive saddlepoint expansions for either the density or distribution function of a sum f(X1)+...+f(Xn), where the Xi's constitute a Markov chain. The chain is assumed to satisfy a strong recurrence condition which makes the results...... here very similar to the classical results for i.i.d. variables. In particular we establish also conditions under which the expansions hold uniformly over the range of the saddlepoint. Expansions are also derived for sums of the form f(X1, X0)+f(X2, X1)+...+f(Xn, Xn-1) although the uniformity result...

  9. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar's transformation. Next, the optimal state sequence of the equivalent first-order model is recovered with the existing Viterbi algorithm for first-order hidden Markov models. Finally, the optimal state sequence of the high-order model is inferred from the optimal state sequence of the equivalent first-order model. This method provides a unified algorithmic framework for decoding hidden Markov models of any order.
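
    As a reference point for the decoding step, the sketch below gives a minimal log-domain Viterbi implementation for a first-order HMM in Python; Hadar's transformation of a high-order model into its first-order equivalent is not shown, and all model parameters are illustrative.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path of a first-order HMM (log domain).

    pi: initial probabilities (n,), A: transition matrix (n, n),
    B: emission matrix (n, m), obs: observation indices (length T).
    """
    n, T = len(pi), len(obs)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    psi = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA   # scores[i, j]: best path ending in j via i
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):        # backtrack through stored pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Illustrative 2-state, 2-symbol model (e.g. the first-order equivalent
# of a higher-order HMM after state-space expansion).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 0], pi, A, B))
```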

  10. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  11. Finite Markov processes and their applications

    CERN Document Server

    Iosifescu, Marius

    2007-01-01

    A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models.The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic ch

  12. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    DEFF Research Database (Denmark)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben

    2017-01-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo......-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and by decreasing the acquisition ranges, the correlations...

  13. Perturbation theory for Markov chains via Wasserstein distance

    NARCIS (Netherlands)

    Rudolf, Daniel; Schweizer, Nikolaus

    2017-01-01

    Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. We prove powerful and flexible bounds on the distance of the nth step distributions of two Markov chains

  14. Irreversible Markov chains in spin models: Topological excitations

    Science.gov (United States)

    Lei, Ze; Krauth, Werner

    2018-01-01

    We analyze the convergence of the irreversible event-chain Monte Carlo algorithm for continuous spin models in the presence of topological excitations. In the two-dimensional XY model, we show that the local nature of the Markov-chain dynamics leads to slow decay of vortex-antivortex correlations while spin waves decorrelate very quickly. Using a Fréchet description of the maximum vortex-antivortex distance, we quantify the contributions of topological excitations to the equilibrium correlations, and show that they vary from a dynamical critical exponent z ∼ 2 at the critical temperature to z ∼ 0 in the limit of zero temperature. We confirm the event-chain algorithm's fast relaxation (corresponding to z = 0) of spin waves in the harmonic approximation to the XY model. Mixing times (describing the approach towards equilibrium from the least favorable initial state) however remain much larger than equilibrium correlation times at low temperatures. We also describe the respective influence of topological monopole-antimonopole excitations and of spin waves on the event-chain dynamics in the three-dimensional Heisenberg model.

  15. Approximate quantum Markov chains

    CERN Document Server

    Sutter, David

    2018-01-01

    This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...

  16. RESEARCH ABSORBING STATES OF THE SYSTEM USING MARKOV CHAINS AND FUNDAMENTAL MATRIX

    Directory of Open Access Journals (Sweden)

    Тетяна Мефодіївна ОЛЕХ

    2016-02-01

    The article discusses the use of Markov chains to study models that reflect the essential properties of systems, including methods of measuring project parameters and assessing their effectiveness. The system under study is decomposed into certain discrete states, and a diagram of transitions between these states is created. The specific behaviour of various objects is displayed by homogeneous Markov chains with discrete states and discrete time, determined by the method of calculating the transition probabilities. A model of success criteria with an absorbing system state, universal for all projects, is proposed, and the transition matrix is partitioned into submatrices. The variation of the elements of the submatrix Q^n with growing n is linked to the definition of important quantitative characteristics of absorbing chains: (1) the probability of reaching any given absorbing state; (2) the mean number of steps needed to reach an absorbing state; and (3) the mean time that the system spends in each state before it irreversibly hits an absorbing state. A fundamental matrix is built that allows calculating these different characteristics of the system. The fundamental matrix of the modeled absorbing Markov chain gives a forecast of the behavior of the system in the future regardless of the absolute value of the time elapsed from the starting point; this property illustrates that the Markov process it characterizes is a process without aftereffect.
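
    The quantities listed above follow directly from the fundamental matrix N = (I - Q)^{-1}. A minimal Python sketch on an invented four-state chain with two transient and two absorbing states:

```python
import numpy as np

# Transition matrix in canonical form: states 0-1 transient,
# states 2-3 absorbing (illustrative numbers only).
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.2, 0.4, 0.1, 0.3],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:2, :2]                        # transient -> transient block
R = P[:2, 2:]                        # transient -> absorbing block

N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix: expected visits
t = N @ np.ones(2)                   # mean number of steps before absorption
B = N @ R                            # probabilities of each absorbing state

print("N =\n", np.round(N, 3))
print("mean steps to absorption:", np.round(t, 3))
print("absorption probabilities:\n", np.round(B, 3))
```

    The entries of N give the mean time spent in each transient state before absorption, t gives the mean number of steps, and the rows of B give the probability of ending in each absorbing state, i.e. the three characteristics enumerated in the abstract.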

  17. Description of quantum-mechanical motion by using the formalism of non-Markov stochastic process

    International Nuclear Information System (INIS)

    Skorobogatov, G.A.; Svertilov, S.I.

    1999-01-01

    The principal possibilities of mathematical modeling of quantum-mechanical motion by the theory of real stochastic processes are considered. The set of equations corresponding to the simplest case of a two-level system undergoing transitions under the influence of an electromagnetic field is obtained. It is shown that quantum-mechanical processes are purely discrete processes of non-Markovian type. They are continuous processes in the space of probability amplitudes and possess the property of quantum Markovity. The formulation of quantum mechanics in terms of the theory of stochastic processes is necessary for its generalization to small space-time intervals.

  18. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
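
    As an illustration of the selection criteria mentioned above, the following Python sketch computes the log-likelihood of a discrete HMM with the scaled forward algorithm and derives AIC and BIC from it; the model and the observation sequence are toy assumptions.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """HMM log-likelihood via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha = alpha / alpha.sum()
    return ll

def aic_bic(ll, n_params, n_obs):
    return 2 * n_params - 2 * ll, n_params * np.log(n_obs) - 2 * ll

# Illustrative 2-state, 2-symbol model scored on a toy sequence.
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 1, 0, 1, 1, 1, 0]

ll = log_likelihood(obs, pi, A, B)
# Free parameters: (n-1) initial + n(n-1) transition + n(m-1) emission.
k = 1 + 2 + 2
aic, bic = aic_bic(ll, k, len(obs))
print("logL=%.3f  AIC=%.3f  BIC=%.3f" % (ll, aic, bic))
```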

  19. Use of Markov chains for forecasting labor requirements in black coal mines

    Energy Technology Data Exchange (ETDEWEB)

    Penar, L.; Przybyla, H.

    1987-01-01

    Increasing mining depth, deterioration of mining conditions and technology development are causes of changes in labor requirements. In mines with a stable coal output these changes are in most cases of a qualitative character; in mines with an increasing or decreasing coal output they are of a quantitative character. Methods for forecasting personnel needs, in particular professional requirements, are discussed. Quantitative and qualitative changes are accurately described by heterogeneous Markov chains. A structure consisting of interdependent variables is the subject of a forecast; the changes that occur within this structure over time units are the subject of investigation. For a homogeneous Markov chain the probabilities of a transition from the i-state to the j-state are determined, these probabilities being time independent. For a heterogeneous Markov chain the probabilities of a transition from the i-state to the j-state are time dependent. The method was developed for the ODRA 1325 computers. 8 refs.

  20. Geometric allocation approaches in Markov chain Monte Carlo

    International Nuclear Information System (INIS)

    Todo, S; Suwa, H

    2013-01-01

    The Markov chain Monte Carlo method is a versatile tool in statistical physics for evaluating multi-dimensional integrals numerically. For the method to work effectively, we must consider the following key issues: the choice of ensemble, the selection of candidate states, the optimization of the transition kernel, and the algorithm for choosing a configuration according to the transition probabilities. We show that unconventional approaches based on the geometric allocation of probabilities or weights can improve the dynamics and scaling of the Monte Carlo simulation in several respects. In particular, the approach using an irreversible kernel can reduce or sometimes completely eliminate the rejection of trial moves in the Markov chain. We also discuss how the space-time interchange technique together with Walker's method of aliases can reduce the computational time, especially in cases where the number of candidates is large, such as models with long-range interactions
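
    For context, the sketch below implements the conventional reversible Metropolis baseline on a four-state target and tracks the rejection rate, i.e. the quantity that the geometric-allocation and irreversible-kernel approaches described above aim to reduce; it does not implement those approaches themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized target weights of a 4-state system (illustrative).
w = np.array([1.0, 2.0, 3.0, 4.0])

def metropolis(n_steps, state=0):
    rejects, counts = 0, np.zeros(4)
    for _ in range(n_steps):
        proposal = int(rng.integers(4))            # uniform candidate selection
        if rng.random() < min(1.0, w[proposal] / w[state]):
            state = proposal                       # accept the trial move
        else:
            rejects += 1                           # rejected trial move
        counts[state] += 1
    return counts / n_steps, rejects / n_steps

dist, rej = metropolis(100_000)
print("empirical:", np.round(dist, 3), " target:", np.round(w / w.sum(), 3))
print("rejection rate: %.3f" % rej)
```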

  1. Zipf exponent of trajectory distribution in the hidden Markov model

    Science.gov (United States)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

    This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.

  2. Zipf exponent of trajectory distribution in the hidden Markov model

    International Nuclear Information System (INIS)

    Bochkarev, V V; Lerner, E Yu

    2014-01-01

    This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.

  3. Probabilistic Reachability for Parametric Markov Models

    DEFF Research Database (Denmark)

    Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun

    2011-01-01

    Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression...

  4. A Stochastic Hybrid Systems framework for analysis of Markov reward models

    International Nuclear Information System (INIS)

    Dhople, S.V.; DeVille, L.; Domínguez-García, A.D.

    2014-01-01

    In this paper, we propose a framework to analyze Markov reward models, which are commonly used in system performability analysis. The framework builds on a set of analytical tools developed for a class of stochastic processes referred to as Stochastic Hybrid Systems (SHS). The state space of an SHS is comprised of: (i) a discrete state that describes the possible configurations/modes that a system can adopt, which includes the nominal (non-faulty) operational mode, but also those operational modes that arise due to component faults, and (ii) a continuous state that describes the reward. Discrete state transitions are stochastic, and governed by transition rates that are (in general) a function of time and the value of the continuous state. The evolution of the continuous state is described by a stochastic differential equation and reward measures are defined as functions of the continuous state. Additionally, each transition is associated with a reset map that defines the mapping between the pre- and post-transition values of the discrete and continuous states; these mappings enable the definition of impulses and losses in the reward. The proposed SHS-based framework unifies the analysis of a variety of previously studied reward models. We illustrate the application of the framework to performability analysis via analytical and numerical examples

  5. Critical Age-Dependent Branching Markov Processes and their ...

    Indian Academy of Sciences (India)

    This paper studies: (i) the long-time behaviour of the empirical distribution of age and normalized position of an age-dependent critical branching Markov process conditioned on non-extinction; and (ii) the super-process limit of a sequence of age-dependent critical branching Brownian motions.

  6. Risk Minimization for Insurance Products via F-Doubly Stochastic Markov Chains

    Directory of Open Access Journals (Sweden)

    Francesca Biagini

    2016-07-01

    We study risk-minimization for a large class of insurance contracts. Given that the individual progress in time of visiting an insurance policy's states follows an F-doubly stochastic Markov chain, we describe different state-dependent types of insurance benefits. These cover single payments at maturity, annuity-type payments and payments at the time of a transition. Based on the intensity of the F-doubly stochastic Markov chain, we provide the Galtchouk-Kunita-Watanabe decomposition for a general insurance contract and specify risk-minimizing strategies in a Brownian financial market setting. The results are further illustrated explicitly within an affine structure for the intensity.

  7. Portfolio Optimization in a Semi-Markov Modulated Market

    International Nuclear Information System (INIS)

    Ghosh, Mrinal K.; Goswami, Anindya; Kumar, Suresh K.

    2009-01-01

    We address a portfolio optimization problem in a semi-Markov modulated market. We study both the terminal expected utility optimization on a finite time horizon and the risk-sensitive portfolio optimization on finite and infinite time horizons. We obtain optimal portfolios in the relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for the finite horizon problem

  8. Study on the systematic approach of Markov modeling for dependability analysis of complex fault-tolerant features with voting logics

    International Nuclear Information System (INIS)

    Son, Kwang Seop; Kim, Dong Hoon; Kim, Chang Hwoi; Kang, Hyun Gook

    2016-01-01

    The Markov analysis is a technique for modeling system state transitions and calculating the probability of reaching various system states. While it is a proper tool for modeling complex system designs involving timing, sequencing, repair, redundancy, and fault tolerance, as the complexity or size of the system increases, so does the number of states of interest, leading to difficulty in constructing and solving the Markov model. This paper introduces a systematic approach to Markov modeling for analyzing the dependability of a complex fault-tolerant system. The method is based on the decomposition of the system into independent subsystem sets, and on the system-level failure rate and unavailability rate for the decomposed subsystems. A Markov model for the target system is easily constructed using the system-level failure and unavailability rates for the subsystems, which can be treated separately. This approach can decrease the number of states to be considered simultaneously in the target system by building Markov models of the independent subsystems stage by stage, and it results in an exact solution for the Markov model of the whole target system. To apply this method we construct a Markov model for the reactor protection system found in nuclear power plants, a system configured with four identical channels and various fault-tolerant architectures. The results show that the proposed method treats the complex architecture of the system in an efficient manner using the merits of the Markov model, such as time-dependent analysis and sequential process analysis. - Highlights: • A systematic approach to Markov modeling for system dependability analysis is proposed, based on independent subsystem sets and their failure and unavailability rates. • As an application example, we construct the Markov model for the digital reactor protection system configured with four identical and independent channels, and various fault-tolerant architectures. • The

  9. [Application of Markov model in post-marketing pharmacoeconomic evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Wang, Xin; Su, Xia; Sun, Wentao; Xie, Yanming; Wang, Yongyan

    2011-10-01

    In the post-marketing study of traditional Chinese medicine (TCM), pharmacoeconomic evaluation has an important applied significance. However, the economic literature of TCM has been unable to fully and accurately reflect the unique overall outcomes of treatment with TCM. Given the special nature of TCM itself, we recommend that the Markov model be introduced into the post-marketing pharmacoeconomic evaluation of TCM, and we explore the feasibility of applying the model. The Markov model can extrapolate the study time horizon, suits the effectiveness indicators of TCM, and provides a measurable comprehensive outcome. In addition, the Markov model can promote the development of TCM quality-of-life scales and the methodology of post-marketing pharmacoeconomic evaluation.

  10. Markov-modulated and feedback fluid queues

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.

    1998-01-01

    In the last twenty years the field of Markov-modulated fluid queues has received considerable attention. In these models a fluid reservoir receives and/or releases fluid at rates which depend on the actual state of a background Markov chain. In the first chapter of this thesis we give a short

  11. Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique

    International Nuclear Information System (INIS)

    Destainville, N

    2003-01-01

    We propose an exact technique to calculate lower bounds of spectral gaps of discrete time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the 'absorption time' of the 'Backgammon model', a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the 'contingency table problem', a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities, calculated at equilibrium
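
    On small state sets the spectral gap can be computed exactly by eigendecomposition, which is a convenient check on any analytic lower bound. A minimal Python sketch on an invented reversible birth-death chain:

```python
import numpy as np

# Reversible birth-death chain on {0, ..., 4} with constant up/down
# probabilities (illustrative).
n = 5
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = 0.3
    if i < n - 1:
        P[i, i + 1] = 0.2
    P[i, i] = 1.0 - P[i].sum()       # lazy remainder on the diagonal

# Birth-death chains are reversible, so the spectrum of P is real.
eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]
gap = 1.0 - eigvals[1]               # spectral gap = 1 - second eigenvalue
print("eigenvalues:", np.round(eigvals, 4))
print("spectral gap: %.4f  (relaxation time ~ %.2f)" % (gap, 1.0 / gap))
```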

  12. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  13. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components, while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure times of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.

  14. From Brownian Dynamics to Markov Chain: An Ion Channel Example

    KAUST Repository

    Chen, Wan

    2014-02-27

    A discrete rate theory for multi-ion channels is presented, in which the continuous dynamics of ion diffusion is reduced to transitions between Markovian discrete states. In an open channel, the ion permeation process involves three types of events: an ion entering the channel, an ion escaping from the channel, or an ion hopping between different energy minima in the channel. The continuous dynamics leads to a hierarchy of Fokker-Planck equations, indexed by channel occupancy. From these the mean escape times and splitting probabilities (denoting from which side an ion has escaped) can be calculated. By equating these with the corresponding expressions from the Markov model, one can determine the Markovian transition rates. The theory is illustrated with a two-ion one-well channel. The stationary probability of states is compared with that from both Brownian dynamics simulation and the hierarchical Fokker-Planck equations. The conductivity of the channel is also studied, and the optimal geometry maximizing ion flux is computed. © 2014 Society for Industrial and Applied Mathematics.

  15. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  16. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...

  17. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space $\mathcal{S}$, a Markov chain $(C_n, M_n)$ on the product space $\mathcal{S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain

  18. Dynamic modeling of presence of occupants using inhomogeneous Markov chains

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff; Iversen, Anne; Madsen, Henrik

    2014-01-01

    on time of day, and by use of a filter of the observations it is able to capture per-employee sequence dynamics. Simulations using this method are compared with simulations using homogeneous Markov chains and show a far better ability to reproduce key properties of the data. The method is based...... on inhomogeneous Markov chains where the transition probabilities are estimated using generalized linear models with polynomials, B-splines, and a filter of past observations as inputs. For treating the dispersion of the data series, a hierarchical model structure is used, where one model is for low presence...
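
    A hedged Python sketch of the basic idea, a two-state (absent/present) Markov chain whose transition probabilities depend on time of day, is given below; it uses simple Gaussian-bump probability profiles instead of the paper's generalized linear models with polynomials, B-splines, and observation filters, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_arrive(t):   # P(absent -> present) per 10-minute step, peaking ~8:30
    return 0.3 * np.exp(-0.5 * ((t - 8.5) / 1.0) ** 2)

def p_leave(t):    # P(present -> absent) per 10-minute step, peaking ~17:00
    return 0.02 + 0.3 * np.exp(-0.5 * ((t - 17.0) / 1.0) ** 2)

def simulate_day(step_h=1/6):
    """One day of presence from an inhomogeneous two-state Markov chain."""
    hours = np.arange(0.0, 24.0, step_h)
    present, trace = 0, []
    for t in hours:
        p = p_arrive(t) if present == 0 else 1.0 - p_leave(t)
        present = int(rng.random() < p)   # p = P(present at the next step)
        trace.append(present)
    return hours, np.array(trace)

hours, trace = simulate_day()
print("fraction of day present: %.2f" % trace.mean())
```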

  19. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  20. Monitoring Farmland Loss Caused by Urbanization in Beijing from MODIS Time Series Using Hierarchical Hidden Markov Model

    Science.gov (United States)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of farmland change process. Once the HHMM is established, a change from farmland to built-up could be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.

  1. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    Science.gov (United States)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
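
    In its simplest form, the initial MSM construction step reduces to counting transitions at a chosen lag time in a discretized trajectory and row-normalizing; the refinement against experimental measurements is not shown here. A minimal Python sketch with an invented three-state trajectory:

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag):
    """Row-stochastic MSM transition matrix from a discretized trajectory."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1                   # count transitions at the lag time
    C += 1e-8                          # guard against empty rows
    return C / C.sum(axis=1, keepdims=True)

# Toy discretized trajectory hopping between 3 conformational states,
# generated from an assumed ground-truth transition matrix.
rng = np.random.default_rng(4)
true_T = np.array([[0.90, 0.08, 0.02],
                   [0.10, 0.85, 0.05],
                   [0.05, 0.15, 0.80]])
dtraj, s = [], 0
for _ in range(5000):
    s = int(rng.choice(3, p=true_T[s]))
    dtraj.append(s)

T_hat = estimate_msm(np.array(dtraj), 3, lag=1)
print(np.round(T_hat, 3))              # should be close to true_T
```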

  2. Robust Dynamics and Control of a Partially Observed Markov Chain

    International Nuclear Information System (INIS)

    Elliott, R. J.; Malcolm, W. P.; Moore, J. P.

    2007-01-01

    In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits, see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established

  3. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo(Consistent estimation of the order for Markov and hidden Markov chains(1990)) and E.Gassiat and S.Boucheron (Optimal error exp...

  4. Constructing Dynamic Event Trees from Markov Models

    International Nuclear Information System (INIS)

    Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood

    2006-01-01

    In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to accurately model the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: (a) partitioning the process variable state space into magnitude intervals (cells), (b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and (c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as a benchmark in the literature, with one process variable (liquid level in a tank) and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank

  5. Markov process of muscle motors

    International Nuclear Information System (INIS)

    Kondratiev, Yu; Pechersky, E; Pirogov, S

    2008-01-01

    We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors.

  6. Inhomogeneous Markov point processes by transformation

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Nielsen, Linda Stougaard

    2000-01-01

    We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail.

  7. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov networks based EDAs are reviewed in the book. Hot current researc...

  8. Distributed synthesis in continuous time

    DEFF Research Database (Denmark)

    Hermanns, Holger; Krčál, Jan; Vester, Steen

    2016-01-01

    We introduce a formalism modelling communication of distributed agents strictly in continuous-time. Within this framework, we study the problem of synthesising local strategies for individual agents such that a specified set of goal states is reached, or reached with at least a given probability. The flow of time is modelled explicitly based on continuous-time randomness, with two natural implications: First, the non-determinism stemming from interleaving disappears. Second, when we restrict to a subclass of non-urgent models, the quantitative value problem for two players can be solved in EXPTIME. Indeed, the explicit continuous time enables players to communicate their states by delaying synchronisation (which is unrestricted for non-urgent models). In general, the problems are undecidable already for two players in the quantitative case and three players in the qualitative case. The qualitative...

  9. Spectral analysis of multi-dimensional self-similar Markov processes

    International Nuclear Information System (INIS)

    Modarresi, N; Rezakhah, S

    2010-01-01

    In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete-time scale invariant (DT-SI) process X(.) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ R+} is also Markov in the wide sense and provide a discrete-time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T - 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.

  10. Markov trace on the Yokonuma-Hecke algebra

    International Nuclear Information System (INIS)

    Juyumaya, J.

    2002-11-01

    The objective of this note is to prove that there exists a Markov trace on the Yokonuma-Hecke algebra. A motivation for defining a Markov trace is to obtain polynomial invariants for knots in the sense of the Jones construction. (author)

  11. The Inventory System Management under Uncertain Conditions and Time Value of Money

    Directory of Open Access Journals (Sweden)

    Mehri Nasrabadi

    2016-05-01

    This study develops an inventory model to determine the ordering policy for deteriorating items with shortages under Markovian inflationary conditions. A Markov process is one whose future behavior cannot be predicted from its past behavior (except through the current state) and which involves random chance or probability; the behavior of a business or economy, the flow of traffic, and the progress of an epidemic are all examples of Markov processes. Since inflation rates from the distant past have little impact on the current inflation rate, it is logical to model changes in the inflation rate as a Markov process. In addition, the cost of the items is assumed to change as a continuous-time Markov process as well. The inventory model is described by differential equations over the time horizon together with the present-value method. The objective is minimization of the expected present value of costs over the time horizon. A numerical example and a sensitivity analysis are provided to analyze the effect of changes in the values of the different parameters on the optimal solution.

  12. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    Science.gov (United States)

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time-invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects, such as mutations in evolutionary dynamics and random exploration of choices in social systems, including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
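
    The object the hierarchy approximates is the stationary distribution of a finite chain. As a point of reference, the exact computation for a small chain is a single linear solve; a minimal sketch follows (transition matrix invented for illustration):

        import numpy as np

        def stationary_distribution(P):
            """Solve pi P = pi with sum(pi) = 1 for a row-stochastic matrix P."""
            n = P.shape[0]
            A = np.vstack([P.T - np.eye(n), np.ones(n)])  # balance equations plus normalization
            b = np.zeros(n + 1); b[-1] = 1.0
            pi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pi

        P = np.array([[0.8, 0.2, 0.0],
                      [0.1, 0.7, 0.2],
                      [0.0, 0.3, 0.7]])
        print(stationary_distribution(P))   # approx. [0.231, 0.462, 0.308]

    The hierarchical approximations are aimed precisely at the regime where such a direct solve becomes infeasible.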

  13. The Independence of Markov's Principle in Type Theory

    DEFF Research Database (Denmark)

    Coquand, Thierry; Mannaa, Bassel

    2017-01-01

    In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.

  14. Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique

    Energy Technology Data Exchange (ETDEWEB)

    Destainville, N [Laboratoire de Physique Theorique - IRSAMC, CNRS/Universite Paul Sabatier, 118, route de Narbonne, 31062 Toulouse Cedex 04 (France)

    2003-04-04

    We propose an exact technique to calculate lower bounds of spectral gaps of discrete-time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the 'absorption time' of the 'Backgammon model', a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the 'contingency table problem', a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities calculated at equilibrium.
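
    For chains small enough to diagonalize, the quantity being bounded is directly computable: the spectral gap of a reversible transition matrix is one minus its second-largest eigenvalue. A sketch for a lazy random walk on a path graph (an example chosen for illustration, unrelated to the models in the paper):

        import numpy as np

        def lazy_walk_on_path(n):
            """Lazy random walk (holding probability 1/2) on a path with n nodes."""
            P = np.zeros((n, n))
            for i in range(n):
                P[i, i] = 0.5
                nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
                for j in nbrs:
                    P[i, j] = 0.5 / len(nbrs)
            return P

        P = lazy_walk_on_path(10)
        eigs = np.sort(np.linalg.eigvals(P).real)[::-1]  # reversibility makes the spectrum real
        print("spectral gap:", 1.0 - eigs[1])            # controls the relaxation/convergence rate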

  15. Applying a Markov approach as a Lean Thinking analysis of waste elimination in a Rice Production Process

    Directory of Open Access Journals (Sweden)

    Eldon Glen Caldwell Marin

    2015-01-01

    The Markov chain model was proposed to analyze stochastic events when recursive cycles occur; for example, when rework in a continuous-flow production line affects the overall performance. Typically, the analysis of rework and scrap is done from a wasted-material-cost perspective and not from the perspective of wasted capacity that reduces throughput and economic value added (EVA). Also, few cases of this application to agro-industrial production in Latin America can be found, given the complexity of the calculations and the need for robust applications. This work presents the results of a quasi-experimental research approach explaining how to apply DOE methods and Markov analysis to a rice production process located in Central America, evaluating the global effects of a single reduction in rework and scrap in one part of the whole line. The results show that in this case it is possible to evaluate benefits from a global throughput and EVA perspective and not only from a cost-savings perspective, establishing a relationship between operational indicators and corporate performance. However, it was found that it is necessary to analyze the Markov chain configuration when there are many rework points, and it remains important to take into account the effects on takt time and not only scrap costs.
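
    The throughput effect of rework loops rests on standard absorbing-chain algebra: with Q the transitions among stations and R the transitions into the absorbing outcomes, the fundamental matrix N = (I - Q)^(-1) gives the expected number of visits to each station (the capacity consumed by rework) and B = NR the probabilities of ending as good product or scrap. A sketch with invented rework/scrap fractions, not the rice line's data:

        import numpy as np

        # Transient states: stations S1, S2, S3. Absorbing states: GOOD, SCRAP.
        Q = np.array([[0.05, 0.93, 0.00],    # S1: 5% rework in place, 93% forward to S2
                      [0.08, 0.02, 0.88],    # S2: 8% sent back to S1
                      [0.00, 0.06, 0.04]])   # S3: 6% sent back to S2
        R = np.array([[0.00, 0.02],          # each station scraps 2%
                      [0.00, 0.02],
                      [0.88, 0.02]])         # S3 releases 88% as good product

        N = np.linalg.inv(np.eye(3) - Q)     # expected visits per station per unit started
        B = N @ R                            # absorption probabilities

        print("expected visits starting at S1:", N[0])
        print("P(good), P(scrap) starting at S1:", B[0])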

  16. H∞ Filtering for Discrete Markov Jump Singular Systems with Mode-Dependent Time Delay Based on T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Cheng Gong

    2014-01-01

    This paper investigates the H∞ filtering problem for discrete singular Markov jump systems (SMJSs) with mode-dependent time delay based on a T-S fuzzy model. First, by the Lyapunov-Krasovskii functional approach, a delay-dependent sufficient condition on H∞ disturbance attenuation is presented, in which both stability and a prescribed H∞ performance are required to be achieved for the filtering-error systems. Then, based on this condition, a delay-dependent H∞ filter design scheme for SMJSs with mode-dependent time delay based on the T-S fuzzy model is developed in terms of a linear matrix inequality (LMI). Finally, an example is given to illustrate the effectiveness of the result.

  17. Generated dynamics of Markov and quantum processes

    CERN Document Server

    Janßen, Martin

    2016-01-01

    This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic features of quantum processes are oscillations, interference, many stationary states in bounded systems and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes is relaxation to a single stationary state. Quantum processes apply to systems where all variables that control reversibility are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant for the dynamic description. Their absence renders the dynamics irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put in...

  18. "Adding" algorithm for the Markov chain formalism for radiation transfer

    International Nuclear Information System (INIS)

    Esposito, L.W.

    1979-01-01

    The Markov chain radiative transfer method of Esposito and House has been shown to be both efficient and accurate for calculation of the diffuse reflection from a homogeneous scattering planetary atmosphere. The use of a new algorithm similar to the "adding" formula of Hansen and Travis extends the application of this formalism to an arbitrarily deep atmosphere. The basic idea of this algorithm is to consider a preceding calculation as a single state of a new Markov chain. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that of a doubling calculation for a homogeneous atmosphere, but for a non-homogeneous atmosphere the new method is considerably faster than the standard "adding" routine. As with the standard "adding" method, the information on the internal radiation field is lost during the calculation. This method retains the advantage of the earlier Markov chain method that the time required is relatively insensitive to the number of illumination angles or observation angles for which the diffuse reflection is calculated. A technical write-up giving fuller details of the algorithm and a sample code are available from the author

  19. Markov Models for Handwriting Recognition

    CERN Document Server

    Plotz, Thomas

    2011-01-01

    Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden

  20. Time-dependent earthquake hazard evaluation in seismogenic systems using mixed Markov Chains: An application to the Japan area

    Science.gov (United States)

    Herrera, C.; Nava, F. A.; Lomnitz, C.

    2006-08-01

    A previous work introduced a new method for seismic hazard evaluation in a system (a geographic area with distinct, but related, seismogenic regions) based on modeling the transition probabilities of states (patterns of presence or absence of seismicity, with magnitude greater than or equal to a threshold magnitude M_r, in the regions of the system, during a time interval Δt) as a Markov chain. Application of this direct method to the Japan area gave very good results. Given that the most important limitation of the direct method is the relative scarcity of large-magnitude events, we decided to explore the possibility that seismicity with magnitude M ≥ M_mr contains information about the future occurrence of earthquakes with M ≥ M_Mr > M_mr. This mixed Markov chain method estimates the probabilities of occurrence of a system state for M ≥ M_Mr on the basis of the observed state for M ≥ M_mr in the previous Δt. Application of the mixed method to the area of Japan gives better hazard estimations than the direct method, in particular for large earthquakes. As part of this study, the problem of performance evaluation of hazard estimation methods is addressed, leading to the use of grading functions.
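
    The estimation step shared by the direct and mixed methods is fitting state-transition probabilities from an observed sequence of system states. A minimal sketch (states encoded as integers; the sequence is invented):

        import numpy as np

        def estimate_transition_matrix(states, n_states):
            """Maximum-likelihood transition probabilities from one observed state sequence."""
            counts = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                counts[a, b] += 1
            rows = counts.sum(axis=1, keepdims=True)
            rows[rows == 0] = 1.0            # leave rows of unvisited states at zero
            return counts / rows

        # Hypothetical sequence of seismicity patterns, one state per interval dt
        seq = [0, 0, 1, 3, 0, 2, 3, 3, 0, 1, 0, 0, 2, 0]
        print(estimate_transition_matrix(seq, n_states=4))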

  1. Continuous feedback fluid queues

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.; van Foreest, N.D.; Mandjes, M.R.H.

    2003-01-01

    We investigate a fluid buffer which is modulated by a stochastic background process, while the momentary behavior of the background process depends on the current buffer level in a continuous way. Loosely speaking, the feedback is such that the background process behaves 'as a Markov process' with...

  2. Markov Stochastic Technique to Determine Galactic Cosmic Ray ...

    Indian Academy of Sciences (India)

    A new numerical model of particle propagation in the Galaxy has been developed, which allows the study of cosmic-ray production and propagation in 2D. The model has been used to solve cosmic ray diffusive transport equation with a complete network of nuclear interactions using the time backward Markov stochastic ...

  3. Markov processes characterization and convergence

    CERN Document Server

    Ethier, Stewart N

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists."[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference."-American Scientist"There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings."-Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts"Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook."-Journal of Statistical PhysicsMarkov Proce...

  4. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  5. Hidden Markov Models for Time Series: An Introduction Using R

    CERN Document Server

    Zucchini, Walter

    2009-01-01

    Illustrates the flexibility of HMMs as general-purpose models for time series data. This work presents an overview of HMMs for analyzing time series data, from continuous-valued, circular, and multivariate series to binary data, bounded and unbounded counts and categorical observations.

  6. Language Emptiness of Continuous-Time Parametric Timed Automata

    DEFF Research Database (Denmark)

    Benes, Nikola; Bezdek, Peter; Larsen, Kim Guldstrand

    2015-01-01

    Parametric timed automata extend the standard timed automata with the possibility to use parameters in the clock guards. In general, if the parameters are real-valued, the problem of language emptiness of such automata is undecidable even for various restricted subclasses. We thus focus on the case where parameters are assumed to be integer-valued, while the time still remains continuous. On the one hand, we show that the problem remains undecidable for parametric timed automata with three clocks and one parameter. On the other hand, for the case with arbitrarily many clocks where only one ... -time semantics only. To the best of our knowledge, this is the first positive result in the case of continuous time and unbounded integer parameters, except for the rather simple case of single-clock automata.

  7. Analysis of mean time to data loss of fault-tolerant disk arrays RAID-6 based on specialized Markov chain

    Science.gov (United States)

    Rahman, P. A.; D'K Novikova Freyre Shavier, G.

    2018-03-01

    This paper analyzes the mean time to data loss of fault-tolerant RAID-6 disk arrays with data alternation, considering different disk failure rates in the normal, degraded, and rebuild states of the array, as well as a nonzero disk replacement time. The reliability model developed by the authors on the basis of a Markov chain, and the calculation formula obtained for estimating the mean time to data loss (MTTDL) of RAID-6 disk arrays, are presented. Finally, a technique for estimating the initial reliability parameters and examples of MTTDL calculations for RAID-6 arrays with different numbers of disks are given.
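
    Although the paper's exact chain and rates are not reproduced here, the MTTDL computation follows the standard mean-time-to-absorption recipe: restrict the CTMC generator to the transient states (0, 1, or 2 failed disks for RAID-6) and solve -Q_T t = 1. A sketch with invented rates, using a different failure rate per array state as the paper's model does:

        import numpy as np

        n = 8                                  # disks in the array
        lam0, lam1, lam2 = 1e-5, 2e-5, 3e-5    # per-disk failure rates in normal/degraded/rebuild states (invented)
        mu = 0.1                               # disk replacement/rebuild rate (1/h), invented

        # Transient states: 0, 1, 2 failed disks; a third failure loses data (absorbing).
        QT = np.array([
            [-n * lam0,            n * lam0,              0.0           ],
            [ mu,         -(mu + (n - 1) * lam1),    (n - 1) * lam1     ],
            [ 0.0,                 mu,          -(mu + (n - 2) * lam2)  ],
        ])
        t = np.linalg.solve(-QT, np.ones(3))   # mean time to absorption from each transient state
        print(f"MTTDL of a healthy array: {t[0]:.3e} hours")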

  8. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...

  9. Non-cooperative stochastic differential game theory of generalized Markov jump linear systems

    CERN Document Server

    Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning

    2017-01-01

    This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its applications in the fields of finance and insurance. It presents in-depth research on continuous-time and discrete-time linear quadratic stochastic differential games, in order to establish a relatively complete framework of dynamic non-cooperative differential game theory. Using the dynamic programming principle and the Riccati equation, it derives various existence conditions and calculation methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on the game-theoretic method, the book studies the corresponding robust control problem, especially the existence condition and design method of the optimal robust control strategy. The book discusses the theoretical results and their applications in risk control, option pricing, and the optimal investment problem in the field of finance and insurance, enriching the...

  10. A GM(1,1) Markov Chain-Based Aeroengine Performance Degradation Forecast Approach Using Exhaust Gas Temperature

    Directory of Open Access Journals (Sweden)

    Ning-bo Zhao

    2014-01-01

    Performance degradation forecasting for quantitatively assessing degradation states of an aeroengine using exhaust gas temperature is an important technology in aeroengine health management. In this paper, a GM(1,1) Markov chain-based approach is introduced to forecast exhaust gas temperature, taking advantage of the GM(1,1) model for time series and of the Markov chain model for dealing with highly nonlinear and stochastic data caused by uncertain factors. In this approach, the GM(1,1) model is first used to forecast the trend using limited data samples. Then, the Markov chain model is integrated into the GM(1,1) model in order to enhance forecast performance, which mitigates the influence of randomly fluctuating data on forecasting accuracy and achieves an accurate estimate of the nonlinear forecast. As an example, historical monitoring data of exhaust gas temperature from a CFM56 aeroengine of China Southern is used to verify the forecast performance of the GM(1,1) Markov chain model. The results show that the GM(1,1) Markov chain model is able to forecast exhaust gas temperature accurately, effectively reflecting the random fluctuation characteristics of exhaust gas temperature changes over time.
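
    A sketch of the plain GM(1,1) step (the paper's enhancement, residual correction by a Markov chain, is omitted): accumulate the series, fit the development coefficient a and grey input b by least squares, forecast on the accumulated scale, then difference back. The EGT values below are invented.

        import numpy as np

        def gm11_forecast(x0, steps):
            """Grey GM(1,1) forecast of `steps` future values of a positive series x0."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                        # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(x1_hat, prepend=0.0)     # inverse accumulation
            return x0_hat[len(x0):]

        egt = [520, 523, 527, 530, 536, 541]          # hypothetical EGT readings (deg C)
        print(gm11_forecast(egt, steps=3))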

  11. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of the parameters involved. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on the corresponding time-inhomogeneous Gauss-Markov diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the stochastic modeling process, taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two nonsingular Volterra integral equations of the second kind via numerical quadrature. We provide numerical estimates of the mean FET as approximations of the dwell time of the protein dynamics. The percentage of backward steps is given in agreement with experimental data. Numerical and simulation results are compared and discussed.
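
    The simulation half of this program can be sketched with an Euler-Maruyama scheme: integrate an Ornstein-Uhlenbeck-type (Gauss-Markov) process until it leaves the strip and average the exit times. All parameters below are illustrative, not the fitted values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def mean_first_exit_time(theta=1.0, mu=0.0, sigma=0.8,
                                 lo=-1.0, hi=1.0, x0=0.0, dt=1e-3, n_paths=1000):
            """Euler-Maruyama estimate of the mean first exit time of an OU process from (lo, hi)."""
            times = np.empty(n_paths)
            sdt = np.sqrt(dt)
            for i in range(n_paths):
                x, t = x0, 0.0
                while lo < x < hi:
                    x += theta * (mu - x) * dt + sigma * sdt * rng.standard_normal()
                    t += dt
                times[i] = t
            return times.mean()

        print("estimated mean FET:", mean_first_exit_time())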

  12. A continuous-time/discrete-time mixed audio-band sigma delta ADC

    International Nuclear Information System (INIS)

    Liu Yan; Hua Siliang; Wang Donghui; Hou Chaohuan

    2011-01-01

    This paper introduces a mixed continuous-time/discrete-time, single-loop, fourth-order, 4-bit audio-band sigma delta ADC that combines the benefits of continuous-time and discrete-time circuits, while mitigating the challenges associated with continuous-time design. Measurement results show that the peak SNR of this ADC reaches 100 dB and the total power consumption is less than 30 mW. (semiconductor integrated circuits)

  13. A New GMRES(m) Method for Markov Chains

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    This paper presents a new class of accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method by showing how to periodically combine the GMRES and vector extrapolation methods into a more efficient one that improves the convergence rate in Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.
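
    The base computation is standard: the stationary vector π solves (P^T - I)π = 0, and a common way to hand this to a Krylov solver is to replace one (redundant) balance equation with the normalization constraint. A minimal sketch of plain restarted GMRES on a small dense chain; the paper's contribution, the periodic vector-extrapolation acceleration, is not shown here.

        import numpy as np
        from scipy.sparse.linalg import gmres

        P = np.array([[0.5, 0.5, 0.0],
                      [0.2, 0.3, 0.5],
                      [0.1, 0.4, 0.5]])   # illustrative irreducible chain

        n = P.shape[0]
        A = P.T - np.eye(n)
        A[-1, :] = 1.0                    # rows of P^T - I sum to zero, so one may be replaced
        b = np.zeros(n); b[-1] = 1.0      # ... by the normalization sum(pi) = 1

        pi, info = gmres(A, b, restart=10)
        print(info, pi, pi @ P)           # info == 0 on convergence; pi P == pi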

  14. Markov switching of the electricity supply curve and power prices dynamics

    Science.gov (United States)

    Mari, Carlo; Cananà, Lucianna

    2012-02-01

    Regime-switching models seem to well capture the main features of power prices behavior in deregulated markets. In a recent paper, we have proposed an equilibrium methodology to derive electricity prices dynamics from the interplay between supply and demand in a stochastic environment. In particular, assuming that the supply function is described by a power law where the exponent is a two-state strictly positive Markov process, we derived a regime switching dynamics of power prices in which regime switches are induced by transitions between Markov states. In this paper, we provide a dynamical model to describe the random behavior of power prices where the only non-Brownian component of the motion is endogenously introduced by Markov transitions in the exponent of the electricity supply curve. In this context, the stochastic process driving the switching mechanism becomes observable, and we will show that the non-Brownian component of the dynamics induced by transitions from Markov states is responsible for jumps and spikes of very high magnitude. The empirical analysis performed on three Australian markets confirms that the proposed approach seems quite flexible and capable of incorporating the main features of power prices time-series, thus reproducing the first four moments of log-returns empirical distributions in a satisfactory way.

  15. A single-server queue with batch arrivals and semi-Markov services

    NARCIS (Netherlands)

    Abhishek,; Boon, M.A.A.; Boxma, O.J.; Núñez-Queija, R.

    2017-01-01

    We investigate the transient and stationary queue length distributions of a class of service systems with correlated service times. The classical single-server queue with batch arrivals and semi-Markov service times is the most prominent example in this class and serves as a vehicle to display our results. The

  16. A theoretical Markov chain model for evaluating correctional ...

    African Journals Online (AJOL)

    In this paper a stochastic method is applied in the study of the long time effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is Markov chain, which uses past history to predict the state of a system in the future. A model is developed for comparing the ...

  17. A new look at the robust control of discrete-time Markov jump linear systems

    Science.gov (United States)

    Todorov, M. G.; Fragoso, M. D.

    2016-03-01

    In this paper, we make a foray into the role played by a set of four operators in the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers which are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle, and for an underactuated robotic arm.

  18. Switching Markov chains for a holistic modeling of SIS unavailability

    International Nuclear Information System (INIS)

    Mechri, Walid; Simon, Christophe; BenOthman, Kamel

    2015-01-01

    This paper proposes a holistic approach to model Safety Instrumented Systems (SIS). The model is based on a switching Markov chain and integrates several parameters such as common cause failures, imperfect proof testing, and partial proof testing. The basic concepts of switching Markov chains applied to reliability analysis are introduced, and a model to compute the unavailability for a case study is presented. The proposed switching Markov chain allows us to assess the effect of each parameter on the SIS performance. The proposed method ensures the relevance of the results. - Highlights: • A holistic approach to model the unavailability of safety systems using switching Markov chains. • The model integrates several parameters, such as the probability of failure due to the test and the probability of not detecting a failure in a test. • The basic concepts of switching Markov chains are introduced and applied to compute the unavailability of safety systems. • The proposed switching Markov chain allows assessing the effect of each parameter on the chemical reactor performance

  19. Markov-switching model for nonstationary runoff conditioned on El Nino information

    DEFF Research Database (Denmark)

    Gelati, Emiliano; Madsen, H.; Rosbjerg, Dan

    2010-01-01

    We define a Markov-modulated autoregressive model with exogenous input (MARX) to generate runoff scenarios using climatic information. Runoff parameterization is assumed to be conditioned on a hidden climate state following a Markov chain, where state transition probabilities are functions of the climatic input. MARX allows stochastic modeling of nonstationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We apply MARX to inflow time series of the Daule Peripa reservoir (Ecuador). El Niño Southern Oscillation (ENSO) information is used to condition runoff parameterization. Among the investigated ENSO indexes, the NINO 1+2 sea surface temperature anomalies and the trans-Niño index perform best as predictors. In the perspective of reservoir optimization at various time scales, MARX produces realistic...
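
    The generative structure can be sketched as a hidden climate state evolving as a Markov chain and driving state-specific AR(1) dynamics with an exogenous term. For brevity the sketch fixes the transition probabilities, whereas MARX makes them functions of the climatic input; all parameters are invented.

        import numpy as np

        rng = np.random.default_rng(7)

        # Hidden climate states: 0 = neutral, 1 = El Nino-like (toy parameterization)
        T = np.array([[0.95, 0.05],
                      [0.20, 0.80]])          # fixed transition matrix (MARX: input-dependent)
        phi, beta, sigma = [0.6, 0.8], [0.0, 0.5], [0.3, 0.6]

        n = 200
        x = rng.standard_normal(n)            # synthetic exogenous input, e.g. an ENSO index
        s, y = 0, np.zeros(n)
        for t in range(1, n):
            s = rng.choice(2, p=T[s])         # hidden state follows the Markov chain
            y[t] = phi[s] * y[t - 1] + beta[s] * x[t] + sigma[s] * rng.standard_normal()

        print(y[:10])                         # one runoff-anomaly scenario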

  20. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes, using a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful.

  1. Susceptible-infected-susceptible epidemics on networks with general infection and cure times

    Science.gov (United States)

    Cator, E.; van de Bovenkamp, R.; Van Mieghem, P.

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.
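
    In the classical NIMFA setting the epidemic threshold for the effective infection rate is the reciprocal of the largest adjacency eigenvalue; the GSIS criterion generalizes this, with the expected number of infection attempts per recovery playing the role of key parameter. A sketch of the classical computation on an invented graph:

        import numpy as np

        # Adjacency matrix of a small illustrative undirected network
        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [1, 1, 0, 1, 1],
                      [0, 1, 1, 0, 1],
                      [0, 0, 1, 1, 0]], dtype=float)

        lam_max = np.linalg.eigvalsh(A).max()
        print("largest adjacency eigenvalue:", lam_max)
        print("NIMFA epidemic threshold 1/lam_max:", 1.0 / lam_max)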

  2. Quantum Enhanced Inference in Markov Logic Networks.

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  3. MARKOV Model Application to Proliferation Risk Reduction of an Advanced Nuclear System

    International Nuclear Information System (INIS)

    Bari, R.A.

    2008-01-01

    The Generation IV International Forum (GIF) emphasizes proliferation resistance and physical protection (PR and PP) as a main goal for future nuclear energy systems. The GIF PR and PP Working Group has developed a methodology for the evaluation of these systems. As an application of the methodology, a Markov model has been developed for the evaluation of proliferation resistance and is demonstrated for a hypothetical Example Sodium Fast Reactor (ESFR) system. This paper presents the case of diversion by the facility owner/operator to obtain material that could be used in a nuclear weapon. The Markov model is applied to evaluate material diversion strategies. The following features of the Markov model are presented here: (1) an effective detection rate has been introduced to account for the implementation of multiple safeguards approaches at a given strategic point; (2) technical failure to divert material is modeled as intrinsic barriers related to the design of the facility or the properties of the material in the facility; and (3) concealment to defeat or degrade the performance of safeguards is recognized in the Markov model. Three proliferation risk measures are calculated directly by the Markov model: the detection probability, the technical failure probability, and the proliferation time. The material type is indicated by an index based on the quality of the material diverted. Sensitivity cases demonstrate the effects of different modeling features on the measures of proliferation resistance

  4. Markov processes from K. Ito's perspective (AM-155)

    CERN Document Server

    Stroock, Daniel W

    2003-01-01

    Kiyosi Itô's greatest contribution to probability theory may be his introduction of stochastic differential equations to explain the Kolmogorov-Feller theory of Markov processes. Starting with the geometric ideas that guided him, this book gives an account of Itô's program. The modern theory of Markov processes was initiated by A. N. Kolmogorov. However, Kolmogorov's approach was too analytic to reveal the probabilistic foundations on which it rests. In particular, it hides the central role played by the simplest Markov processes: those with independent, identically distributed incremen

  5. Assessing type I error and power of multistate Markov models for panel data-A simulation study.

    Science.gov (United States)

    Cassarly, Christy; Martin, Renee' H; Chimowitz, Marc; Peña, Edsel A; Ramakrishnan, Viswanathan; Palesch, Yuko Y

    2017-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that the multistate Markov models preserve the type I error and adequate power is achieved with modest sample sizes for panel data with limited non-adjacent state transitions.

  6. Applying Mean-Field Approximation to Continuous Time Markov Chains

    NARCIS (Netherlands)

    Kolesnichenko, A.V.; Senni, Valerio; Pourranjabar, Alireza; Remke, A.K.I.; Stoelinga, M.I.A.

    2014-01-01

    The mean-field analysis technique is used to perform analysis of a system with a large number of components to determine the emergent deterministic behaviour and how this behaviour modifies when its parameters are perturbed. The computer science performance modelling and analysis community has found

  7. Bayesian posterior distributions without Markov chains.

    Science.gov (United States)

    Cole, Stephen R; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B

    2012-03-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
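
    The mechanics of example 1 can be sketched generically: draw parameters from the prior, accept each draw with probability proportional to its likelihood, and treat the accepted draws as posterior samples. The numbers below are invented (a simple binomial likelihood with a uniform prior), not the study's case-control data:

        import numpy as np

        rng = np.random.default_rng(0)
        k, n = 7, 50                                   # hypothetical event count out of n trials

        def likelihood(p):
            return p**k * (1.0 - p)**(n - k)

        L_max = likelihood(k / n)                      # likelihood bound, attained at the MLE
        draws = rng.uniform(0.0, 1.0, size=200_000)    # proposals from the uniform prior
        keep = rng.uniform(size=draws.size) < likelihood(draws) / L_max
        post = draws[keep]                             # accepted draws ~ posterior

        print("posterior mean:", post.mean())
        print("95% PI:", np.quantile(post, [0.025, 0.975]))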

  8. Bridge Deterioration Prediction Model Based On Hybrid Markov-System Dynamic

    Directory of Open Access Journals (Sweden)

    Widodo Soetjipto Jojok

    2017-01-01

    Instantaneous bridge failures tend to increase in Indonesia. To mitigate this condition, Indonesia's Bridge Management System (I-BMS) has been applied to continuously monitor the condition of bridges. However, I-BMS implements visual inspection only for maintenance prioritization of individual bridge structure components rather than the bridge structure system. This paper proposes a new bridge failure prediction model based on a hybrid Markov-System Dynamic (MSD) approach. System dynamics is used to represent the correlation among bridge structure components, while a Markov chain is used to calculate the temporal probability of bridge failure. Data on around 235 bridges in Indonesia were collected from the Directorate of Bridges of the Ministry of Public Works and Housing for calculating the transition probabilities of the model. To validate the model, a medium-span concrete bridge was used as a case study. The result shows that the proposed model can accurately predict the bridge condition. Besides predicting the probability of bridge failure, this model can also be used as an early warning system for bridge monitoring activity.

  9. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM), combining the power of explicit conditioning on past observations with the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for a given model order but unknown parameters based on PHMM is presented. A forward-backward re-estimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. A proof of convergence of this re-estimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  10. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  11. Asymptotic evolution of quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Novotny, Jaroslav [FNSPE, CTU in Prague, 115 19 Praha 1 - Stare Mesto (Czech Republic); Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)

    2012-07-01

    The iterated quantum operations, so-called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks, and they can describe different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate that there is always a vector subspace (typically low-dimensional) of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors, which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points or the asymptotic evolution of random quantum operations.

  12. Embedding a State Space Model Into a Markov Decision Process

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren

    2011-01-01

    In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models...

  13. Inferring animal densities from tracking data using Markov chains.

    Science.gov (United States)

    Whitehead, Hal; Jonsen, Ian D

    2013-01-01

    The distributions and relative densities of species are keys to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogenous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.

  14. Continuous-time quantum walks on star graphs

    International Nuclear Information System (INIS)

    Salimi, S.

    2009-01-01

    In this paper, we investigate continuous-time quantum walks on star graphs. It is shown that the quantum central limit theorem for a continuous-time quantum walk on the N-fold star power graph, which is invariant under the quantum component of the adjacency matrix, converges to the continuous-time quantum walk on the K2 graph (the complete graph with two vertices), and the probability of observing the walk tends to the uniform distribution.

  15. Six types of Monte Carlo for estimating the current unavailability of Markov system with dependent repair

    International Nuclear Information System (INIS)

    Xiao Gang; Li Zhizhong

    2004-01-01

    Based on an integral equation describing the life history of a Markov system, six types of estimators of the current unavailability of a Markov system with dependent repair are proposed. Combining these with biased sampling of the system's state transition times, six Monte Carlo methods for estimating the current unavailability are given. Two numerical examples are provided to examine the variances and efficiencies of the six Monte Carlo methods. (authors)

  16. Timing of bariatric surgery for severely obese adolescents: a Markov decision-analysis.

    Science.gov (United States)

    Stroud, Andrea M; Parker, Devin; Croitoru, Daniel P

    2016-05-01

    Although controversial, bariatric surgery is increasingly being performed in adolescents. We developed a model to simulate the effect of the timing of gastric bypass in obese adolescents on quantity and quality of life. A Markov state-transition model was constructed comparing two treatment strategies: gastric bypass surgery at age 16 versus delayed surgery in adulthood. The model simulated a hypothetical cohort of adolescents with a body mass index of 45 kg/m². Model inputs were derived from the current literature. The main outcome measure was quality and quantity of life, measured using quality-adjusted life-years (QALYs). For females, early gastric bypass surgery was favored by 2.02 QALYs compared to delaying surgery until age 35 (48.91 vs. 46.89 QALYs). The benefit was even greater for males, where early surgery was favored by 2.9 QALYs (48.30 vs. 45.40 QALYs). The absolute benefit of surgery at age 16 increased the later surgery was delayed into adulthood. Sensitivity analyses demonstrated that adult surgery was favored only when the values for adverse events were unrealistically high. In our model, early gastric bypass in obese adolescents improved both quality and quantity of life. These findings are useful for surgeons and pediatricians when counseling adolescents considering weight loss surgery.
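
    The underlying computation in such studies is a Markov cohort calculation: propagate a cohort through health states with per-cycle transition probabilities, weight state occupancy by utilities, and accumulate discounted QALYs per strategy. A deliberately toy sketch with invented states, utilities, and probabilities, not the paper's inputs:

        import numpy as np

        def cohort_qalys(P, utilities, years, discount=0.03):
            """Expected discounted QALYs for a cohort starting in state 0."""
            dist = np.zeros(len(utilities)); dist[0] = 1.0
            total = 0.0
            for t in range(years):
                total += (dist @ utilities) / (1.0 + discount) ** t
                dist = dist @ P               # advance the cohort one yearly cycle
            return total

        # Toy states: 0 = post-surgery non-obese, 1 = severely obese, 2 = dead
        P_early = np.array([[0.97, 0.02, 0.01],
                            [0.05, 0.93, 0.02],
                            [0.00, 0.00, 1.00]])
        utilities = np.array([0.92, 0.80, 0.00])
        print("toy QALYs, surgery at 16:", round(cohort_qalys(P_early, utilities, years=40), 2))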

  17. Planning Tunnel Construction Using Markov Chain Monte Carlo (MCMC)

    Directory of Open Access Journals (Sweden)

    Juan P. Vargas

    2015-01-01

    Tunnels, drifts, drives, and other types of underground excavation are very common in mining as well as in the construction of roads, railways, dams, and other civil engineering projects. Planning is essential to the success of tunnel excavation, and construction time is one of the most important factors to be taken into account. This paper proposes a simulation algorithm based on a stochastic numerical method, the Markov chain Monte Carlo method, that can provide the best estimate of the opening excavation times for the classic drilling-and-blasting method. Taking account of technical considerations that affect the tunnel excavation cycle, the simulation is developed through a computational algorithm. Using the Markov chain Monte Carlo method, the unit operations involved in the underground excavation cycle are identified and assigned probability distributions that, with random number input, make it possible to simulate the total excavation time. The results obtained with this method are compared with a real case of tunnel excavation. By incorporating variability into the planning, it is possible to determine with greater certainty the ranges over which the execution times of the unit operations fluctuate. In addition, the financial risks associated with planning errors can be reduced and the exploitation of resources maximized.
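
    The sampling core of the simulation reduces to drawing a duration for each unit operation of the drill-and-blast cycle and summing over the rounds needed for the tunnel; repeating this yields a distribution of the total excavation time. The distributions and parameters below are placeholders, not the paper's fitted values:

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical unit operations of one drill-and-blast cycle (durations in hours)
        samplers = [
            lambda size: rng.normal(2.0, 0.3, size),              # drilling
            lambda size: rng.normal(1.0, 0.2, size),              # charging
            lambda size: rng.uniform(0.5, 1.0, size),             # blasting + ventilation
            lambda size: rng.lognormal(np.log(2.5), 0.25, size),  # mucking
            lambda size: rng.normal(1.5, 0.4, size),              # rock support
        ]

        n_cycles, n_sims = 200, 10_000   # rounds for the design length; Monte Carlo replications
        totals = sum(s((n_sims, n_cycles)).sum(axis=1) for s in samplers)

        print("mean total time (h):", totals.mean())
        print("P10-P90 (h):", np.quantile(totals, [0.1, 0.9]))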

  20. [Compared Markov with fractal models by using single-channel experimental and simulation data].

    Science.gov (United States)

    Lan, Tonghan; Wu, Hongxiu; Lin, Jiarui

    2006-10-01

    The gating kinetics of ion channels has been modeled as a Markov process. In these models it is assumed that the channel protein has a small number of discrete conformational states and that the kinetic rate constants connecting these states are constant; the transition rates among the states are independent both of time and of previous channel activity. In Liebovitch's fractal model it is assumed that the channel exists in an infinite number of energy states, so that transitions from one conductance state to another are governed by a continuum of rate constants. In this paper, a statistical comparison of Markov and fractal models of ion channel gating is presented; the analysis is based on single-channel data from a voltage-dependent K+ channel of a neuron cell and on simulation data from a three-state Markov model.
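
    A comparison of this kind hinges on which family better fits the measured dwell times: a Markov model predicts (mixtures of) exponentials, while a fractal model predicts heavy, power-law-like tails. A minimal sketch of such a likelihood comparison, with simulated dwell times standing in for recordings, is given below.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in data: dwell times (ms) drawn from an exponential, i.e. a
    # single-state Markov mechanism. Real data would come from recordings.
    dwell = rng.exponential(scale=4.0, size=2000)

    # Exponential model: MLE rate = 1/mean.
    rate = 1.0 / dwell.mean()
    loglik_exp = np.sum(np.log(rate) - rate * dwell)

    # Power-law (Pareto) model with x_min at the smallest observation:
    # MLE alpha = 1 + n / sum(log(x / x_min)).
    xmin = dwell.min()
    alpha = 1.0 + len(dwell) / np.sum(np.log(dwell / xmin))
    loglik_pl = np.sum(np.log((alpha - 1) / xmin) - alpha * np.log(dwell / xmin))

    print(f"log-likelihood  exponential: {loglik_exp:.1f}  power law: {loglik_pl:.1f}")
    ```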

  1. Markov modeling for the neurosurgeon: a review of the literature and an introduction to cost-effectiveness research.

    Science.gov (United States)

    Wali, Arvin R; Brandel, Michael G; Santiago-Dieppa, David R; Rennert, Robert C; Steinberg, Jeffrey A; Hirshman, Brian R; Murphy, James D; Khalessi, Alexander A

    2018-05-01

    OBJECTIVE Markov modeling is a clinical research technique that allows competing medical strategies to be mathematically assessed in order to identify the optimal allocation of health care resources. The authors present a review of the recently published neurosurgical literature that employs Markov modeling and provide a conceptual framework with which to evaluate, critique, and apply the findings generated from health economics research. METHODS The PubMed online database was searched to identify neurosurgical literature published from January 2010 to December 2017 that had utilized Markov modeling for neurosurgical cost-effectiveness studies. Included articles were then assessed with regard to year of publication, subspecialty of neurosurgery, decision analytical techniques utilized, and source information for model inputs. RESULTS A total of 55 articles utilizing Markov models were identified across a broad range of neurosurgical subspecialties. Sixty-five percent of the papers were published within the past 3 years alone. The majority of models derived health transition probabilities, health utilities, and cost information from previously published studies or publicly available information. Only 62% of the studies incorporated indirect costs. Ninety-three percent of the studies performed a 1-way or 2-way sensitivity analysis, and 67% performed a probabilistic sensitivity analysis. A review of the conceptual framework of Markov modeling and an explanation of the different terminology and methodology are provided. CONCLUSIONS As neurosurgeons continue to innovate and identify novel treatment strategies for patients, Markov modeling will allow for better characterization of the impact of these interventions on a patient and societal level. The aim of this work is to equip the neurosurgical readership with the tools to better understand, critique, and apply findings produced from cost-effectiveness research.

  2. A Correlated Random Effects Model for Non-homogeneous Markov Processes with Nonignorable Missingness.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2013-05-01

    Life history data arising in clusters with prespecified assessment time points for patients often feature incomplete data, since patients may choose to visit the clinic based on their needs. Markov process models provide a useful tool for describing disease progression in life history data. The literature mainly focuses on time-homogeneous processes. In this paper we develop methods to deal with non-homogeneous Markov processes with incomplete clustered life history data. A correlated random effects model is developed to deal with the nonignorable missingness, and a time transformation is employed to address the non-homogeneity in the transition model. Maximum likelihood estimation based on the Monte Carlo EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well in many situations. We also apply this method to an Alzheimer's disease study.

  3. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer two questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? To answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts that represents the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
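
    A crude version of the idea, estimating each state's empirical transition distribution and grouping states whose rows are close, can be sketched as follows. The greedy grouping, the total variation distance, and the tolerance are illustrative choices, not the paper's estimator.

    ```python
    import numpy as np

    def empirical_rows(seq, alphabet):
        """Empirical transition distribution for each state of a Markov chain."""
        idx = {s: i for i, s in enumerate(alphabet)}
        counts = np.zeros((len(alphabet), len(alphabet)))
        for a, b in zip(seq[:-1], seq[1:]):
            counts[idx[a], idx[b]] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def partition_states(rows, alphabet, tol=0.05):
        """Greedily group states whose transition rows are within total
        variation distance tol -- a stand-in for the paper's criterion."""
        parts = []
        for i, s in enumerate(alphabet):
            for part in parts:
                j = part[0][0]
                if 0.5 * np.abs(rows[i] - rows[j]).sum() < tol:
                    part.append((i, s))
                    break
            else:
                parts.append([(i, s)])
        return [[s for _, s in part] for part in parts]

    rng = np.random.default_rng(1)
    seq = rng.choice(list("ab"), size=10_000)           # i.i.d. => one part expected
    rows = empirical_rows(seq, alphabet=["a", "b"])
    print(partition_states(rows, alphabet=["a", "b"]))  # e.g. [['a', 'b']]
    ```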

  4. Real-time classification of humans versus animals using profiling sensors and hidden Markov tree model

    Science.gov (United States)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2015-07-01

    Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust features (SURF) based classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to the SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.

  5. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  6. User’s manual for basic version of MCnest Markov chain nest productivity model

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  7. Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach

    Science.gov (United States)

    Demirer, Nazli

    systems with a single agent and systems with a large number of agents due to the probabilistic nature, where the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem has the form of a Linear Matrix Inequality (LMI) problem, with LMI formulations of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way, utilizing existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions; that is, there exists a set of actions for the ON mode and another set for the OFF mode. We give a formulation for a new Markov chain synthesis problem, with additional measurements for the state transitions, where a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
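
    The central object here is a column-stochastic Markov matrix M that propagates the agents' density, x_{t+1} = M x_t, toward a desired distribution while respecting per-bin density caps. The sketch below only shows the propagation and a cap check with a hand-picked matrix; the LMI/QP synthesis of M described in the abstract is not attempted.

    ```python
    import numpy as np

    # Hand-picked column-stochastic matrix (each column sums to 1):
    # M[i, j] = probability that an agent in bin j moves to bin i.
    M = np.array([[0.8, 0.1, 0.0],
                  [0.2, 0.8, 0.3],
                  [0.0, 0.1, 0.7]])

    x = np.array([1.0, 0.0, 0.0])   # all agents start in bin 0
    density_cap = 0.7               # hypothetical per-bin safety constraint

    for t in range(50):
        x = M @ x                   # density propagation x_{t+1} = M x_t
    print("steady-state density:", np.round(x, 3))
    print("cap satisfied at steady state:", bool(np.all(x <= density_cap)))
    ```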

  8. Hidden Markov models applied to a subsequence of the Xylella fastidiosa genome

    Directory of Open Access Journals (Sweden)

    Silva Cibele Q. da

    2003-01-01

    Full Text Available Dependencies in DNA sequences are frequently modeled using Markov models. However, Markov chains cannot account for heterogeneity that may be present in different regions of the same DNA sequence. Hidden Markov models are more realistic than Markov models since they allow for the identification of heterogeneous regions of a DNA sequence. In this study we present an application of hidden Markov models to a subsequence of the Xylella fastidiosa DNA data. We found that a three-state model provides a good description for the data considered.

  9. Combination of Markov chain and optimal control solved by Pontryagin’s Minimum Principle for a fuel cell/supercapacitor vehicle

    International Nuclear Information System (INIS)

    Hemi, Hanane; Ghouili, Jamel; Cheriti, Ahmed

    2015-01-01

    Highlights: • A combination of a Markov chain and an optimal control solved by Pontryagin's Minimum Principle is presented. • This strategy is applied to a hybrid electric vehicle dynamic model. • The hydrogen consumption is analyzed for two different vehicle masses and drive cycles. • The supercapacitor and fuel cell behavior is analyzed at high or sudden required power. - Abstract: In this article, a real-time optimal control strategy based on Pontryagin's Minimum Principle (PMP) combined with the Markov chain approach is used for a fuel cell/supercapacitor electric vehicle. In real time, at high power and at high speed, two phenomena are observed. The first occurs at high required power, and the second at sudden power demand. To handle these situations, a Markov chain model is proposed to predict the future power demand during a driving cycle. The optimal control problem is formulated as an equivalent consumption minimization strategy (ECMS), which is solved using Pontryagin's Minimum Principle. A Markov chain model is added as a separate block for prediction of the required power. This approach and the whole system are modeled and implemented using MATLAB/Simulink. The model without the Markov chain block and the model with it are compared. The results presented demonstrate the importance of a Markov chain block added to a model

  10. Markov Chain Analysis of Musical Dice Games

    Science.gov (United States)

    Volchenkov, D.; Dawin, J. R.

    2012-07-01

    A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on classical compositions. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and to characterize a composer.
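
    First passage times of an ergodic Markov chain are computable in closed form: the mean time m_ij to first reach state j from state i satisfies m_ij = 1 + Σ_{k≠j} P_ik m_kj, one linear system per target state. A small illustration with an assumed transition matrix over three "notes":

    ```python
    import numpy as np

    # Assumed 3-note transition matrix (rows sum to 1), purely illustrative.
    P = np.array([[0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4],
                  [0.5, 0.3, 0.2]])

    def mean_first_passage(P, target):
        """Mean steps to first hit `target` from every other state.
        Solves (I - Q) m = 1, where Q drops the target row/column."""
        others = [i for i in range(len(P)) if i != target]
        Q = P[np.ix_(others, others)]
        m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
        return dict(zip(others, np.round(m, 2)))

    for note in range(3):
        print(f"mean first passage to note {note}: {mean_first_passage(P, note)}")
    ```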

  11. Markov decision processes in artificial intelligence

    CERN Document Server

    Sigaud, Olivier

    2013-01-01

    Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr

  12. Prediction of Annual Rainfall Pattern Using Hidden Markov Model ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Hidden Markov model is very influential in stochastic world because of its ... the earth from the clouds. The usual ... Rainfall modelling and ... Markov Models have become popular tools ... environment sciences, University of Jos, plateau state,.

  13. A fast exact simulation method for a class of Markov jump processes.

    Science.gov (United States)

    Li, Yao; Hu, Lili

    2015-11-14

    A new stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulation of a class of Markov jump processes is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all event times covered by a time step of length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large-scale problems.
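
    For context, the baseline such methods improve on is Gillespie's direct method: sample the time to the next event from the total rate, then pick which exponential clock fired in proportion to its rate. A minimal sketch for a birth-death process (the rates are arbitrary choices for illustration, and this is the standard direct method, not the HLM):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def gillespie_birth_death(x0, birth, death, t_end):
        """Gillespie's direct method for a birth-death Markov jump process."""
        t, x, path = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            rates = np.array([birth, death * x])   # exponential clock rates
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)      # time to next event
            # choose which clock fired, proportional to its rate
            x += 1 if rng.random() < rates[0] / total else -1
            path.append((t, x))
        return path

    path = gillespie_birth_death(x0=10, birth=2.0, death=0.1, t_end=100.0)
    print(f"{len(path)} events; final state x = {path[-1][1]}")
    ```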

  14. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.

  15. Fraud detection in business processes using the hidden Markov model method

    Directory of Open Access Journals (Sweden)

    Andrean Hutama Koosasi

    2017-03-01

    Full Text Available The Hidden Markov Model is a statistical method based on the simple Markov model that models a system by dividing it into two kinds of states: hidden states and observation states. In this final project, the author proposes using the Hidden Markov Model method to detect fraud in the execution of a business process. With this method, observing the elements that make up a case/event, namely its constituent activities, yields a probability value that also serves as a prediction of whether or not the case/event is fraudulent. Experimental results show that the proposed method delivers final predictions with an evaluated TPR of 87.5% and a TNR of 99.4%.

  16. Using Markov Chains and Multi-Objective Optimization for Energy-Efficient Context Recognition

    Directory of Open Access Journals (Sweden)

    Vito Janko

    2017-12-01

    Full Text Available The recognition of the user’s context with wearable sensing systems is a common problem in ubiquitous computing. However, the typically small battery of such systems often makes continuous recognition impractical. The strain on the battery can be reduced if the sensor setting is adapted to each context. We propose a method that efficiently finds near-optimal sensor settings for each context. It uses Markov chains to simulate the behavior of the system in different configurations and the multi-objective genetic algorithm to find a set of good non-dominated configurations. The method was evaluated on three real-life datasets and found good trade-offs between the system’s energy expenditure and the system’s accuracy. One of the solutions, for example, consumed five-times less energy than the default one, while sacrificing only two percentage points of accuracy.

  17. Using Markov Chains and Multi-Objective Optimization for Energy-Efficient Context Recognition.

    Science.gov (United States)

    Janko, Vito; Luštrek, Mitja

    2017-12-29

    The recognition of the user's context with wearable sensing systems is a common problem in ubiquitous computing. However, the typically small battery of such systems often makes continuous recognition impractical. The strain on the battery can be reduced if the sensor setting is adapted to each context. We propose a method that efficiently finds near-optimal sensor settings for each context. It uses Markov chains to simulate the behavior of the system in different configurations and the multi-objective genetic algorithm to find a set of good non-dominated configurations. The method was evaluated on three real-life datasets and found good trade-offs between the system's energy expenditure and the system's accuracy. One of the solutions, for example, consumed five-times less energy than the default one, while sacrificing only two percentage points of accuracy.

  18. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    International Nuclear Information System (INIS)

    Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard

    2014-01-01

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space
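
    The Markov state model step, once each trajectory frame carries a cluster label, amounts to counting transitions at a chosen lag time and row-normalizing. A minimal sketch of that step follows; the labels here are synthetic stand-ins, and real work would also symmetrize counts and test for Markovianity.

    ```python
    import numpy as np

    def markov_state_model(labels, n_states, lag=1):
        """Estimate a discrete-time MSM transition matrix from cluster labels."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(labels[:-lag], labels[lag:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(3)
    labels = rng.integers(0, 3, size=5000)   # synthetic stand-in for cluster labels
    T = markov_state_model(labels, n_states=3, lag=10)
    print(np.round(T, 2))                    # row-stochastic transition matrix
    ```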

  19. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    Energy Technology Data Exchange (ETDEWEB)

    Nedialkova, Lilia V.; Amat, Miguel A. [Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08544 (United States); Kevrekidis, Ioannis G., E-mail: yannis@princeton.edu, E-mail: gerhard.hummer@biophys.mpg.de [Department of Chemical and Biological Engineering and Program in Applied and Computational Mathematics, Princeton University, Princeton, New Jersey 08544 (United States); Hummer, Gerhard, E-mail: yannis@princeton.edu, E-mail: gerhard.hummer@biophys.mpg.de [Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Max-von-Laue-Str. 3, 60438 Frankfurt am Main (Germany)

    2014-09-21

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.

  20. Observer-Based Controller Design for a Class of Nonlinear Networked Control Systems with Random Time-Delays Modeled by Markov Chains

    Directory of Open Access Journals (Sweden)

    Yanfeng Wang

    2017-01-01

    Full Text Available This paper investigates the observer-based controller design problem for a class of nonlinear networked control systems with random time-delays. The nonlinearity is assumed to satisfy a global Lipschitz condition, and two dependent Markov chains are employed to describe the time-delay from sensor to controller (S-C delay) and the time-delay from controller to actuator (C-A delay), respectively. The transition probabilities of the S-C delay and C-A delay are both assumed to be partly inaccessible. Sufficient conditions on the stochastic stability of the closed-loop systems are obtained by constructing a proper Lyapunov functional. Methods for calculating the controller and observer gain matrices are also given. Two numerical examples are used to illustrate the effectiveness of the proposed method.

  1. A Bayesian model for binary Markov chains

    Directory of Open Access Journals (Sweden)

    Belkheir Essebbar

    2004-02-01

    Full Text Available This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on the Jeffreys' prior which allows for transition probabilities to be correlated. The Bayesian estimator is approximated by means of Monte Carlo Markov chain (MCMC techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
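
    For a single homogeneous binary chain the computation is conjugate, so the flavor of the approach can be shown without MCMC: with a Jeffreys Beta(1/2, 1/2) prior on each row of the transition matrix, the posterior of each transition probability is Beta in the transition counts. The paper's correlated, multi-individual model does require MCMC; the sketch below is only the single-chain special case.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulate a binary Markov chain with true P(1|0)=0.3, P(1|1)=0.8.
    p01, p11 = 0.3, 0.8
    x = [0]
    for _ in range(2000):
        p = p11 if x[-1] == 1 else p01
        x.append(int(rng.random() < p))

    # Transition counts n[a, b] = #{t : x_t = a, x_{t+1} = b}.
    n = np.zeros((2, 2))
    for a, b in zip(x[:-1], x[1:]):
        n[a, b] += 1

    # Jeffreys prior Beta(1/2, 1/2) per row => posterior Beta(n[a,1]+1/2, n[a,0]+1/2).
    for a in range(2):
        post_mean = (n[a, 1] + 0.5) / (n[a].sum() + 1.0)
        print(f"posterior mean of P(1|{a}) = {post_mean:.3f}")
    ```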

  2. Transition Effect Matrices and Quantum Markov Chains

    Science.gov (United States)

    Gudder, Stan

    2009-06-01

    A transition effect matrix (TEM) is a quantum generalization of a classical stochastic matrix. By employing a TEM we obtain a quantum generalization of a classical Markov chain. We first discuss state and operator dynamics for a quantum Markov chain. We then consider various types of TEMs and vector states. In particular, we study invariant, equilibrium and singular vector states and investigate projective, bistochastic, invertible and unitary TEMs.

  3. SU-E-J-115: Using Markov Chain Modeling to Elucidate Patterns in Breast Cancer Metastasis Over Time and Space

    Energy Technology Data Exchange (ETDEWEB)

    Comen, E; Mason, J; Kuhn, P [The Scripps Research Institute, La Jolla, CA (United States); Nieva, J [Billings Clinic, Billings, Montana (United States); Newton, P [University of Southern California, Los Angeles, CA (United States); Norton, L; Venkatappa, N; Jochelson, M [Memorial Sloan-Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: Traditionally, breast cancer metastasis is described as a process wherein cancer cells spread from the breast to multiple organ systems via hematogenous and lymphatic routes. Mapping organ specific patterns of cancer spread over time is essential to understanding metastatic progression. In order to better predict sites of metastases, here we demonstrate modeling of the patterned migration of metastasis. Methods: We reviewed the clinical history of 453 breast cancer patients from Memorial Sloan Kettering Cancer Center who were non-metastatic at diagnosis but developed metastasis over time. We used the variables of organ site of metastases as well as time to create a Markov chain model of metastasis. We illustrate the probabilities of metastasis occurring at a given anatomic site together with the probability of spread to additional sites. Results: Based on the clinical histories of 453 breast cancer patients who developed metastasis, we have learned (i) how to create the Markov transition matrix governing the probabilities of cancer progression from site to site; (ii) how to create a systemic network diagram governing disease progression modeled as a random walk on a directed graph; (iii) how to classify metastatic sites as ‘sponges’ that tend to only receive cancer cells or ‘spreaders’ that receive and release them; (iv) how to model the time-scales of disease progression as a Weibull probability distribution function; (v) how to perform Monte Carlo simulations of disease progression; and (vi) how to interpret disease progression as an entropy-increasing stochastic process. Conclusion: Based on our modeling, metastatic spread may follow predictable pathways. Mapping metastasis not simply by organ site, but by function as either a ‘spreader’ or ‘sponge’ fundamentally reframes our understanding of metastatic processes. This model serves as a novel platform from which we may integrate the evolving genomic landscape that drives cancer
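
    Step (i), building the transition matrix from patient histories, and step (iii), the sponge/spreader classification, can be illustrated with a toy computation. The sites, the sequences, and the in/out-flow criterion below are invented for illustration only, not the study's data or definitions.

    ```python
    import numpy as np

    SITES = ["breast", "bone", "lung", "liver"]
    IDX = {s: i for i, s in enumerate(SITES)}

    # Invented toy "patient histories": ordered sites of progression.
    histories = [["breast", "bone", "lung"],
                 ["breast", "lung", "liver"],
                 ["breast", "bone", "liver"],
                 ["breast", "bone"]]

    counts = np.zeros((len(SITES), len(SITES)))
    for h in histories:
        for a, b in zip(h[:-1], h[1:]):
            counts[IDX[a], IDX[b]] += 1
    T = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    print(np.round(T, 2))   # row-stochastic site-to-site transition matrix

    # Toy sponge/spreader criterion: compare total outflow vs. inflow counts.
    for i, site in enumerate(SITES):
        out_flow, in_flow = counts[i].sum(), counts[:, i].sum()
        label = "spreader" if out_flow > in_flow else "sponge"
        print(f"{site:7s} out={out_flow:.0f} in={in_flow:.0f} -> {label}")
    ```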

  4. SU-E-J-115: Using Markov Chain Modeling to Elucidate Patterns in Breast Cancer Metastasis Over Time and Space

    International Nuclear Information System (INIS)

    Comen, E; Mason, J; Kuhn, P; Nieva, J; Newton, P; Norton, L; Venkatappa, N; Jochelson, M

    2014-01-01

    Purpose: Traditionally, breast cancer metastasis is described as a process wherein cancer cells spread from the breast to multiple organ systems via hematogenous and lymphatic routes. Mapping organ specific patterns of cancer spread over time is essential to understanding metastatic progression. In order to better predict sites of metastases, here we demonstrate modeling of the patterned migration of metastasis. Methods: We reviewed the clinical history of 453 breast cancer patients from Memorial Sloan Kettering Cancer Center who were non-metastatic at diagnosis but developed metastasis over time. We used the variables of organ site of metastases as well as time to create a Markov chain model of metastasis. We illustrate the probabilities of metastasis occurring at a given anatomic site together with the probability of spread to additional sites. Results: Based on the clinical histories of 453 breast cancer patients who developed metastasis, we have learned (i) how to create the Markov transition matrix governing the probabilities of cancer progression from site to site; (ii) how to create a systemic network diagram governing disease progression modeled as a random walk on a directed graph; (iii) how to classify metastatic sites as ‘sponges’ that tend to only receive cancer cells or ‘spreaders’ that receive and release them; (iv) how to model the time-scales of disease progression as a Weibull probability distribution function; (v) how to perform Monte Carlo simulations of disease progression; and (vi) how to interpret disease progression as an entropy-increasing stochastic process. Conclusion: Based on our modeling, metastatic spread may follow predictable pathways. Mapping metastasis not simply by organ site, but by function as either a ‘spreader’ or ‘sponge’ fundamentally reframes our understanding of metastatic processes. This model serves as a novel platform from which we may integrate the evolving genomic landscape that drives cancer

  5. Pathwise duals of monotone and additive Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    -, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf

  6. MARKOV CHAIN MODELING OF PERFORMANCE DEGRADATION OF PHOTOVOLTAIC SYSTEM

    OpenAIRE

    E. Suresh Kumar; Asis Sarkar; Dhiren kumar Behera

    2012-01-01

    Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, in a sequence of chance experiments, all of the past outcomes could influence the predictions for the next experiment. In a Markov chain, the outcome of a given experiment can affect the outcome of the next experiment. The system state changes with time and the state X and time t are two random variables. Each of these variab...

  7. Estimating the Probability of a Rare Event Over a Finite Time Horizon

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; L'Ecuyer, Pierre; Rubino, Gerardo; Tuffin, Bruno

    2007-01-01

    We study an approximation for the zero-variance change of measure to estimate the probability of a rare event in a continuous-time Markov chain. The rare event occurs when the chain reaches a given set of states before some fixed time limit. The jump rates of the chain are expressed as functions of
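
    Before any change of measure, the quantity itself, the probability that a CTMC hits a set B before time T, can be estimated by brute-force simulation, which is exactly what becomes infeasible when the event is rare. A naive sketch under assumed jump rates (the chain, B, and T are all invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Assumed CTMC jump rates on states 0..3; the rare set is B = {3}.
    rates = {0: [(1, 1.0)],
             1: [(0, 10.0), (2, 1.0)],
             2: [(1, 10.0), (3, 0.5)],
             3: []}

    def hits_B_before_T(start=0, B=frozenset({3}), T=1.0):
        t, s = 0.0, start
        while s not in B:
            out = rates[s]
            total = sum(r for _, r in out)
            t += rng.exponential(1.0 / total)   # exponential holding time
            if t > T:
                return False
            u = rng.random() * total            # choose next state by rate
            for nxt, r in out:
                u -= r
                if u <= 0:
                    s = nxt
                    break
        return True

    n = 200_000
    est = sum(hits_B_before_T() for _ in range(n)) / n
    print(f"naive estimate of P(hit B before T): {est:.2e}")
    ```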

  8. Narrow Artificial Intelligence with Machine Learning for Real-Time Estimation of a Mobile Agent’s Location Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cédric Beaulac

    2017-01-01

    Full Text Available We propose to use a supervised machine learning technique to track the location of a mobile agent in real time. Hidden Markov Models are used to build artificial intelligence that estimates the unknown position of a mobile target moving in a defined environment. This narrow artificial intelligence performs two distinct tasks. First, it provides real-time estimation of the mobile agent’s position using the forward algorithm. Second, it uses the Baum–Welch algorithm as a statistical learning tool to gain knowledge of the mobile target. Finally, an experimental environment is proposed, namely, a video game that we use to test our artificial intelligence. We present statistical and graphical results to illustrate the efficiency of our method.
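
    The first task, real-time position estimation with the forward algorithm, reduces to one predict-update recursion per observation. The sketch below filters a position on a small 1-D grid; the grid size, transition model, and emission model are invented stand-ins for the game environment.

    ```python
    import numpy as np

    n = 5                                   # 1-D grid of positions (assumption)
    # Transition model: stay or move to an adjacent cell (invented).
    A = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            if 0 <= j < n:
                A[i, j] = 1.0
        A[i] /= A[i].sum()
    # Emission model: a noisy sensor reports a nearby cell (invented).
    B = np.array([[0.7 if i == o else 0.15 if abs(i - o) == 1 else 0.0
                   for o in range(n)] for i in range(n)])
    B /= B.sum(axis=1, keepdims=True)

    def forward_filter(obs):
        """Forward algorithm: P(position_t | obs_1..t), normalized each step."""
        alpha = np.full(n, 1.0 / n)
        for o in obs:
            alpha = (alpha @ A) * B[:, o]   # predict, then update
            alpha /= alpha.sum()
        return alpha

    print(np.round(forward_filter([2, 2, 3, 4, 4]), 3))
    ```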

  9. biomvRhsmm: Genomic Segmentation with Hidden Semi-Markov Model

    Directory of Open Access Journals (Sweden)

    Yang Du

    2014-01-01

    Full Text Available High-throughput technologies like tiling arrays and next-generation sequencing (NGS) generate continuous homogeneous segments or signal peaks in the genome that represent transcripts and transcript variants (transcript mapping and quantification), regions of deletion and amplification (copy number variation), or regions characterized by particular common features like chromatin state or DNA methylation ratio (epigenetic modifications). However, the volume and output of data produced by these technologies present challenges in analysis. Here, a hidden semi-Markov model (HSMM) is implemented and tailored to handle multiple genomic profiles, to better facilitate genome annotation by assisting in the detection of transcripts, regulatory regions, and copy number variation by holistic microarray or NGS. With support for various data distributions, instead of limiting itself to one specific application, the proposed hidden semi-Markov model is designed to allow modeling options to accommodate different types of genomic data and to serve as a general segmentation engine. By incorporating genomic positions into the sojourn distribution of the HSMM, with optional prior learning using annotation or previous studies, the modeling output is more biologically sensible. The proposed model has been compared with several other state-of-the-art segmentation models through simulation benchmarking, which shows that our efficient implementation achieves comparable or better sensitivity and specificity in genomic segmentation.

  10. An introduction to hidden Markov models for biological sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1998-01-01

    A non-mathematical tutorial on hidden Markov models (HMMs) plus a description of one of the applications of HMMs: gene finding.

  11. Portfolio allocation under the vendor managed inventory: A Markov ...

    African Journals Online (AJOL)

    Portfolio allocation under the vendor managed inventory: A Markov decision process. ... Journal of Applied Sciences and Environmental Management ... This study provides a review of Markov decision processes and investigates its suitability for solutions to portfolio allocation problems under vendor managed inventory in ...

  12. The space-time model according to dimensional continuous space-time theory

    International Nuclear Information System (INIS)

    Martini, Luiz Cesar

    2014-01-01

    This article results from the Dimensional Continuous Space-Time Theory, whose introductory theoretical framework was presented in [1]. A theoretical model of the Continuous Space-Time is presented. The wave equation of time in an absolutely stationary empty-space referential is described in detail. The complex time, that is, the time fixed on the infinite-phase-time-speed referential, is deduced from the New View of Relativity Theory, which is being submitted simultaneously with this article to this congress. Finally, considering the inseparable Space-Time, the wave-particle duality equation is presented.

  13. A Cost-Effective Smoothed Multigrid with Modified Neighborhood-Based Aggregation for Markov Chains

    Directory of Open Access Journals (Sweden)

    Zhao-Li Shen

    2015-01-01

    Full Text Available The smoothed aggregation multigrid method is considered for computing stationary distributions of Markov chains. A criterion that determines whether to implement the whole aggregation procedure is proposed. Through this strategy, a large amount of time in the aggregation procedure is saved without affecting the convergence behavior. In addition, we discuss the shortcomings of the neighborhood-based aggregation commonly used in multigrid methods and present a modified version to remedy and improve it. Numerical experiments on some typical Markov chain problems are reported to illustrate the performance of these methods.

  14. Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures

    Science.gov (United States)

    Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain

    2018-02-01

    Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.

  15. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    Science.gov (United States)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as these enable water managers to decide short-term releases (15-30 days), while holding water for seasonal needs (e.g., irrigation and municipal supply) and to meet end-of-the-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at S2S time scale. The hidden Markov model also captures the both spatial and temporal hierarchy in predictors that operate at S2S time scale with model parameters being estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely single site model and multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single site model. For multisite model we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at S2S time scale. We considered precipitation forecasts obtained from NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of reservoirs are also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. Skill of the proposed approach is tested using split sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behavior

  16. A Multi-stage Representation of Cell Proliferation as a Markov Process.

    Science.gov (United States)

    Yates, Christian A; Ford, Matthew J; Mort, Richard L

    2017-12-01

    The stochastic simulation algorithm commonly known as Gillespie's algorithm (originally derived for modelling well-mixed systems of chemical reactions) is now used ubiquitously in the modelling of biological processes in which stochastic effects play an important role. In well-mixed scenarios at the sub-cellular level it is often reasonable to assume that times between successive reaction/interaction events are exponentially distributed and can be appropriately modelled as a Markov process and hence simulated by the Gillespie algorithm. However, Gillespie's algorithm is routinely applied to model biological systems for which it was never intended. In particular, processes in which cell proliferation is important (e.g. embryonic development, cancer formation) should not be simulated naively using the Gillespie algorithm, since the history-dependent nature of the cell cycle breaks the Markov property. The variance in experimentally measured cell cycle times is far less than in an exponential cell cycle time distribution with the same mean. Here we suggest a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm. By breaking the cell cycle into a number of independent exponentially distributed stages, we can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions. The consequences of our revised mathematical model are explored analytically as far as possible. We demonstrate the importance of employing the correct cell cycle time distribution by recapitulating the results from two models incorporating cellular proliferation (one spatial and one non-spatial) and demonstrating that changing the cell cycle time distribution makes quantitative and qualitative differences to the outcome of the models. Our adaptation will allow modellers and experimentalists alike to appropriately represent cellular
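
    The fix described above, splitting the cycle into k exponential stages so that the total cycle time is Erlang(k) distributed, keeps every individual transition memoryless while shrinking the coefficient of variation to 1/sqrt(k). A quick sketch comparing the two distributions (k and the mean are arbitrary choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    mean_cycle, k, n = 20.0, 10, 100_000

    # One exponential stage: memoryless but far too variable.
    one_stage = rng.exponential(mean_cycle, size=n)

    # k exponential stages of mean mean_cycle/k: the total is Erlang(k),
    # still simulable event-by-event with the Gillespie algorithm.
    multi_stage = rng.exponential(mean_cycle / k, size=(n, k)).sum(axis=1)

    for name, x in [("1 stage  ", one_stage), (f"{k} stages", multi_stage)]:
        print(f"{name}: mean={x.mean():5.1f}  CV={x.std() / x.mean():.2f}")
    # CV drops from 1 to 1/sqrt(k) ≈ 0.32, closer to measured cell cycle data.
    ```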

  17. Real-Time Landmine Detection with Ground-Penetrating Radar Using Discriminative and Adaptive Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Ho KC

    2005-01-01

    Full Text Available We propose a real-time software system for landmine detection using ground-penetrating radar (GPR). The system includes an efficient and adaptive preprocessing component; a hidden Markov model (HMM) based detector; a corrective training component; and an incremental update of the background model. The preprocessing is based on frequency-domain processing and performs ground-level alignment and background removal. The HMM detector is an improvement of a previously proposed system (baseline). It includes additional pre- and postprocessing steps to improve the time efficiency and enable real-time application. The corrective training component is used to adjust the initial model parameters to minimize the number of misclassified sequences. This component could be used offline, or online through feedback, to adapt an initial model to specific sites and environments. The background update component adjusts the parameters of the background model to adapt it to each lane during testing. The proposed software system is applied to data acquired from three outdoor test sites at different geographic locations, using a state-of-the-art array GPR prototype. The first collection was used for training, and the other two (containing data from more than 1200 m of simulated dirt and gravel roads) for testing. Our results indicate that, on average, the corrective training can improve the performance by about 10% for each site. For individual lanes, the performance gain can reach 50%.

  18. A Markov Process Inspired Cellular Automata Model of Road Traffic

    OpenAIRE

    Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng

    2008-01-01

    To provide a more accurate description of driving behaviors in vehicle queues, a model named the Markov-Gap cellular automaton model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process gives the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize i...

  19. Assessing type I error and power of multistate Markov models for panel data-A simulation study

    OpenAIRE

    Cassarly, Christy; Martin, Renee’ H.; Chimowitz, Marc; Peña, Edsel A.; Ramakrishnan, Viswanathan; Palesch, Yuko Y.

    2016-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that ...

  20. Optimal Linear Responses for Markov Chains and Stochastically Perturbed Dynamical Systems

    Science.gov (United States)

    Antown, Fadi; Dragičević, Davor; Froyland, Gary

    2018-03-01

    The linear response of a dynamical system refers to changes to properties of the system when small external perturbations are applied. We consider the little-studied question of selecting an optimal perturbation so as to (i) maximise the linear response of the equilibrium distribution of the system, (ii) maximise the linear response of the expectation of a specified observable, and (iii) maximise the linear response of the rate of convergence of the system to the equilibrium distribution. We also consider the inhomogeneous, sequential, or time-dependent situation where the governing dynamics is not stationary and one wishes to select a sequence of small perturbations so as to maximise the overall linear response at some terminal time. We develop the theory for finite-state Markov chains, provide explicit solutions for some illustrative examples, and numerically apply our theory to stochastically perturbed dynamical systems, where the Markov chain is replaced by a matrix representation of an approximate annealed transfer operator for the random dynamical system.
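
    For a finite-state chain the object being optimized can be probed directly: perturb the transition matrix by εP (keeping rows stochastic) and measure the change in the stationary distribution. A finite-difference sketch with an assumed chain and perturbation, not the paper's optimization:

    ```python
    import numpy as np

    def stationary(M):
        """Stationary distribution of a row-stochastic matrix via eigendecomposition."""
        vals, vecs = np.linalg.eig(M.T)
        pi = np.real(vecs[:, np.argmax(np.real(vals))])
        return pi / pi.sum()

    M = np.array([[0.9, 0.1, 0.0],      # assumed base chain
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])
    P = np.array([[-1.0, 1.0, 0.0],     # assumed perturbation; rows sum to 0
                  [ 0.0, 0.0, 0.0],
                  [ 0.0, 0.0, 0.0]])

    eps = 1e-6
    response = (stationary(M + eps * P) - stationary(M)) / eps
    print("linear response of the stationary distribution:", np.round(response, 3))
    ```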

  1. Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rissanen, Jorma

    1996-01-01

    Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...

  2. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior

  3. Cost Effective Community Based Dementia Screening: A Markov Model Simulation

    Directory of Open Access Journals (Sweden)

    Erin Saito

    2014-01-01

    Full Text Available Background. Given the dementia epidemic and the increasing cost of healthcare, there is a need to assess the economic benefit of community based dementia screening programs. Materials and Methods. Markov model simulations were generated using data obtained from a community based dementia screening program over a one-year period. The models simulated yearly costs of caring for patients based on clinical transitions beginning in pre dementia and extending for 10 years. Results. A total of 93 individuals (74 female, 19 male were screened for dementia and 12 meeting clinical criteria for either mild cognitive impairment (n=7 or dementia (n=5 were identified. Assuming early therapeutic intervention beginning during the year of dementia detection, Markov model simulations demonstrated 9.8% reduction in cost of dementia care over a ten-year simulation period, primarily through increased duration in mild stages and reduced time in more costly moderate and severe stages. Discussion. Community based dementia screening can reduce healthcare costs associated with caring for demented individuals through earlier detection and treatment, resulting in proportionately reduced time in more costly advanced stages.

  4. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    Science.gov (United States)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, the law of lumped Markov chains depends linearly on the minimum degree of polynomials over field GF(q). The method allows constructing the realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.

  5. Efficient Incorporation of Markov Random Fields in Change Detection

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Nielsen, Allan Aasbjerg; Carstensen, Jens Michael

    2009-01-01

    of noise, implying that the pixel-wise classifier is also noisy. There is thus a need for incorporating local homogeneity constraints into such a change detection framework. For this modelling task Markov Random Fields are suitable. Markov Random Fields have, however, previously been plagued by lack...

  6. First hitting probabilities for semi markov chains and estimation

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos

    2017-01-01

    We first consider a stochastic system described by an absorbing semi-Markov chain with finite state space and we introduce the absorption probability to a class of recurrent states. Afterwards, we study the first hitting probability to a subset of states for an irreducible semi-Markov chain...

  7. On the impact of information delay on location-based relaying: a markov modeling approach

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2012-01-01

    For centralized selection of communication relays, the necessary decision information needs to be collected from the mobile nodes by the access point (centralized decision point). In mobile scenarios, the required information collection and forwarding delays will affect the reliability of the collected information and hence will influence the performance of the relay selection method. This paper analyzes this influence in the decision process for the example of a mobile location-based relay selection approach using a continuous time Markov chain model. The model is used to obtain optimal relay...

  8. Using Markov Chains and Multi-Objective Optimization for Energy-Efficient Context Recognition †

    Science.gov (United States)

    Janko, Vito

    2017-01-01

    The recognition of the user’s context with wearable sensing systems is a common problem in ubiquitous computing. However, the typically small battery of such systems often makes continuous recognition impractical. The strain on the battery can be reduced if the sensor setting is adapted to each context. We propose a method that efficiently finds near-optimal sensor settings for each context. It uses Markov chains to simulate the behavior of the system in different configurations and the multi-objective genetic algorithm to find a set of good non-dominated configurations. The method was evaluated on three real-life datasets and found good trade-offs between the system’s energy expenditure and the system’s accuracy. One of the solutions, for example, consumed five-times less energy than the default one, while sacrificing only two percentage points of accuracy. PMID:29286301

  9. Subharmonic projections for a quantum Markov semigroup

    International Nuclear Information System (INIS)

    Fagnola, Franco; Rebolledo, Rolando

    2002-01-01

    This article introduces a concept of subharmonic projections for a quantum Markov semigroup, in view of characterizing the support projection of a stationary state in terms of the semigroup generator. These results, together with those of our previous article [J. Math. Phys. 42, 1296 (2001)], lead to a method for proving the existence of faithful stationary states. This is often crucial in the analysis of ergodic properties of quantum Markov semigroups. The method is illustrated by applications to physical models

  10. A Novel Grey Prediction Model Combining Markov Chain with Functional-Link Net and Its Application to Foreign Tourist Forecasting

    Directory of Open Access Journals (Sweden)

    Yi-Chung Hu

    2017-10-01

    Full Text Available Grey prediction models for time series have been widely applied to demand forecasting because only limited data are required for them to build a time series model without any statistical assumptions. Previous studies have demonstrated that the combination of grey prediction with neural networks helps grey prediction perform better. Some methods have been presented to improve the prediction accuracy of the popular GM(1,1) model by using the Markov chain to estimate the residual needed to modify a predicted value. Compared to the previous Grey-Markov models, this study contributes by applying the functional-link net to estimate the degree to which a predicted value obtained from the GM(1,1) model can be adjusted. Furthermore, the troublesome number of states and their bounds, which are not easily specified in a Markov chain, are determined by a genetic algorithm. To verify prediction performance, the proposed grey prediction model was applied to an important grey system problem—foreign tourist forecasting. Experimental results show that the proposed model provides satisfactory results compared to the other Grey-Markov models considered.
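
    The base model being improved, GM(1,1), is compact enough to show in full: accumulate the series, fit the grey equation x⁰(k) = -a z¹(k) + b by least squares, and difference the fitted accumulation back. The sketch below is the textbook GM(1,1) only; the paper's Markov-chain residual correction and functional-link net are not included, and the input series is made up.

    ```python
    import numpy as np

    def gm11_forecast(x0, horizon=3):
        """Textbook GM(1,1): fit on series x0, return fitted + forecast values."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                # accumulated series (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # fit x0(k) = -a*z1(k) + b
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat, prepend=0.0)               # inverse AGO

    tourists = [100, 108, 118, 127, 139]   # made-up annual series
    print(np.round(gm11_forecast(tourists), 1))
    ```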

  11. Logics and Models for Stochastic Analysis Beyond Markov Chains

    DEFF Research Database (Denmark)

    Zeng, Kebin

    , because of the generality of ME distributions, we have to leave the world of Markov chains. To support ME distributions with multiple exits, we introduce a multi-exits ME distribution together with a process algebra MEME to express the systems having the semantics as Markov renewal processes with ME...

  12. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  13. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  14. Efficient tests for equivalence of hidden Markov processes and quantum random walks

    NARCIS (Netherlands)

    U. Faigle; A. Schönhuth (Alexander)

    2011-01-01

    While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine equivalence of two HMP parametrizations

  15. A GM (1, 1) Markov Chain-Based Aeroengine Performance Degradation Forecast Approach Using Exhaust Gas Temperature

    OpenAIRE

    Zhao, Ning-bo; Yang, Jia-long; Li, Shu-ying; Sun, Yue-wu

    2014-01-01

    Performance degradation forecast technology for quantitatively assessing degradation states of aeroengine using exhaust gas temperature is an important technology in the aeroengine health management. In this paper, a GM (1, 1) Markov chain-based approach is introduced to forecast exhaust gas temperature by taking the advantages of GM (1, 1) model in time series and the advantages of Markov chain model in dealing with highly nonlinear and stochastic data caused by uncertain factors. In this ap...

  16. Mixed Vehicle Flow At Signalized Intersection: Markov Chain Analysis

    Directory of Open Access Journals (Sweden)

    Gertsbakh Ilya B.

    2015-09-01

    Full Text Available We assume that a Poisson flow of vehicles arrives at an isolated signalized intersection, and each vehicle, independently of the others, represents a random number X of passenger car units (PCUs). We analyze numerically the stationary distribution of the queue process {Zn}, where Zn is the number of PCUs in the queue at the beginning of the n-th red phase, n → ∞. We approximate the number Yn of PCUs arriving during one red-green cycle by a two-parameter Negative Binomial Distribution (NBD). It is well known that {Zn} follows an infinite-state Markov chain; we approximate its stationary distribution using a finite-state Markov chain. We show numerically that there is a strong dependence of the mean queue length E[Zn] in equilibrium on the input distribution of Yn and, in particular, on the "over-dispersion" parameter γ = Var[Yn]/E[Yn]. For Poisson input, γ = 1; γ > 1 indicates the presence of heavy-tailed input. In practice this means that a relatively large "portion" of PCUs, considerably exceeding the average, may arrive with high probability during one red-green cycle. Empirical formulas are presented for accurate estimation of the mean queue length as a function of the load and the γ of the input flow. Using the Markov chain technique, we analyze the mean "virtual" delay time for a car that always arrives at the beginning of the red phase.
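
    As an illustration of the finite-state approximation described above, the following Python sketch builds the queue chain for NBD cycle input and solves for its stationary distribution; the fixed discharge of s PCUs per green phase, the truncation level K and all parameter values are assumptions made for the example, not taken from the paper.

        import numpy as np
        from scipy.stats import nbinom

        def stationary_queue(r, p, s, K):
            """Stationary law of Z_{n+1} = max(Z_n + Y_n - s, 0), Y_n ~ NBD(r, p),
            truncated to the states 0..K."""
            y = nbinom.pmf(np.arange(0, K + s + 1), r, p)
            P = np.zeros((K + 1, K + 1))
            for i in range(K + 1):
                for k, pk in enumerate(y):
                    j = min(max(i + k - s, 0), K)    # reflect at 0, lump the tail at K
                    P[i, j] += pk
                P[i] /= P[i].sum()                   # renormalise the truncated row
            # Solve pi P = pi together with sum(pi) = 1.
            A = np.vstack([P.T - np.eye(K + 1), np.ones(K + 1)])
            b = np.concatenate([np.zeros(K + 1), [1.0]])
            pi = np.linalg.lstsq(A, b, rcond=None)[0]
            return pi

        pi = stationary_queue(r=4, p=0.5, s=5, K=60)  # here gamma = Var/E = 1/p = 2
        print("mean queue length:", (np.arange(61) * pi).sum())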

  17. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts

    Science.gov (United States)

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
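
    A flavour of the state-removal idea, under the simplifying assumption of a finite chain: the standard stochastic-complement step below eliminates one fast state from a CTMC generator by rerouting its flow through its exit distribution. The paper's infinite-state, two-time-scale construction is more general than this sketch.

        import numpy as np

        def eliminate_state(Q, s):
            """Remove state s from generator Q by redistributing its transitions."""
            keep = [i for i in range(Q.shape[0]) if i != s]
            Qr = Q[np.ix_(keep, keep)].astype(float)
            exit_rates = Q[s, keep]
            total = -Q[s, s]                       # leaving rate of the fast state
            for a, i in enumerate(keep):
                Qr[a, :] += Q[i, s] * exit_rates / total  # reroute i -> s -> j flow
            np.fill_diagonal(Qr, 0.0)
            np.fill_diagonal(Qr, -Qr.sum(axis=1))  # restore generator row sums of zero
            return Qr

        Q = np.array([[-1.0, 1.0, 0.0],
                      [100.0, -200.0, 100.0],      # state 1 has a fast leaving rate
                      [0.0, 2.0, -2.0]])
        print(eliminate_state(Q, 1))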

  18. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.

  19. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  20. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  1. Automated generation of partial Markov chain from high level descriptions

    International Nuclear Information System (INIS)

    Brameret, P.-A.; Rauzy, A.; Roussel, J.-M.

    2015-01-01

    We propose an algorithm to generate partial Markov chains from high-level implicit descriptions, namely AltaRica models. This algorithm relies on two components. First, a variation on Dijkstra's algorithm to compute shortest paths in a graph. Second, the definition of a notion of distance to select which states must be kept and which can be safely discarded. The proposed method solves two problems at once. First, it avoids the manual construction of Markov chains, which is both tedious and error-prone. Second, at the price of acceptable approximations, it makes it possible to dramatically push back the exponential blow-up of the size of the resulting chains. We report experimental results that show the efficiency of the proposed approach. - Highlights: • We generate Markov chains from a higher-level safety modeling language (AltaRica). • We use a variation on Dijkstra's algorithm to generate partial Markov chains. • This solves the tedious and error-prone manual construction of Markov chains. • It also curbs the blow-up of the size of the chains, at the cost of acceptable approximations. • The experimental results highlight the efficiency of the method.
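
    The following Python sketch illustrates the Dijkstra-plus-distance-bound idea in its simplest form; the successors function standing in for the high-level model's semantics, the use of -log(probability) as edge length, and the toy example are all assumptions made for illustration, not the paper's algorithm.

        import heapq
        import math

        def partial_chain(initial, successors, bound):
            """Keep states whose cheapest path from `initial` costs at most `bound`,
            with -log(prob) as the per-transition cost."""
            dist = {initial: 0.0}
            heap = [(0.0, initial)]
            while heap:
                d, state = heapq.heappop(heap)
                if d > dist[state]:
                    continue                       # stale heap entry
                for nxt, prob in successors(state):
                    nd = d - math.log(prob)        # rarer paths are "farther away"
                    if nd <= bound and nd < dist.get(nxt, math.inf):
                        dist[nxt] = nd
                        heapq.heappush(heap, (nd, nxt))
            kept = [(s, t, p) for s in dist for t, p in successors(s) if t in dist]
            return dist, kept

        # Toy model: a counter that usually increments, with rare large "failure" jumps.
        def successors(i):
            return [(i + 1, 0.99), (i + 10, 0.01)]

        states, transitions = partial_chain(0, successors, bound=5.0)
        print(len(states), "states kept")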

  2. A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Ebenezer Out-Nyarko

    2009-11-01

    Full Text Available Using Hidden Markov Models (HMMs as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extendibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species and HMM network topologies suited to each task. This experimental work includes a simple call type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress stimulus differentiation task in poultry vocalizations using a non-sequential model via a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks.

  3. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric, which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way of incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400-500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden.

  4. Discounted semi-Markov decision processes : linear programming and policy iteration

    NARCIS (Netherlands)

    Wessels, J.; van Nunen, J.A.E.E.

    1975-01-01

    For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal

  5. Discounted semi-Markov decision processes : linear programming and policy iteration

    NARCIS (Netherlands)

    Wessels, J.; van Nunen, J.A.E.E.

    1974-01-01

    For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal

  6. Compositionality for Markov reward chains with fast and silent transitions

    NARCIS (Netherlands)

    Markovski, J.; Sokolova, A.; Trcka, N.; Vink, de E.P.

    2009-01-01

    A parallel composition is defined for Markov reward chains with stochastic discontinuity, and with fast and silent transitions. In this setting, compositionality with respect to the relevant aggregation preorders is established. For Markov reward chains with fast transitions the preorders are

  7. Detecting Structural Breaks using Hidden Markov Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov chain switches from one state to another. The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), which permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...

  8. Bayesian inference for hybrid discrete-continuous stochastic kinetic models

    International Nuclear Information System (INIS)

    Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S

    2014-01-01

    We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either 'fast' or 'slow', with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through an MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model.
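
    For context, a minimal Gillespie simulator of the exact Markov jump process that the hybrid scheme approximates; the birth-death reaction network and rate values are made up, and none of the LNA or particle MCMC machinery of the paper is reproduced here.

        import numpy as np

        def gillespie(x0, stoich, hazards, t_end, seed=0):
            """Exact simulation of a Markov jump process (stochastic kinetic model)."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, dtype=float)
            path = [(t, x.copy())]
            while t < t_end:
                h = hazards(x)
                h0 = h.sum()
                if h0 == 0:
                    break                          # no reaction can fire
                t += rng.exponential(1.0 / h0)     # waiting time to the next event
                j = rng.choice(len(h), p=h / h0)   # which reaction fires
                x += stoich[j]
                path.append((t, x.copy()))
            return path

        # Birth-death example: 0 -> X at rate 10, X -> 0 at rate 0.5 * x.
        stoich = np.array([[+1], [-1]])
        hazards = lambda x: np.array([10.0, 0.5 * x[0]])
        print(len(gillespie([0], stoich, hazards, t_end=50.0)), "events")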

  9. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034

  10. FuzzyStatProb: An R Package for the Estimation of Fuzzy Stationary Probabilities from a Sequence of Observations of an Unknown Markov Chain

    Directory of Open Access Journals (Sweden)

    Pablo J. Villacorta

    2016-07-01

    Full Text Available Markov chains are well-established probabilistic models of a wide variety of real systems that evolve over time. Countless examples of applications of Markov chains that successfully capture the probabilistic nature of real problems include areas as diverse as biology, medicine, social science, and engineering. One interesting feature which characterizes certain kinds of Markov chains is their stationary distribution, which gives the long-run fraction of time the system spends in each state. The computation of the stationary distribution requires precise knowledge of the transition probabilities. When the only information available is a sequence of observations drawn from the system, such probabilities have to be estimated. Here we review an existing method to estimate fuzzy transition probabilities from observations and, with them, obtain the fuzzy stationary distribution of the resulting fuzzy Markov chain. The method also works when the user directly provides fuzzy transition probabilities. We provide an implementation in the R environment that is the first available to the community and serves as a proof of concept. We demonstrate the usefulness of our proposal with computational experiments on a toy problem, namely a time-homogeneous Markov chain that guides the randomized movement of an autonomous robot patrolling a small area.
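
    A crisp (non-fuzzy) Python counterpart of the package's pipeline, estimating transition probabilities from one observed trajectory and then solving for the stationary distribution; this sketch does not reproduce the fuzzy machinery or the FuzzyStatProb API.

        import numpy as np

        def estimate_stationary(seq, n_states):
            counts = np.zeros((n_states, n_states))
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
            P = counts / counts.sum(axis=1, keepdims=True)   # row-wise MLE
            # Solve pi P = pi together with sum(pi) = 1.
            A = np.vstack([P.T - np.eye(n_states), np.ones(n_states)])
            rhs = np.concatenate([np.zeros(n_states), [1.0]])
            pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
            return P, pi

        rng = np.random.default_rng(1)
        true_P = np.array([[0.9, 0.1], [0.4, 0.6]])
        seq = [0]
        for _ in range(5000):
            seq.append(rng.choice(2, p=true_P[seq[-1]]))
        P_hat, pi_hat = estimate_stationary(seq, 2)
        print(pi_hat)   # close to the true stationary distribution (0.8, 0.2)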

  11. Continuous Time Structural Equation Modeling with R Package ctsem

    Directory of Open Access Journals (Sweden)

    Charles C. Driver

    2017-04-01

    Full Text Available We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals; therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.
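
    The key continuous-time identity such models rest on can be sketched in a few lines: for a linear drift matrix A, the discrete-time autoregressive effect over an interval dt is the matrix exponential expm(A*dt), so unequal intervals imply interval-specific parameters. The drift values below are illustrative, and this Python sketch is not the ctsem API (which is R).

        import numpy as np
        from scipy.linalg import expm

        # Drift matrix of two coupled latent processes (illustrative values).
        A = np.array([[-0.4,  0.2],
                      [ 0.1, -0.3]])

        # For a linear continuous-time model dx = A x dt (+ diffusion), the exact
        # discrete-time autoregressive matrix over an interval dt is expm(A * dt),
        # so each distinct measurement interval gets its own implied parameters.
        for dt in (0.5, 1.0, 2.7):
            print(dt)
            print(expm(A * dt))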

  12. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Systat Software Asia-Pacific Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes ... In Part 4, we discuss some applications of the Markov ... one can construct the joint probability distribution of.

  13. Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing

    International Nuclear Information System (INIS)

    Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria

    2014-01-01

    In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. To remedy this, we propose an extension of the concept of WSM signals to a widely linear (WL) setting and study new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forward and backward directions of time. Thus, we provide forward and backward Markovian representations for WLM signals. Finally, different recursive estimation algorithms are obtained for these models

  14. Pseudo-Hermitian continuous-time quantum walks

    Energy Technology Data Exchange (ETDEWEB)

    Salimi, S; Sorouri, A, E-mail: shsalimi@uok.ac.i, E-mail: a.sorouri@uok.ac.i [Department of Physics, University of Kurdistan, PO Box 66177-15175, Sanandaj (Iran, Islamic Republic of)

    2010-07-09

    In this paper we present a model exhibiting a new type of continuous-time quantum walk (as a quantum-mechanical transport process) on networks, which is described by a non-Hermitian Hamiltonian possessing a real spectrum. We call it a pseudo-Hermitian continuous-time quantum walk. We introduce a method to obtain the probability distribution of the walk on any vertex and then study a specific system. We observe that the probability distribution on certain vertices increases compared to that of the Hermitian case. This formalism makes the transport process faster and can be useful for search algorithms.
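
    For contrast with the pseudo-Hermitian construction, a standard (Hermitian) continuous-time quantum walk on a small cycle graph can be sketched as follows; the graph, evolution time and size are arbitrary choices for illustration.

        import numpy as np
        from scipy.linalg import expm

        n = 8
        H = np.zeros((n, n))
        for i in range(n):                     # graph Laplacian of the n-cycle
            H[i, i] = 2.0
            H[i, (i + 1) % n] = H[i, (i - 1) % n] = -1.0

        psi0 = np.zeros(n, dtype=complex)
        psi0[0] = 1.0                          # walker starts at vertex 0
        psi_t = expm(-1j * H * 2.0) @ psi0     # evolve to time t = 2
        print(np.abs(psi_t) ** 2)              # occupation probability per vertex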

  15. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training sample grows.

  16. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    Science.gov (United States)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.

  17. Markov transitions and the propagation of chaos

    International Nuclear Information System (INIS)

    Gottlieb, A.

    1998-01-01

    The propagation of chaos is a central concept of kinetic theory that serves to relate the equations of Boltzmann and Vlasov to the dynamics of many-particle systems. Propagation of chaos means that molecular chaos, i.e., the stochastic independence of two random particles in a many-particle system, persists in time, as the number of particles tends to infinity. We establish a necessary and sufficient condition for a family of general n-particle Markov processes to propagate chaos. This condition is expressed in terms of the Markov transition functions associated to the n-particle processes, and it amounts to saying that chaos of random initial states propagates if it propagates for pure initial states. Our proof of this result relies on the weak convergence approach to the study of chaos due to Sznitman and Tanaka. We assume that the space in which the particles live is homeomorphic to a complete and separable metric space so that we may invoke Prohorov's theorem in our proof. We also show that, if the particles can be in only finitely many states, then molecular chaos implies that the specific entropies in the n-particle distributions converge to the entropy of the limiting single-particle distribution

  18. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  19. Continuous-time quantum random walks require discrete space

    International Nuclear Information System (INIS)

    Manouchehri, K; Wang, J B

    2007-01-01

    Quantum random walks are shown to have non-intuitive dynamics which makes them an attractive area of study for devising quantum algorithms for long-standing open problems as well as those arising in the field of quantum computing. In the case of continuous-time quantum random walks, such peculiar dynamics can arise from simple evolution operators closely resembling the quantum free-wave propagator. We investigate the divergence of quantum walk dynamics from the free-wave evolution and show that, in order for continuous-time quantum walks to display their characteristic propagation, the state space must be discrete. This behavior rules out many continuous quantum systems as possible candidates for implementing continuous-time quantum random walks

  20. Continuous-time quantum random walks require discrete space

    Science.gov (United States)

    Manouchehri, K.; Wang, J. B.

    2007-11-01

    Quantum random walks are shown to have non-intuitive dynamics which makes them an attractive area of study for devising quantum algorithms for long-standing open problems as well as those arising in the field of quantum computing. In the case of continuous-time quantum random walks, such peculiar dynamics can arise from simple evolution operators closely resembling the quantum free-wave propagator. We investigate the divergence of quantum walk dynamics from the free-wave evolution and show that, in order for continuous-time quantum walks to display their characteristic propagation, the state space must be discrete. This behavior rules out many continuous quantum systems as possible candidates for implementing continuous-time quantum random walks.

  1. Quantum Markov processes and applications in many-body systems

    International Nuclear Information System (INIS)

    Temme, P. K.

    2010-01-01

    This thesis is concerned with the investigation of quantum as well as classical Markov processes and their application in the field of strongly correlated many-body systems. A Markov process is a special kind of stochastic process, which is determined by an evolution that is independent of its history and only depends on the current state of the system. The application of Markov processes has a long history in the field of statistical mechanics and classical many-body theory. Not only are Markov processes used to describe the dynamics of stochastic systems, but they predominantly also serve as a practical method that allows for the computation of fundamental properties of complex many-body systems by means of probabilistic algorithms. The aim of this thesis is to investigate the properties of quantum Markov processes, i.e. Markov processes taking place in a quantum mechanical state space, and to gain a better insight into complex many-body systems by means thereof. Moreover, we formulate a novel quantum algorithm which allows for the computation of the thermal and ground states of quantum many-body systems. After a brief introduction to quantum Markov processes we turn to an investigation of their convergence properties. We find bounds on the convergence rate of the quantum process by generalizing geometric bounds found for classical processes. We generalize a distance measure that serves as the basis for our investigations, the chi-square divergence, to non-commuting probability spaces. This divergence allows for a convenient generalization of the detailed balance condition to quantum processes. We then devise the quantum algorithm that can be seen as the natural generalization of the ubiquitous Metropolis algorithm to simulate quantum many-body Hamiltonians. By this we intend to provide further evidence that a quantum computer can serve as a fully-fledged quantum simulator, which is not only capable of describing the dynamical evolution of quantum systems, but

  2. Monitoring and Modeling of Spatiotemporal Urban Expansion and Land-Use/Land-Cover Change Using Integrated Markov Chain Cellular Automata Model

    Directory of Open Access Journals (Sweden)

    Bhagawat Rimal

    2017-09-01

    Full Text Available Spatial–temporal analysis of land-use/land-cover (LULC change as well as the monitoring and modeling of urban expansion are essential for the planning and management of urban environments. Such environments reflect the economic conditions and quality of life of the individual country. Urbanization is generally influenced by national laws, plans and policies and by power, politics and poor governance in many less-developed countries. Remote sensing tools play a vital role in monitoring LULC change and measuring the rate of urbanization at both the local and global levels. The current study evaluated the LULC changes and urban expansion of Jhapa district of Nepal. The spatial–temporal dynamics of LULC were identified using six time-series atmospherically-corrected surface reflectance Landsat images from 1989 to 2016. A hybrid cellular automata Markov chain (CA–Markov model was used to simulate future urbanization by 2026 and 2036. The analysis shows that the urban area has increased markedly and is expected to continue to grow rapidly in the future, whereas the area for agriculture has decreased. Meanwhile, forest and shrub areas have remained almost constant. Seasonal rainfall and flooding routinely cause predictable transformation of sand, water bodies and cultivated land from one type to another. The results suggest that the use of Landsat time-series archive images and the CA–Markov model are the best options for long-term spatiotemporal analysis and achieving an acceptable level of prediction accuracy. Furthermore, understanding the relationship between the spatiotemporal dynamics of urbanization and LULC change and simulating future landscape change is essential, as they are closely interlinked. These scientific findings of past, present and future land-cover scenarios of the study area will assist planners/decision-makers to formulate sustainable urban development and environmental protection plans and will remain a scientific asset

  3. Temperature scaling method for Markov chains.

    Science.gov (United States)

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
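
    The elementary identity behind reusing samples from one temperature at another is Boltzmann reweighting, sketched below in Python on a toy harmonic well; the paper's TeS procedure for whole Markov chains is more elaborate than this illustration.

        import numpy as np

        def reweighted_average(observable, energies, T1, T2, k_B=1.0):
            """Estimate <observable> at T2 from configurations sampled at T1."""
            beta1, beta2 = 1.0 / (k_B * T1), 1.0 / (k_B * T2)
            log_w = -(beta2 - beta1) * np.asarray(energies)
            w = np.exp(log_w - log_w.max())   # stabilise before normalising
            w /= w.sum()
            return np.dot(w, observable)

        # Toy harmonic well E = x^2 / 2: Boltzmann samples at T = 1 are N(0, 1).
        rng = np.random.default_rng(2)
        x = rng.normal(0.0, 1.0, 100_000)
        # <x^2> at T = 0.8 should come out close to 0.8.
        print(reweighted_average(x**2, 0.5 * x**2, T1=1.0, T2=0.8))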

  4. Fermionic Markov Chains

    OpenAIRE

    Fannes, Mark; Wouters, Jeroen

    2012-01-01

    We study a quantum process that can be considered as a quantum analogue of the classical Markov process. We specifically construct a version of these processes for free fermions. For such free fermionic processes we calculate the entropy density. This can be done either directly using Szegő's theorem for asymptotic densities of functions of Toeplitz matrices, or through an extension of said theorem to rates of functions, which we present in this article.

  5. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  6. Confluence Reduction for Markov Automata

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Braberman, Victor; Fribourg, Laurent

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  7. DESIGN AND MANIPULATION OF HIDDEN MARKOV MODELS USING HTK TOOLS: A TUTORIAL

    Directory of Open Access Journals (Sweden)

    Roberto Carrillo Aguilar

    2007-04-01

    Full Text Available This paper presents HTK, a software development platform for the design and management of Hidden Markov Models. Nowadays, the Hidden Markov Models technique is the most effective one for implementing voice recognition systems, and HTK is mainly oriented to this application. Its architecture is robust and self-sufficient. It allows natural input from a microphone, provides modules for A/D conversion, pre-processing and parameterization of information, offers tools to define and manipulate Hidden Markov Models, and includes libraries for training and using already defined Hidden Markov Models. It has functions to define the grammar, and additional tools help reach the final objective: obtaining a hypothetical transcription of speech (voice-to-text conversion).

  8. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  9. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    Science.gov (United States)

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability, since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial state, generate an action on demand while avoiding portions of the model that are irrelevant to that state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100 seconds. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
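
    The flavour of sample-based planning can be sketched as Monte Carlo rollouts against a generative model, as below; the simulate function and the toy single-resource admissions example are hypothetical stand-ins, not the authors' model or planner variant.

        import random

        def rollout_q(state, action, simulate, actions, depth=20, n=200, gamma=0.95):
            """Estimate Q(state, action) by n random rollouts of the generative model."""
            total = 0.0
            for _ in range(n):
                s, a, ret, disc = state, action, 0.0, 1.0
                for _ in range(depth):
                    s, r = simulate(s, a)
                    ret += disc * r
                    disc *= gamma
                    a = random.choice(actions)   # random rollout policy after first step
                total += ret
            return total / n

        def plan(state, simulate, actions):
            return max(actions, key=lambda a: rollout_q(state, a, simulate, actions))

        # Toy single-resource example: state = queue length, action = patients admitted.
        def simulate(s, a):
            arrivals = random.randint(0, 2)
            s2 = max(s + arrivals - a, 0)
            return s2, -s2                       # cost grows with the queue
        print(plan(5, simulate, actions=[0, 1]))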

  10. The Consensus String Problem and the Complexity of Comparing Hidden Markov Models

    DEFF Research Database (Denmark)

    Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm

    2002-01-01

    The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1

  11. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpaeae, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  12. Markov dynamic models for long-timescale protein motion.

    KAUST Repository

    Chiang, Tsung-Han

    2010-06-01

    Molecular dynamics (MD) simulation is a well-established method for studying protein motion at the atomic scale. However, it is computationally intensive and generates massive amounts of data. One way of addressing the dual challenges of computation efficiency and data analysis is to construct simplified models of long-timescale protein motion from MD simulation data. In this direction, we propose to use Markov models with hidden states, in which the Markovian states represent potentially overlapping probabilistic distributions over protein conformations. We also propose a principled criterion for evaluating the quality of a model by its ability to predict long-timescale protein motions. Our method was tested on 2D synthetic energy landscapes and two extensively studied peptides, alanine dipeptide and the villin headpiece subdomain (HP-35 NleNle). One interesting finding is that although a widely accepted model of alanine dipeptide contains six states, a simpler model with only three states is equally good for predicting long-timescale motions. We also used the constructed Markov models to estimate important kinetic and dynamic quantities for protein folding, in particular, mean first-passage time. The results are consistent with available experimental measurements.

  13. Markov dynamic models for long-timescale protein motion.

    KAUST Repository

    Chiang, Tsung-Han; Hsu, David; Latombe, Jean-Claude

    2010-01-01

    Molecular dynamics (MD) simulation is a well-established method for studying protein motion at the atomic scale. However, it is computationally intensive and generates massive amounts of data. One way of addressing the dual challenges of computation efficiency and data analysis is to construct simplified models of long-timescale protein motion from MD simulation data. In this direction, we propose to use Markov models with hidden states, in which the Markovian states represent potentially overlapping probabilistic distributions over protein conformations. We also propose a principled criterion for evaluating the quality of a model by its ability to predict long-timescale protein motions. Our method was tested on 2D synthetic energy landscapes and two extensively studied peptides, alanine dipeptide and the villin headpiece subdomain (HP-35 NleNle). One interesting finding is that although a widely accepted model of alanine dipeptide contains six states, a simpler model with only three states is equally good for predicting long-timescale motions. We also used the constructed Markov models to estimate important kinetic and dynamic quantities for protein folding, in particular, mean first-passage time. The results are consistent with available experimental measurements.

  14. The combinatorial structure of non-homogeneous Markov chains with countable states

    Directory of Open Access Journals (Sweden)

    A. Mukherjea

    1983-01-01

    Full Text Available Let P(s,t) denote a non-homogeneous continuous parameter Markov chain with countable state space E and parameter space [a,b], -∞ < a < b < ∞, and let R(s,t) = {(i,j) : p_ij(s,t) > 0}. It is shown in this paper that R(s,t) is reflexive, transitive, and independent of (s,t), s < t

  15. Non-homogeneous Markov process models with informative observations with an application to Alzheimer's disease.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2011-05-01

    Identifying risk factors for transition rates among normal cognition, mild cognitive impairment, dementia and death in an Alzheimer's disease study is very important. It is known that transition rates among these states are strongly time dependent. While Markov process models are often used to describe these disease progressions, the literature mainly focuses on time homogeneous processes, and limited tools are available for dealing with non-homogeneity. Further, patients may choose when they want to visit the clinics, which creates informative observations. In this paper, we develop methods to deal with non-homogeneous Markov processes through time scale transformation when observation times are pre-planned with some observations missing. Maximum likelihood estimation via the EM algorithm is derived for parameter estimation. Simulation studies demonstrate that the proposed method works well under a variety of situations. An application to the Alzheimer's disease study identifies that there is a significant increase in transition rates as a function of time. Furthermore, our models reveal that the non-ignorable missing mechanism is perhaps reasonable. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. FACE TRACKING AND RECOGNITION USING THE EMBEDDED HIDDEN MARKOV MODELS METHOD

    Directory of Open Access Journals (Sweden)

    Arie Wirawan Margono

    2004-01-01

    Full Text Available Tracking and recognizing human faces has become one of the important research subjects nowadays; it is applicable in security systems such as room access control, surveillance, and searching for a person's identity in a police database. Because it is applied to security, the system must be robust to conditions such as background influence and non-frontal face poses of males or females of different ages and races. The aim of this research is to develop software which combines human face tracking using the CamShift algorithm with a face recognition system using Embedded Hidden Markov Models. The software uses a video camera (webcam) for real-time input, AVI video for dynamic input, and image files for static input. The software is written in an object-oriented style in C++, compiled with Microsoft Visual C++ 6.0 and assisted by libraries from the Intel Image Processing Library (IPL) and Intel Open Source Computer Vision (OpenCV). System testing shows that object tracking based on skin complexion using the CamShift algorithm works well, for tracking single or even two face objects at once. The face recognition system using the Embedded Hidden Markov Models method reached an accuracy of 82.76%, using a database of 341 human faces consisting of 31 individuals with 11 poses each, and 29 human face testers.

  17. On a Markov chain roulette-type game

    International Nuclear Information System (INIS)

    El-Shehawey, M A; El-Shreef, Gh A

    2009-01-01

    A Markov chain on the non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p_{01} = ρ, p_{Nj} = δ_{Nj}, p_{i,i+W} = q and p_{i,i-1} = p = 1 - q, where 1 ≤ W < N, 0 ≤ ρ ≤ 1, N - W < j ≤ N and i = 1, 2, ..., N - W. Using formulae for the determinant of a partitioned matrix, a closed-form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining.
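
    A quick way to explore such a chain numerically is simulation from the stated transition probabilities, as sketched below in Python; since the abstract defines upward jumps only for i ≤ N - W, jumps are clamped at the absorbing state N here, which is an assumed convention, as are all parameter values.

        import random

        def play(N=20, W=3, rho=0.8, q=0.4, max_steps=10_000, seed=0):
            """Number of steps until absorption at N, or None if max_steps is exceeded."""
            rng = random.Random(seed)
            i = 0
            for step in range(max_steps):
                if i == N:
                    return step                  # absorbed at N
                if i == 0:
                    i = 1 if rng.random() < rho else 0   # p_{01} = rho; stay otherwise
                else:
                    i = min(i + W, N) if rng.random() < q else i - 1
            return None

        times = [play(seed=s) for s in range(200)]
        hits = [t for t in times if t is not None]
        print(len(hits), "absorbed; mean absorption time:", sum(hits) / max(len(hits), 1))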

  18. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

    Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate this method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.

  19. Anticontrol of chaos in continuous-time systems via time-delay feedback.

    Science.gov (United States)

    Wang, Xiao Fan; Chen, Guanrong; Yu, Xinghuo

    2000-12-01

    In this paper, a systematic design approach based on time-delay feedback is developed for anticontrol of chaos in a continuous-time system. This anticontrol method can drive a finite-dimensional, continuous-time, autonomous system from nonchaotic to chaotic, and can also enhance the existing chaos of an originally chaotic system. Asymptotic analysis is used to establish an approximate relationship between a time-delay differential equation and a discrete map. Anticontrol of chaos is then accomplished based on this relationship and the differential-geometry control theory. Several examples are given to verify the effectiveness of the methodology and to illustrate the systematic design procedure. (c) 2000 American Institute of Physics.

  20. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

    In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs where each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically and used for recognition

  1. A Markov decision model for optimising economic production lot size ...

    African Journals Online (AJOL)

    In this Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...

  2. Continuity of Local Time: An applied perspective

    OpenAIRE

    Ramirez, Jorge M.; Waymire, Edward C.; Thomann, Enrique A.

    2015-01-01

    Continuity of local time for Brownian motion ranks among the most notable mathematical results in the theory of stochastic processes. This article addresses its implications from the point of view of applications. In particular an extension of previous results on an explicit role of continuity of (natural) local time is obtained for applications to recent classes of problems in physics, biology and finance involving discontinuities in a dispersion coefficient. The main theorem and its corolla...

  3. Detecting critical state before phase transition of complex biological systems by hidden Markov model.

    Science.gov (United States)

    Chen, Pei; Liu, Rui; Li, Yongjun; Chen, Luonan

    2016-07-15

    Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during gradual parameter variations. Such phase-transition dynamics are generally composed of three stages, i.e. the before-transition state, the pre-transition state and the after-transition state, which can be considered as three different Markov processes. By exploiting the rich dynamical information provided by high-throughput data, we present a novel computational method, a hidden Markov model (HMM) based approach, to detect the switching point between the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of imminent phase transitions of complex systems based on simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e. acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma. Both functional and pathway enrichment analyses validate the computational results. The source code and some supporting files are available at https://github.com/rabbitpei/HMM_based-method. Contact: lnchen@sibs.ac.cn or liyj@scut.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
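
    The HMM machinery underlying such a detector can be illustrated with a minimal forward (filtering) pass that tracks the posterior probability of a hidden "pre-transition" state; the two-state model and all probabilities below are invented for illustration and do not reproduce the paper's pipeline.

        import numpy as np

        def forward_filter(obs, pi0, A, B):
            """obs: observation indices; pi0: initial state probabilities;
            A[i, j]: state transition probabilities; B[i, k]: emission probabilities."""
            alpha = pi0 * B[:, obs[0]]
            alpha /= alpha.sum()
            posteriors = [alpha]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                alpha /= alpha.sum()           # normalise to avoid underflow
                posteriors.append(alpha)
            return np.array(posteriors)

        A = np.array([[0.95, 0.05], [0.10, 0.90]])   # stationary vs pre-transition state
        B = np.array([[0.8, 0.2], [0.3, 0.7]])       # emission probabilities
        obs = [0, 0, 1, 0, 1, 1, 1]                  # e.g. a discretised expression signal
        print(forward_filter(obs, np.array([0.9, 0.1]), A, B)[:, 1])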

  4. Using Continuous Action Spaces to Solve Discrete Problems

    NARCIS (Netherlands)

    van Hasselt, Hado; Wiering, Marco

    2009-01-01

    Real-world control problems are often modeled as Markov Decision Processes (MDPs) with discrete action spaces to facilitate the use of the many reinforcement learning algorithms that exist to find solutions for such MDPs. For many of these problems an underlying continuous action space can be

  5. Neyman, Markov processes and survival analysis.

    Science.gov (United States)

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.

  6. XMRF: an R package to fit Markov Networks to high-throughput genetics data.

    Science.gov (United States)

    Wan, Ying-Wooi; Allen, Genevera I; Baker, Yulia; Yang, Eunho; Ravikumar, Pradeep; Anderson, Matthew; Liu, Zhandong

    2016-08-26

    Technological advances in medicine have led to a rapid proliferation of high-throughput "omics" data. Tools to mine this data and discover disrupted disease networks are needed as they hold the key to understanding complicated interactions between genes, mutations and aberrations, and epi-genetic markers. We developed an R software package, XMRF, that can be used to fit Markov Networks to various types of high-throughput genomics data. Encoding the models and estimation techniques of the recently proposed exponential family Markov Random Fields (Yang et al., 2012), our software can be used to learn genetic networks from RNA-sequencing data (counts via Poisson graphical models), mutation and copy number variation data (categorical via Ising models), and methylation data (continuous via Gaussian graphical models). XMRF is the only tool that allows network structure learning using the native distribution of the data instead of the standard Gaussian. Moreover, the parallelization feature of the implemented algorithms computes the large-scale biological networks efficiently. XMRF is available from CRAN and GitHub (https://github.com/zhandong/XMRF).

  7. MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2014-05-01

    Full Text Available The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and Genetic Algorithm concepts, that considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model with liquidity modeled by a Markov chain. The non-linear model was tested using Genetic Algorithms on twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for building an efficient and realistic financial portfolio, as it considers attributes such as risk, return and liquidity.

  8. Transportation and concentration inequalities for bifurcating Markov chains

    DEFF Research Database (Denmark)

    Penda, S. Valère Bitseki; Escobar-Bach, Mikael; Guillin, Arnaud

    2017-01-01

    We investigate the transportation inequality for bifurcating Markov chains, which are a class of processes indexed by a regular binary tree. Fitting well models like cell growth when each individual gives birth to exactly two offspring, we use transportation inequalities to provide useful concentration inequalities. We also study deviation inequalities for the empirical means under relaxed assumptions on the Wasserstein contraction for the Markov kernels. Applications to bifurcating nonlinear autoregressive processes are considered for point-wise estimates of the non-linear autoregressive

  9. Hidden Semi Markov Models for Multiple Observation Sequences: The mhsmm Package for R

    DEFF Research Database (Denmark)

    O'Connell, Jarad Michael; Højsgaard, Søren

    2011-01-01

    Hidden Markov models only allow a geometrically distributed sojourn time in a given state, while hidden semi-Markov models extend this by allowing an arbitrary sojourn distribution. We demonstrate the software with simulation examples and an application involving the modelling of the ovarian cycle of dairy cows.
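
    The generative distinction is easy to show in code. The Python sketch below samples a hidden semi-Markov sequence whose dwell times follow a shifted Poisson rather than the geometric sojourn implied by an ordinary HMM; it mimics the model class mhsmm fits but is not the R package's interface, and all parameters are illustrative:

    ```python
    # Sample from a 2-state hidden semi-Markov chain with Gaussian emissions.
    import numpy as np

    rng = np.random.default_rng(2)
    trans = np.array([[0.0, 1.0], [1.0, 0.0]])   # no self-transitions in an HSMM
    sojourn_mean = [5, 12]                        # mean dwell time per state
    emit_mean, emit_sd = [0.0, 3.0], 1.0

    state, hidden, obs = 0, [], []
    for _ in range(20):                           # 20 sojourn segments
        dwell = 1 + rng.poisson(sojourn_mean[state] - 1)   # arbitrary sojourn law
        hidden.extend([state] * dwell)
        obs.extend(rng.normal(emit_mean[state], emit_sd, dwell))
        state = rng.choice(2, p=trans[state])
    print(hidden[:30])
    ```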

  10. Bayesian Markov-Chain-Monte-Carlo inversion of time-lapse crosshole GPR data to characterize the vadose zone at the Arrenaes Site, Denmark

    DEFF Research Database (Denmark)

    Scholer, Marie; Irving, James; Zibar, Majken Caroline Looms

    2012-01-01

    We examined to what extent time-lapse crosshole ground-penetrating radar traveltimes, measured during a forced infiltration experiment at the Arrenaes field site in Denmark, could help to quantify vadose zone hydraulic properties and their corresponding uncertainties using a Bayesian Markov-chain-Monte-Carlo inversion approach with different priors. The ground-penetrating radar (GPR) geophysical method has the potential to provide valuable information on the hydraulic properties of the vadose zone because of its strong sensitivity to soil water content. In particular, recent evidence has suggested ... distributions compared with the corresponding priors, which in turn significantly improves knowledge of soil hydraulic properties. Overall, the results obtained clearly demonstrate the value of the information contained in time-lapse GPR data for characterizing vadose zone dynamics.
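
    The sampler at the core of such an inversion is typically a random-walk Metropolis scheme. A minimal Python sketch of that machinery, with a one-parameter stand-in forward model rather than the crosshole-GPR traveltime physics of the study:

    ```python
    # Minimal random-walk Metropolis sampler with a uniform prior.
    import numpy as np

    rng = np.random.default_rng(3)
    data = 2.5 + rng.normal(0, 0.1, 50)           # synthetic "observations"
    sigma, prior_lo, prior_hi = 0.1, 0.0, 10.0

    def log_post(theta):
        if not (prior_lo <= theta <= prior_hi):   # uniform prior support
            return -np.inf
        return -0.5 * np.sum((data - theta) ** 2) / sigma**2

    theta, chain = 5.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.05)        # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop                          # Metropolis accept/reject
        chain.append(theta)

    post = np.array(chain[1000:])                 # discard burn-in
    print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
    ```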

  11. Reliability measures for indexed semi-Markov chains applied to wind energy production

    International Nuclear Information System (INIS)

    D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio

    2015-01-01

    The computation of dependability measures is a crucial point in many engineering problems, as well as in the planning and development of a wind farm. In this paper we address the issue of energy production by wind turbines using an indexed semi-Markov chain as a model of wind speed. We present the mathematical model, the data, and the technical characteristics of a commercial wind turbine (Aircon HAWT-10kW). We show how to compute some of the main dependability measures, such as the reliability, availability, and maintainability functions. We compare the results of the model with real energy production obtained from data available from the Lastem station (Italy), sampled every 10 min. Highlights: semi-Markov models; time-series generation of wind speed; computation of availability, reliability, and maintainability.
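
    For orientation, the simplest Markov baseline that the paper's indexed semi-Markov measures generalize is the two-state repairable unit with constant failure rate lambda and repair rate mu, whose availability has the textbook closed form (starting from the working state):

    ```latex
    \[
      A(t) \;=\; \frac{\mu}{\lambda+\mu}
            \;+\; \frac{\lambda}{\lambda+\mu}\, e^{-(\lambda+\mu)t},
      \qquad
      A_\infty \;=\; \frac{\mu}{\lambda+\mu}.
    \]
    ```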

  12. Markov Chains for Investigating and Predicting Migration: A Case from Southwestern China

    Science.gov (United States)

    Qin, Bo; Wang, Yiyu; Xu, Haoming

    2018-03-01

    In order to predict residents' happiness accurately, this paper carried out two demographic surveys in a new district of a city in western China and performed a dynamic analysis using related mathematical methods. We argue that the migration of migrants within the city changes the pattern of spatial distribution of human resources and thus affects social and economic development in all districts. Because the migration status of the population changes randomly over time, it can be predicted and analyzed as a Markov process, as sketched below. The Markov process provides local government and decision-making bureaus with a valid basis for the dynamic analysis of migrant mobility in the city, as well as ways to promote the happiness of local people's lives.
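
    The prediction step amounts to iterating the distribution of residents across districts through a transition matrix estimated from the two surveys. A Python sketch with an illustrative three-district matrix (the values are made up, not the survey estimates):

    ```python
    # Iterate pi_{t+1} = pi_t P to project the spatial distribution forward.
    import numpy as np

    P = np.array([[0.80, 0.15, 0.05],     # district A -> A, B, C
                  [0.10, 0.85, 0.05],
                  [0.05, 0.10, 0.85]])
    pi = np.array([0.5, 0.3, 0.2])        # current spatial distribution

    for year in range(1, 6):
        pi = pi @ P
        print(year, np.round(pi, 3))
    ```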

  13. Hidden Markov Model Application to Transfer The Trader Online Forex Brokers

    Directory of Open Access Journals (Sweden)

    Farida Suharleni

    2012-05-01

    The hidden Markov model (HMM) is an elaboration of the Markov chain that applies to cases in which the states cannot be observed directly. In this research, an HMM is used to study traders' transitions between online forex brokers. In an HMM, the observed states are the observable part and the hidden states are the hidden part; the model allows a system to be described in which observed and hidden states are interrelated. Here the observed states are categories 1 to 5, defined by the condition of each online forex broker, whereas the hidden states are the brokers themselves: Marketiva, Masterforex, Instaforex, FBS, and Others. The first step in applying the HMM is constructing the model: a transition probability matrix (A) over the brokers, an observation probability matrix (B) giving the conditional probability of each of the five categories under each broker, and an initial state probability distribution (pi) over the brokers. The last step is using the Viterbi algorithm to find the hidden state sequence, i.e., the sequence of brokers that is most probable given the model and the observed sequence of categories. The method was implemented as a Viterbi-algorithm program written in Delphi 7.0, with observed states based on simulated data. For example, with T = 5 observations and observed state sequence O = (2,4,3,5,1), the most probable hidden state sequence is X1 = FBS, X2 = Masterforex, X3 = Marketiva, X4 = Others, and X5 = Instaforex. A generic implementation of the decoding step is sketched below.
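
    The study's Delphi program is not reproduced here; the Python sketch below is a generic Viterbi decoder for the same five-broker, five-category setup, with uniform placeholder matrices standing in for the estimated A, B, and pi:

    ```python
    # Generic log-space Viterbi decoder (placeholder parameters, not the
    # estimated broker matrices from the paper).
    import numpy as np

    states = ["Marketiva", "Masterforex", "Instaforex", "FBS", "Others"]
    A  = np.full((5, 5), 0.2)            # transition probabilities
    B  = np.full((5, 5), 0.2)            # P(category k | broker i)
    pi = np.full(5, 0.2)                 # initial distribution
    obs = [1, 3, 2, 4, 0]                # categories 2,4,3,5,1 as 0-based indices

    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))             # best log-probability ending in each state
    psi = np.zeros((T, N), dtype=int)    # backpointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA        # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]

    path = [int(delta[-1].argmax())]                 # backtrack the best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    print([states[s] for s in reversed(path)])
    ```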

  14. Adaptive Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1996-01-01

    Partially hidden Markov models (PHMMs) have recently been introduced; their transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple-token version. The different versions of the PHMM are applied to bi-level image coding.

  15. Interaction-aided continuous time quantum search

    International Nuclear Information System (INIS)

    Bae, Joonwoo; Kwon, Younghun; Baek, Inchan; Yoon, Dalsun

    2005-01-01

    The continuous-time quantum search algorithm (based on the Farhi-Gutmann Hamiltonian evolution) is known to be analogous to the Grover (discrete-time quantum search) algorithm. Any errors introduced into the Grover algorithm are fatal to its success, and in the same way the Farhi-Gutmann algorithm runs into severe difficulty when its Hamiltonian is perturbed. In this letter we show that the interaction term in the quantum search Hamiltonian (more precisely, in the generalized quantum search Hamiltonian) can save a perturbed Farhi-Gutmann Hamiltonian that would otherwise fail. This fact is quite remarkable, since it implies that introducing an interaction can be a way to correct some errors in continuous-time quantum search.
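
    For reference, the unperturbed Farhi-Gutmann search Hamiltonian has the following standard form (this is the textbook statement, not the letter's generalized Hamiltonian):

    ```latex
    % |w> is the marked state and |s> the uniform superposition over N items.
    \[
      H \;=\; E\bigl(\lvert w\rangle\langle w\rvert
              + \lvert s\rangle\langle s\rvert\bigr),
      \qquad
      \lvert s\rangle \;=\; \frac{1}{\sqrt{N}}\sum_{x=1}^{N}\lvert x\rangle ,
    \]
    % under which the success probability first reaches 1 at time
    % t = \pi\sqrt{N}/(2E), up to phase and unit conventions.
    ```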

  16. Integral-Value Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form.
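
    The generic shape of such an integral-form representation is the following (the notation here is illustrative, not the paper's):

    ```latex
    \[
      V(x) \;=\; \int_{0}^{T} v\bigl(x(t), t\bigr)\, dt ,
    \]
    % where x(.) is an outcome path over continuous time and v a
    % time-dependent instantaneous value function.
    ```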

  17. A Multi-Armed Bandit Approach to Following a Markov Chain

    Science.gov (United States)

    2017-06-01

    Master's thesis, Naval Postgraduate School, submitted in partial fulfillment of the requirements for the degree of Master of Science in Operations Research, June 2017. Keywords: stochastic optimization, machine learning, discrete-time Markov chains, stochastic multi-armed bandit, combinatorial multi-armed bandit, online learning.
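
    The record preserves only the keywords, but the stochastic multi-armed bandit it names has a standard index policy, UCB1. A minimal Python sketch of that reference algorithm (the thesis itself may well use a different policy):

    ```python
    # UCB1 on three Bernoulli arms: play each arm once, then pick the arm
    # maximizing empirical mean + exploration bonus sqrt(2 ln t / n_a).
    import math, random

    random.seed(4)
    true_means = [0.2, 0.5, 0.8]          # unknown Bernoulli arm means
    counts = [0] * 3
    values = [0.0] * 3

    for t in range(1, 2001):
        if 0 in counts:                    # initialization round
            arm = counts.index(0)
        else:
            arm = max(range(3), key=lambda a:
                      values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # running mean

    print(counts)   # the best arm should dominate the pulls
    ```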

  18. Synthesis of the Markov model of the thermochemical degradation of a polymer in solution

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2017-01-01

    The paper deals with the mathematical modeling of the thermochemical degradation process, using the apparatus of Markov chains to synthesize the model. The authors suggest treating degradation as a random process in which the system state, characterized by the proportion of macromolecules in each fraction of the molecular-weight distribution (MWD), changes over time; the transition intensities from one state to another correspond to the degradation rates of each MWD fraction. Crosslinking and polymerization were neglected, and it was assumed that transitions are possible from any state with a lower index (fractions with higher molecular weights) to any state with a higher index (fractions with lower molecular weights). A Markov chain with discrete states and continuous time was taken as the basis of the mathematical model, as sketched below, and the MathWorks Simulink interactive graphical environment was used for simulation. Experimental studies of polybutadiene degradation in solution were carried out to estimate the model parameters; gel-permeation chromatography (GPC) data for the polybutadiene solution served as the starting data for estimating the polymer's MWD. The parameter values were found numerically by minimizing the mean-square deviation of the calculated data from the experimental data for each fraction at specified times. Comparison of the experimental data with the model calculations showed an average error of about 5%, which indicates acceptable accuracy in estimating how the fraction proportions change during degradation under the conditions considered.
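
    A Python sketch of the model class described above: each state is an MWD fraction, intensities only point from heavier to lighter fractions, and the master equation dp/dt = Q^T p is integrated numerically. The Q values are illustrative, not fitted GPC data:

    ```python
    # Continuous-time Markov chain of degradation across 4 MWD fractions,
    # heaviest first; Q[i, j] is the intensity of fraction i -> j (j > i only).
    import numpy as np
    from scipy.integrate import solve_ivp

    Q = np.array([
        [-0.60, 0.30, 0.20, 0.10],
        [ 0.00,-0.40, 0.25, 0.15],
        [ 0.00, 0.00,-0.20, 0.20],
        [ 0.00, 0.00, 0.00, 0.00],   # lightest fraction: no further degradation
    ])

    p0 = np.array([0.7, 0.2, 0.1, 0.0])          # initial MWD proportions
    sol = solve_ivp(lambda t, p: Q.T @ p, (0, 10), p0, dense_output=True)
    for t in (0, 2, 5, 10):
        print(t, np.round(sol.sol(t), 3))        # proportions at time t
    ```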

  19. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    Science.gov (United States)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled as a Markov birth-death process under the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule and solved with the fourth-order Runge-Kutta method, as sketched below. The long-run availability, reliability, and mean time between failures are computed for various choices of the failure and repair rates of the subsystems. The findings are discussed with the plant personnel so that suitable maintenance policies and strategies can be adopted to enhance the performance of the urea synthesis system.
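
    A minimal Python sketch of the numerical step: a hand-rolled fourth-order Runge-Kutta integration of the Chapman-Kolmogorov equations for a single repairable subsystem (a two-state birth-death chain). The rates are illustrative, not the plant's:

    ```python
    # RK4 integration of dp/dt = p Q for a 2-state repairable subsystem.
    import numpy as np

    lam, mu = 0.01, 0.5                 # failure and repair rates (per hour)
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])        # states: 0 = up, 1 = down

    def f(p):                           # Chapman-Kolmogorov right-hand side
        return p @ Q

    p, h = np.array([1.0, 0.0]), 0.1    # start in the up state, step 0.1 h
    for step in range(int(100 / h)):    # integrate out to 100 hours
        k1 = f(p); k2 = f(p + h/2*k1); k3 = f(p + h/2*k2); k4 = f(p + h*k3)
        p = p + h/6*(k1 + 2*k2 + 2*k3 + k4)

    print(f"A(100 h) = {p[0]:.6f}  (steady state {mu/(lam+mu):.6f})")
    ```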

  20. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a method based on the support vector machine (SVM) and the Markov model to achieve this goal. First, features are extracted by time-domain and time-frequency-domain methods. Because the extracted features are high-dimensional and include superfluous information, the nonlinear multi-feature fusion technique LTSA is used to merge them and reduce the dimension. Based on the extracted features, an SVM model is then used to predict the bearing degradation process, with Cao's method used to determine the embedding dimension of the SVM model. After the degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated on two bearing run-to-failure experiments, and the results demonstrated the effectiveness of the methodology. A sketch of the SVM prediction stage follows.
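
    The Python sketch below illustrates the prediction stage: a degradation indicator is embedded into lag vectors and a support vector regressor predicts the next value. The embedding dimension is fixed at 5 here (the paper would choose it with Cao's method), and the signal is synthetic:

    ```python
    # One-step-ahead SVR prediction on a lag-embedded degradation indicator.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    t = np.arange(300)
    signal = 0.001 * t**1.5 + rng.normal(0, 0.05, t.size)  # toy degradation index

    d = 5                                                   # embedding dimension
    X = np.array([signal[i:i + d] for i in range(len(signal) - d)])
    y = signal[d:]                                          # next-step targets

    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:250], y[:250])
    pred = model.predict(X[250:])
    rmse = np.sqrt(np.mean((pred - y[250:]) ** 2))
    print(f"one-step-ahead RMSE on held-out data: {rmse:.4f}")
    ```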