WorldWideScience

Sample records for discrete-state continuous-time markov

  1. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    Science.gov (United States)

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
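
    A small numerical sketch of the central quantity discussed above, the expected (discounted) time spent in each state of a continuous-time Markov model, is given below. The three-state generator, state labels and discount rate are made-up illustration values, and the closed form used, inv(rI - Q), is the standard CTMC identity for discounted occupancy rather than necessarily the exact formulas of the article.

```python
# Hedged sketch, not the paper's implementation: expected discounted time
# spent in each state of a continuous-time Markov model, using the standard
# identity  integral_0^inf exp(-r*t) expm(Q*t) dt = inv(r*I - Q)  for r > 0.
# The generator Q, the state labels and the discount rate are made-up values.
import numpy as np

Q = np.array([[-0.20,  0.15,  0.05],   # hypothetical "well" state
              [ 0.00, -0.30,  0.30],   # hypothetical "ill" state
              [ 0.00,  0.00,  0.00]])  # absorbing "dead" state
r = 0.03                               # assumed annual discount rate

# Row i gives the expected discounted years spent in each state when the
# process starts in state i.
occupancy = np.linalg.inv(r * np.eye(3) - Q)
print("discounted expected occupancy, starting well:", occupancy[0])
```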

  2. Subgeometric Ergodicity Analysis of Continuous-Time Markov Chains under Random-Time State-Dependent Lyapunov Drift Conditions

    Directory of Open Access Journals (Sweden)

    Mokaedi V. Lekgari

    2014-01-01

    Full Text Available We investigate random-time state-dependent Foster-Lyapunov analysis on subgeometric rate ergodicity of continuous-time Markov chains (CTMCs). We are mainly concerned with making use of the available results on deterministic state-dependent drift conditions for CTMCs and on random-time state-dependent drift conditions for discrete-time Markov chains and transferring them to CTMCs.

  3. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between...... two states and the distribution of the total number of jumps) for discretely observed continuous time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue...... decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method....
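
    As a companion to the uniformization method named above, here is a minimal sketch of how uniformization evaluates the transition probabilities exp(Qt) of a CTMC; the rate matrix, time point and truncation rule are arbitrary illustration choices, not the framework of the paper.

```python
# Hedged sketch of the uniformization idea mentioned above: with
# lam >= max_i |Q_ii| and P = I + Q/lam, the transition probabilities satisfy
# expm(Q*t) = sum_n Poisson(n; lam*t) * P^n.  The rate matrix, time point and
# truncation rule are arbitrary illustration choices.
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -0.9,  0.5],
              [ 0.2,  0.6, -0.8]])
t = 1.5

lam = np.max(np.abs(np.diag(Q)))
P = np.eye(3) + Q / lam

# Truncate the Poisson-weighted series once the tail mass is negligible.
n_max = int(poisson.ppf(1.0 - 1e-12, lam * t))
approx = np.zeros_like(Q)
Pn = np.eye(3)
for n in range(n_max + 1):
    approx += poisson.pmf(n, lam * t) * Pn
    Pn = Pn @ P

print(np.allclose(approx, expm(Q * t), atol=1e-8))  # sanity check -> True
```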

  4. Road maintenance optimization through a discrete-time semi-Markov decision process

    International Nuclear Information System (INIS)

    Zhang Xueqing; Gao Hui

    2012-01-01

    Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on the renewal theory. However, the discrete-time Markov process cannot capture the real time at which a state transition occurs, while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies to minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that various constraints on the decision process, for example performance requirements and available budgets, can conveniently be incorporated. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and the randomness in selecting actions occurs only for a few states.
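
    The holding-time assumption above can be made concrete with a short sketch of one common (Nakagawa-Osaki type) parameterization of the discrete Weibull distribution; the parameter values are arbitrary and the paper's exact parameterization may differ.

```python
# Hedged sketch of one common parameterization (Nakagawa-Osaki type) of the
# discrete Weibull distribution used above for state holding times:
# P(K > k) = q**(k**beta) for k = 0, 1, 2, ..., hence
# p(k) = q**((k-1)**beta) - q**(k**beta) for k >= 1.
# The parameters q and beta are arbitrary illustration values.
import numpy as np

def discrete_weibull_pmf(k, q=0.8, beta=1.3):
    k = np.asarray(k, dtype=float)
    return q ** ((k - 1.0) ** beta) - q ** (k ** beta)

ks = np.arange(1, 80)
pmf = discrete_weibull_pmf(ks)
print("total probability mass (should be close to 1):", pmf.sum())
print("mean holding time in cycles:", (ks * pmf).sum())
```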

  5. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  6. Mapping of uncertainty relations between continuous and discrete time.

    Science.gov (United States)

    Chiuchiù, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  7. Quasi-stationary distributions for reducible absorbing Markov chains in discrete time

    NARCIS (Netherlands)

    van Doorn, Erik A.; Pollett, P.K.

    2009-01-01

    We consider discrete-time Markov chains with one coffin state and a finite set $S$ of transient states, and are interested in the limiting behaviour of such a chain as time $n \\to \\infty,$ conditional on survival up to $n$. It is known that, when $S$ is irreducible, the limiting conditional
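
    A minimal sketch of the limiting conditional (quasi-stationary) distribution for such a chain: restrict the transition matrix to the transient set S and normalize the left Perron eigenvector of the resulting substochastic block. The four-state example is invented for illustration.

```python
# Hedged sketch: for a chain with one absorbing ("coffin") state and an
# irreducible transient set S, the limiting conditional distribution is the
# normalized left Perron eigenvector of the substochastic block on S.
# The four-state transition matrix below is invented for illustration.
import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],     # state 0 is absorbing
              [0.1, 0.5, 0.3, 0.1],
              [0.0, 0.2, 0.6, 0.2],
              [0.2, 0.1, 0.4, 0.3]])
Q = P[1:, 1:]                           # substochastic block on S = {1, 2, 3}

eigvals, left_vecs = np.linalg.eig(Q.T) # right eigenvectors of Q.T = left of Q
k = np.argmax(eigvals.real)             # Perron root (decay parameter)
qsd = np.abs(left_vecs[:, k].real)
qsd /= qsd.sum()
print("decay parameter:", eigvals[k].real)
print("quasi-stationary distribution on S:", qsd)
```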

  8. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  9. Recursive smoothers for hidden discrete-time Markov chains

    Directory of Open Access Journals (Sweden)

    Lakhdar Aggoun

    2005-01-01

    Full Text Available We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameter of the model via the expectation maximization (EM) algorithm.

  10. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    Science.gov (United States)

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
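
    Of the three approaches listed, plain rejection sampling is the simplest to sketch: simulate unconditioned paths from the starting state and keep the first one that lands in the required ending state at time T. The rate matrix and endpoints below are arbitrary; the paper's modified rejection and uniformization samplers are more efficient, especially when the endpoint is unlikely.

```python
# Hedged sketch of plain rejection sampling for an endpoint-conditioned CTMC
# path: forward-simulate unconditioned paths from state a and keep the first
# one that is in state b at time T.  The rate matrix and endpoints are
# arbitrary; the paper's modified rejection and uniformization samplers are
# more efficient.
import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.3, -0.8,  0.5],
              [ 0.5,  0.5, -1.0]])

def sample_path(a, T):
    """Gillespie-style forward simulation; returns jump times and states."""
    t, state = 0.0, a
    times, states = [0.0], [a]
    while True:
        t += rng.exponential(1.0 / -Q[state, state])
        if t >= T:
            return times, states
        probs = Q[state].clip(min=0.0)
        state = int(rng.choice(len(Q), p=probs / probs.sum()))
        times.append(t)
        states.append(state)

def endpoint_conditioned_path(a, b, T, max_tries=100_000):
    for _ in range(max_tries):
        times, states = sample_path(a, T)
        if states[-1] == b:
            return times, states
    raise RuntimeError("acceptance rate too low for plain rejection sampling")

times, states = endpoint_conditioned_path(a=0, b=2, T=1.0)
print(list(zip(np.round(times, 3), states)))
```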

  11. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    Science.gov (United States)

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011)PLEEE81539-375510.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011)JMBLAJ0303-681210.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
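
    For orientation, the baseline microscopic Markov-chain approach (MMCA) for the discrete-time SIS process that this work builds on can be sketched as a fixed-point iteration over per-node infection probabilities. The network, infection and recovery probabilities below are arbitrary, and the update shown is a simplified variant (no reinfection within the same time step), not the effective degree formulation of the paper.

```python
# Hedged sketch of the baseline microscopic Markov-chain approach (MMCA) for
# discrete-time SIS that this work builds on: iterate per-node infection
# probabilities p_i to a fixed point.  This is a simplified variant (no
# reinfection within the same step); network, beta and mu are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = (rng.random((N, N)) < 0.03).astype(float)   # random undirected network
A = np.triu(A, 1)
A = A + A.T

beta, mu = 0.08, 0.2        # per-contact infection and recovery probabilities
p = np.full(N, 0.1)         # initial infection probabilities

for _ in range(500):
    # q[i]: probability that node i is NOT infected by any neighbour this step
    q = np.prod(1.0 - beta * A * p, axis=1)
    p = (1.0 - p) * (1.0 - q) + p * (1.0 - mu)
print("predicted epidemic prevalence:", p.mean())
```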

  12. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, the NDVI time series forecasting model has been developed based on the use of a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space that undergoes transitions from one state to another with some probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques in building a Markov chain forecast model at each step. The continuous-state Markov chain model is analytically described. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.

  13. Computing continuous-time Markov chains as transformers of unbounded observables

    DEFF Research Database (Denmark)

    Danos, Vincent; Heindel, Tobias; Garnier, Ilias

    2017-01-01

    The paper studies continuous-time Markov chains (CTMCs) as transformers of real-valued functions on their state space, considered as generalised predicates and called observables. Markov chains are assumed to take values in a countable state space S; observables f: S → ℝ may be unbounded...

  14. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Assuming the disease process progresses accordingly, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Neural Network Based Finite-Time Stabilization for Discrete-Time Markov Jump Nonlinear Systems with Time Delays

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2013-01-01

    Full Text Available This paper deals with the finite-time stabilization problem for discrete-time Markov jump nonlinear systems with time delays and norm-bounded exogenous disturbance. The nonlinearities in different jump modes are parameterized by neural networks. Subsequently, a linear difference inclusion state space representation for a class of neural networks is established. Based on this, sufficient conditions are derived in terms of linear matrix inequalities to guarantee stochastic finite-time boundedness and stochastic finite-time stabilization of the closed-loop system. A numerical example is illustrated to verify the efficiency of the proposed technique.

  16. Extending Markov Automata with State and Action Rewards

    NARCIS (Netherlands)

    Guck, Dennis; Timmer, Mark; Blom, Stefan; Bertrand, N.; Bortolussi, L.

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are

  17. Markov processes

    CERN Document Server

    Kirkwood, James R

    2015-01-01

    Review of Probability: Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises. Discrete-Time, Finite-State Markov Chains: Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler’s Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises. Discrete-Time, Infinite-State Markov Chains: Renewal Processes; Delayed Renewal Processes; Equilibrium State f...

  18. MARKOV GRAPHS OF ONE–DIMENSIONAL DYNAMICAL SYSTEMS AND THEIR DISCRETE ANALOGUES

    Directory of Open Access Journals (Sweden)

    SERGIY KOZERENKO

    2016-04-01

    Full Text Available One feature of the famous Sharkovsky’s theorem is that it can be proved using digraphs of a special type (the so–called Markov graphs). The most general definition assigns a Markov graph to every continuous map from the topological graph to itself. We show that this definition is too broad, i.e. every finite digraph can be viewed as a Markov graph of some one–dimensional dynamical system on a tree. We therefore consider discrete analogues of Markov graphs for vertex maps on combinatorial trees and characterize all maps on trees whose discrete Markov graphs are of the following types: complete, complete bipartite, the disjoint union of cycles, or with every arc being a loop.

  19. Fitting timeseries by continuous-time Markov chains: A quadratic programming approach

    International Nuclear Information System (INIS)

    Crommelin, D.T.; Vanden-Eijnden, E.

    2006-01-01

    Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the objective function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.
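
    A hedged sketch of the first stage of this procedure, estimating the discrete-in-time transition matrix from a (binned) timeseries by counting transitions, is shown below. As a crude stand-in for the paper's quadratic programming step, the generator is approximated by logm(P)/dt and then projected onto admissible rates; this is not the eigenspectrum-matching QP of the paper, and the synthetic data are illustration values only.

```python
# Hedged sketch of the first stage above: estimate the discrete-in-time
# transition matrix P from a (binned) timeseries by counting transitions.
# As a crude stand-in for the paper's quadratic program, the generator is
# approximated by logm(P)/dt and projected onto admissible rates; the
# synthetic data are illustration values only.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(3)
n_states, dt = 3, 0.1

# Synthetic discretized timeseries from a known chain (illustration only).
P_true = np.array([[0.90, 0.08, 0.02],
                   [0.05, 0.90, 0.05],
                   [0.02, 0.08, 0.90]])
series = [0]
for _ in range(20000):
    series.append(int(rng.choice(n_states, p=P_true[series[-1]])))

# Count-based estimate of the transition probability (stochastic) matrix.
counts = np.zeros((n_states, n_states))
for a, b in zip(series[:-1], series[1:]):
    counts[a, b] += 1.0
P_hat = counts / counts.sum(axis=1, keepdims=True)

Q_hat = logm(P_hat).real / dt                    # crude generator estimate
off = ~np.eye(n_states, dtype=bool)
Q_hat[off] = Q_hat[off].clip(min=0.0)            # drop negative off-diagonal rates
np.fill_diagonal(Q_hat, 0.0)
np.fill_diagonal(Q_hat, -Q_hat.sum(axis=1))      # rows of a generator sum to zero
print(np.round(Q_hat, 3))
```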

  20. The Markov process admits a consistent steady-state thermodynamic formalism

    Science.gov (United States)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived in mathematics. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jump, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes to that for master equations, as the discretization step gets smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of underlying detailed models.

  1. STATISTICAL ANALYSIS OF NOTATIONAL AFL DATA USING CONTINUOUS TIME MARKOV CHAINS

    Directory of Open Access Journals (Sweden)

    Denny Meyer

    2006-12-01

    Full Text Available Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated.

  2. Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2014-01-01

    In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...

  3. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    Full Text Available In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data as the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
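
    The failure-time model described above is a phase-type distribution, whose density can be evaluated directly from the sub-generator on the m transient states. The sketch below uses fixed, made-up parameters (the paper's EM algorithm would instead estimate them from data); the upper-triangular sub-generator mirrors the structure mentioned in the complexity statement.

```python
# Hedged sketch of the failure-time model above: the failure time is the first
# hitting time of the absorbing state, i.e. phase-type distributed with density
# f(t) = alpha @ expm(S*t) @ s0, where S is the sub-generator on the m
# transient states and s0 = -S @ 1.  The paper's EM algorithm estimates alpha
# and S from data; here they are fixed, made-up values (S upper triangular,
# matching the structure used in the complexity statement).
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0, 0.0])       # start in the first transient state
S = np.array([[-2.0,  1.5,  0.3],
              [ 0.0, -1.0,  0.7],
              [ 0.0,  0.0, -0.5]])
s0 = -S @ np.ones(3)                    # exit rates into the absorbing state

def failure_density(t):
    return float(alpha @ expm(S * t) @ s0)

ts = np.linspace(0.0, 20.0, 2001)
pdf = np.array([failure_density(t) for t in ts])
dt = ts[1] - ts[0]
print("density integrates to about 1:", round(float((pdf * dt).sum()), 4))
```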

  4. Nonequilibrium thermodynamic potentials for continuous-time Markov chains.

    Science.gov (United States)

    Verley, Gatien

    2016-01-01

    We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables, but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials that reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.

  5. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and

  6. H2-control and the separation principle for discrete-time jump systems with the Markov chain in a general state space

    Science.gov (United States)

    Figueiredo, Danilo Zucolli; Costa, Oswaldo Luiz do Valle

    2017-10-01

    This paper deals with the H2 optimal control problem of discrete-time Markov jump linear systems (MJLS) considering the case in which the Markov chain takes values in a general Borel space. It is assumed that the controller has access only to an output variable and to the jump parameter. The goal, in this case, is to design a dynamic Markov jump controller such that the H2-norm of the closed-loop system is minimised. It is shown that the H2-norm can be written as the sum of two H2-norms, such that one of them does not depend on the control, and the other one is obtained from the optimal filter for an infinite-horizon filtering problem. This result can be seen as a separation principle for MJLS with the Markov chain in a general Borel space, considering the infinite time horizon case.

  7. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions

    OpenAIRE

    Masuyama, Hiroyuki

    2015-01-01

    This paper studies the last-column-block-augmented northwest-corner truncation (LC-block-augmented truncation, for short) of discrete-time block-monotone Markov chains under subgeometric drift conditions. The main result of this paper is to present an upper bound for the total variation distance between the stationary probability vectors of a block-monotone Markov chain and its LC-block-augmented truncation. The main result is extended to Markov chains that themselves may not be block monoton...

  8. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    Science.gov (United States)

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  9. Model checking conditional CSL for continuous-time Markov chains

    DEFF Research Database (Denmark)

    Gao, Yang; Xu, Ming; Zhan, Naijun

    2013-01-01

    In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To this end, we extend Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional

  10. Distinct timing mechanisms produce discrete and continuous movements.

    Directory of Open Access Journals (Sweden)

    Raoul Huys

    2008-04-01

    Full Text Available The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate about whether this classification implies different control processes. This debate up until the present has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical system theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.

  11. Continuous-time Markov decision processes theory and applications

    CERN Document Server

    Guo, Xianping

    2009-01-01

    This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.

  12. Discrete time Markov chains (DTMC) susceptible infected susceptible (SIS) epidemic model with two pathogens in two patches

    Science.gov (United States)

    Lismawati, Eka; Respatiwulan; Widyaningsih, Purnami

    2017-06-01

    The SIS epidemic model describes a pattern of disease spread in which recovered individuals can be infected more than once. The numbers of susceptible and infected individuals at each time follow a discrete-time Markov process, which can be represented by the discrete-time Markov chain (DTMC) SIS model. The DTMC SIS epidemic model can be developed for two pathogens in two patches. The aims of this paper are to reconstruct and to apply the DTMC SIS epidemic model with two pathogens in two patches. The model is presented in terms of transition probabilities. The application of the model shows that the number of susceptible individuals decreases while the number of infected individuals increases for each pathogen in each patch.

  13. Relative entropy and waiting time for continuous-time Markov processes

    NARCIS (Netherlands)

    Chazottes, J.R.; Giardinà, C.; Redig, F.H.J.

    2006-01-01

    For discrete-time stochastic processes, there is a close connection between return (resp. waiting) times and entropy (resp. relative entropy). Such a connection cannot be straightforwardly extended to the continuous-time setting. Contrary to the discrete-time case, one needs a reference measure on

  14. Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.

    Science.gov (United States)

    Reich, W; Scheuermann, G

    2012-12-01

    Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.

  15. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: Markov property, discrete time Markov chain (DTMC) and continuous time Markov chain (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
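
    The steady-state computation mentioned above reduces, for finite chains, to solving the balance equations with a normalization constraint. The sketch below does this for a small DTMC (πP = π) and CTMC (πQ = 0); the matrices are arbitrary examples, not taken from the book.

```python
# Hedged sketch of the basic computation the book builds on: the steady-state
# distribution of a DTMC (pi P = pi) and of a CTMC (pi Q = 0), obtained by
# solving the balance equations with the normalization sum(pi) = 1.  The
# matrices are arbitrary small examples, not taken from the book.
import numpy as np

def stationary_dtmc(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def stationary_ctmc(Q):
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.9, 0.1], [0.5, 0.5]])
Q = np.array([[-0.2, 0.2], [0.3, -0.3]])
print(stationary_dtmc(P))   # [0.8333..., 0.1666...]
print(stationary_ctmc(Q))   # [0.6, 0.4]
```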

  16. RESEARCH ABSORBING STATES OF THE SYSTEM USING MARKOV CHAINS AND FUNDAMENTAL MATRIX

    Directory of Open Access Journals (Sweden)

    Тетяна Мефодіївна ОЛЕХ

    2016-02-01

    Full Text Available The article discusses the use of Markov chains to study models that reflect the essential properties of systems, including methods of measuring project parameters and assessing their effectiveness. In the study, the system is decomposed into certain discrete states and a diagram of transitions between these states is created. The specifics of modelling various objects by homogeneous Markov chains with discrete states and discrete time are determined by the method of calculating the transition probabilities. A success-criteria model for a system with absorbing states is proposed that is universal for all projects. The transition matrix is partitioned into submatrices. The variation of the elements of the submatrix Q^n with growing n is linked to the definition of important quantitative characteristics of absorbing chains: 1) the probability of reaching any given absorbing state; 2) the mean number of steps needed to reach the absorbing state; 3) the mean time that the system spends in each state before the system irreversibly hits an absorbing state. A fundamental matrix is built that allows calculating the different characteristics of the system. The fundamental matrix of the modelled absorbing Markov chain gives a forecast for the behavior of the system in the future regardless of the absolute value of the time elapsed from the starting point. This property of the fundamental matrix characterizes the Markov process as a process without aftereffect.
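
    The fundamental-matrix calculations described in the article follow the classical absorbing-chain recipe sketched below: partition the transition matrix into the transient block Q and the absorption block R, and compute N = inv(I - Q). The small project-state example is made up for illustration.

```python
# Hedged sketch of the fundamental-matrix calculations described above:
# partition the transition matrix of an absorbing chain into the transient
# block Q and absorption block R, then N = inv(I - Q) gives
# 1) absorption probabilities B = N @ R, 2) expected steps to absorption
# t = N @ 1, and 3) expected visits to each transient state.  The small
# project-state example below is made up.
import numpy as np

# States: 0, 1 transient; 2 ("success"), 3 ("failure") absorbing.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
Q, R = P[:2, :2], P[:2, 2:]

N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix: expected visits
B = N @ R                           # absorption probabilities per absorbing state
t = N @ np.ones(2)                  # expected number of steps before absorption
print("N =\n", N)
print("absorption probabilities =\n", B)
print("expected steps to absorption =", t)
```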

  17. Applied discrete-time queues

    CERN Document Server

    Alfa, Attahiru S

    2016-01-01

    This book introduces the theoretical fundamentals for modeling queues in discrete-time, and the basic procedures for developing queuing models in discrete-time. There is a focus on applications in modern telecommunication systems. It presents how most queueing models in discrete-time can be set up as discrete-time Markov chains. Techniques such as matrix-analytic methods (MAM) that can used to analyze the resulting Markov chains are included. This book covers single node systems, tandem system and queueing networks. It shows how queues with time-varying parameters can be analyzed, and illustrates numerical issues associated with computations for the discrete-time queueing systems. Optimal control of queues is also covered. Applied Discrete-Time Queues targets researchers, advanced-level students and analysts in the field of telecommunication networks. It is suitable as a reference book and can also be used as a secondary text book in computer engineering and computer science. Examples and exercises are includ...

  18. Optimal State Estimation for Discrete-Time Markov Jump Systems with Missing Observations

    Directory of Open Access Journals (Sweden)

    Qing Sun

    2014-01-01

    Full Text Available This paper is concerned with the optimal linear estimation for a class of discrete-time Markov jump systems with missing observations. An observer-based approach of fault detection and isolation (FDI) is investigated as a detection mechanism for the fault case. For systems with known information, a conditional prediction of observations is applied and fault observations are replaced and isolated; then, an FDI linear minimum mean square error estimation (LMMSE) can be developed by comprehensively utilizing the correct information offered by the systems. A recursive filtering equation based on geometric arguments can be obtained. Meanwhile, the stability of the state estimator is guaranteed under appropriate assumptions.

  19. Local and global dynamics of Ramsey model: From continuous to discrete time.

    Science.gov (United States)

    Guzowska, Malgorzata; Michetti, Elisabetta

    2018-05-01

    The choice of time as a discrete or continuous variable may radically affect equilibrium stability in an endogenous growth model with durable consumption. In the continuous-time Ramsey model [F. P. Ramsey, Econ. J. 38(152), 543-559 (1928)], the steady state is locally saddle-path stable with monotonic convergence. However, in the discrete-time version, the steady state may be unstable or saddle-path stable with monotonic or oscillatory convergence or periodic solutions [see R.-A. Dana et al., Handbook on Optimal Growth 1 (Springer, 2006) and G. Sorger, Working Paper No. 1505 (2015)]. When this occurs, the discrete-time counterpart of the continuous-time model is not consistent with the initial framework. In order to obtain a discrete-time Ramsey model preserving the main properties of the continuous-time counterpart, we use a general backward and forward discretisation as initially proposed by Bosi and Ragot [Theor. Econ. Lett. 2(1), 10-15 (2012)]. The main result of the study presented here is that, with this hybrid discretisation method, fixed points and local dynamics do not change. Regarding global dynamics, i.e., the long-run behavior for initial conditions taken in the state space, we mainly perform numerical analysis with the aim of comparing both the qualitative and quantitative evolution of the two systems, also varying some parameters of interest.

  20. Basic problems solving for two-dimensional discrete 3 × 4 order hidden markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model is proposed to overcome the shortcomings of the classical hypotheses of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model and studies its three basic problems, including probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, the proposed model has more statistical characteristics in its structure; therefore, it can theoretically describe some practical problems more accurately.

  1. Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

    International Nuclear Information System (INIS)

    Costa, O. L. V.; Dufour, F.

    2011-01-01

    This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter ε>0) and a slow behavior. By using a similar approach to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.

  2. Continuous-time quantum random walks require discrete space

    International Nuclear Information System (INIS)

    Manouchehri, K; Wang, J B

    2007-01-01

    Quantum random walks are shown to have non-intuitive dynamics which makes them an attractive area of study for devising quantum algorithms for long-standing open problems as well as those arising in the field of quantum computing. In the case of continuous-time quantum random walks, such peculiar dynamics can arise from simple evolution operators closely resembling the quantum free-wave propagator. We investigate the divergence of quantum walk dynamics from the free-wave evolution and show that, in order for continuous-time quantum walks to display their characteristic propagation, the state space must be discrete. This behavior rules out many continuous quantum systems as possible candidates for implementing continuous-time quantum random walks

  3. Continuous-time quantum random walks require discrete space

    Science.gov (United States)

    Manouchehri, K.; Wang, J. B.

    2007-11-01

    Quantum random walks are shown to have non-intuitive dynamics which makes them an attractive area of study for devising quantum algorithms for long-standing open problems as well as those arising in the field of quantum computing. In the case of continuous-time quantum random walks, such peculiar dynamics can arise from simple evolution operators closely resembling the quantum free-wave propagator. We investigate the divergence of quantum walk dynamics from the free-wave evolution and show that, in order for continuous-time quantum walks to display their characteristic propagation, the state space must be discrete. This behavior rules out many continuous quantum systems as possible candidates for implementing continuous-time quantum random walks.
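
    For illustration, a continuous-time quantum walk on a discrete state space (here a cycle graph) can be simulated by exponentiating the Hamiltonian, taken below to be the graph adjacency matrix; the graph, time and initial state are arbitrary choices and not those of the paper.

```python
# Hedged sketch of a continuous-time quantum walk on a discrete state space
# (a cycle graph): the amplitude vector evolves as psi(t) = expm(-i*H*t) psi(0),
# with the Hamiltonian H taken to be the graph adjacency matrix.  The graph,
# evolution time and initial state are illustrative choices only.
import numpy as np
from scipy.linalg import expm

n = 11                                     # cycle with 11 sites
H = np.zeros((n, n))
for j in range(n):
    H[j, (j + 1) % n] = H[(j + 1) % n, j] = 1.0

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                              # walker starts at site 0

t = 3.0
psi_t = expm(-1j * H * t) @ psi0
prob = np.abs(psi_t) ** 2
print("site occupation probabilities:", np.round(prob, 3), "sum =", prob.sum())
```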

  4. Event-Triggered Asynchronous Guaranteed Cost Control for Markov Jump Discrete-Time Neural Networks With Distributed Delay and Channel Fading.

    Science.gov (United States)

    Yan, Huaicheng; Zhang, Hao; Yang, Fuwen; Zhan, Xisheng; Peng, Chen

    2017-08-18

    This paper is concerned with the guaranteed cost control problem for a class of Markov jump discrete-time neural networks (NNs) with an event-triggered mechanism, asynchronous jumping, and fading channels. The Markov jump NNs are introduced to be closer to reality, where the modes of the NNs and of the guaranteed cost controller are determined by two mutually independent Markov chains. The asynchronous phenomenon is considered, which increases the difficulty of designing the required mode-dependent controller. The event-triggered mechanism is designed by comparing the relative measurement error with the last triggered state during data transmission, which eliminates dispensable transmissions and reduces networked energy consumption. In addition, signal fading, which accounts for the effects of signal reflection and shadowing in wireless networks, is considered and modeled by novel Rice fading models. Some novel sufficient conditions are obtained in terms of linear matrix inequalities to guarantee that the closed-loop system reaches a specified cost value under the designed jumping state feedback control law. Finally, some simulation results are provided to illustrate the effectiveness of the proposed method.

  5. State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays

    International Nuclear Information System (INIS)

    Liu Yurong; Wang Zidong; Liu Xiaohui

    2008-01-01

    In this Letter, we investigate the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters as well as mode-dependent mixed time-delays. The parameters of the discrete-time neural networks are subject to the switching from one mode to another at different times according to a Markov chain, and the mixed time-delays consist of both discrete and distributed delays that are dependent on the Markovian jumping mode. New techniques are developed to deal with the mixed time-delays in the discrete-time setting, and a novel Lyapunov-Krasovskii functional is put forward to reflect the mode-dependent time-delays. Sufficient conditions are established in terms of linear matrix inequalities (LMIs) that guarantee the existence of the state estimators. We show that both the existence conditions and the explicit expression of the desired estimator can be characterized in terms of the solution to an LMI. A numerical example is exploited to show the usefulness of the derived LMI-based conditions

  6. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions do not depend only on the present state (Markov assumption) but also on the past through time since entry in the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.

  7. A study on the stochastic model for nuclide transport in the fractured porous rock using continuous time Markov process

    International Nuclear Information System (INIS)

    Lee, Youn Myoung

    1995-02-01

    As a new modelling approach, a stochastic model using a continuous-time Markov process for nuclide decay chain transport of arbitrary length in a fractured porous rock medium has been proposed, by which the need for solving a set of partial differential equations corresponding to various sets of side conditions can be avoided. Once the single planar fracture in the rock matrix is represented by a series of a finite number of compartments having region-wise constant parameter values, the medium is continuous with respect to the various processes associated with nuclide transport but discrete in space, and such a geologic system is assumed to have the Markov property. Since the Markov process requires that only the present value of the time-dependent random variable be known to determine its future value, nuclide transport in the medium can then be modeled as a continuous-time Markov process. The processes involved in nuclide transport are advective transport due to groundwater flow, diffusion into the rock matrix, adsorption onto the wall of the fracture and within the pores in the rock matrix, and the radioactive decay chain. The transition probabilities for a nuclide are obtained from the transition intensities between and out of the compartments utilizing the Chapman-Kolmogorov equation, through which the expectation and the variance of the nuclide distribution for each compartment or for the fractured rock medium can be obtained. Comparisons between the Markov process model developed in this work and available analytical solutions for a one-dimensional layered porous medium, a fractured medium with rock matrix diffusion, and a porous medium considering a three-member nuclide decay chain without rock matrix diffusion have been made, showing comparatively good agreement for all cases. To verify the model developed in this work, another comparative study was also made by fitting the experimental data obtained with NaLS and uranine running in the artificial fractured

  8. Expectation propagation for continuous time stochastic processes

    International Nuclear Information System (INIS)

    Cseke, Botond; Schnoerr, David; Sanguinetti, Guido; Opper, Manfred

    2016-01-01

    We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference, giving rise to an expectation propagation type algorithm. For non-linear diffusion processes, this is achieved by leveraging moment closure approximations. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems. (paper)

  9. A continuous-time/discrete-time mixed audio-band sigma delta ADC

    International Nuclear Information System (INIS)

    Liu Yan; Hua Siliang; Wang Donghui; Hou Chaohuan

    2011-01-01

    This paper introduces a mixed continuous-time/discrete-time, single-loop, fourth-order, 4-bit audio-band sigma delta ADC that combines the benefits of continuous-time and discrete-time circuits, while mitigating the challenges associated with continuous-time design. Measurement results show that the peak SNR of this ADC reaches 100 dB and the total power consumption is less than 30 mW. (semiconductor integrated circuits)

  10. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  11. 438 Optimal Number of States in Hidden Markov Models and its ...

    African Journals Online (AJOL)

    In this paper, Hidden Markov Model is applied to model human movements as to .... emit either discrete information or a continuous data derived from a Probability .... For each hidden state in the test set, the probability = ... by applying the Kullback-Leibler distance (Juang & Rabiner, 1985) which ..... One Size Does Not Fit.

  12. Mapping absorption processes onto a Markov chain, conserving the mean first passage time

    International Nuclear Information System (INIS)

    Biswas, Katja

    2013-01-01

    The dynamics of a multidimensional system is projected onto a discrete-state master equation using the transition rates W(k → k′; t, t + dt) between a set of states {k} represented by the regions {ζ_k} in phase or discrete state space. Depending on the dynamics Γ_i(t) of the original process and the choice of ζ_k, the discretized process can be Markovian or non-Markovian. For absorption processes, it is shown that, irrespective of these properties of the projection, a master equation with time-independent transition rates W̄(k → k′) can be obtained, which conserves the total occupation time of the partitions of the phase or discrete state space of the original process. An expression for the transition probabilities p̄(k′|k) is derived based on either time-discrete measurements {t_i} with variable time stepping Δ_{i+1,i} = t_{i+1} − t_i or on theoretical knowledge at continuous times t. This allows computational methods for absorbing Markov chains to be used to obtain the mean first passage time (MFPT) of the system. To illustrate this approach, the procedure is applied to obtain the MFPT for the overdamped Brownian motion of particles subject to a system with dichotomous noise and for the escape from an entropic barrier. The high accuracy of the simulation results agrees with the theory. (paper)
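
    Once an absorption process has been mapped onto a chain with time-independent rates, the mean first passage time follows from standard absorbing-chain algebra; a minimal continuous-time sketch is given below. The generator is an arbitrary illustration, not the dichotomous-noise or entropic-barrier systems studied in the paper.

```python
# Hedged sketch of the final step relied on above: for a master equation with
# time-independent rates, the mean first passage time (MFPT) from each
# transient state solves Q_T @ tau = -1, where Q_T is the generator restricted
# to the transient states.  The rates below are arbitrary illustration values.
import numpy as np

# Generator on states {0, 1, 2, absorbing}; last row/column dropped below.
Q = np.array([[-1.2,  1.0,  0.2,  0.0],
              [ 0.5, -1.5,  0.8,  0.2],
              [ 0.0,  0.6, -1.6,  1.0],
              [ 0.0,  0.0,  0.0,  0.0]])
Q_T = Q[:3, :3]

tau = -np.linalg.solve(Q_T, np.ones(3))
print("MFPT to absorption from each transient state:", tau)
```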

  13. Model Checking Infinite-State Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.

    2004-01-01

    In this paper algorithms for model checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains of so-called quasi birth-death type are developed. In doing so we extend the applicability of CSL model checking beyond the recently proposed case for finite-state

  14. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, P.; van Doorn, E.A.

    2001-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
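
    For a finite ergodic chain the defining integral above can be evaluated in closed form, which makes a quick numerical sketch possible. Assuming a small made-up generator Q, the snippet below uses the standard representation D = (Π − Q)^{-1} − Π and cross-checks it against a truncated numerical integration of P(t) − Π; the cut-off at t = 40 and the toy rates are arbitrary choices, not anything from the paper.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Made-up generator of an ergodic 3-state CTMC (rows sum to zero).
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# Stationary distribution pi: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
Pi = np.tile(pi, (len(Q), 1))                    # ergodic matrix: identical rows

# Deviation matrix via the representation D = (Pi - Q)^{-1} - Pi.
D = np.linalg.inv(Pi - Q) - Pi

# Cross-check against the defining integral, truncated at t = 40.
ts = np.linspace(0.0, 40.0, 4001)
vals = np.array([expm(Q * t) - Pi for t in ts])
D_num = trapezoid(vals, ts, axis=0)
print(np.max(np.abs(D - D_num)))                 # should be very small
```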

  15. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, Pauline; van Doorn, Erik A.

    2002-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a

  16. Stochastic Kuramoto oscillators with discrete phase states

    Science.gov (United States)

    Jörg, David J.

    2017-09-01

    We present a generalization of the Kuramoto phase oscillator model in which phases advance in discrete phase increments through Poisson processes, rendering both intrinsic oscillations and coupling inherently stochastic. We study the effects of phase discretization on the synchronization and precision properties of the coupled system both analytically and numerically. Remarkably, many key observables such as the steady-state synchrony and the quality of oscillations show distinct extrema while converging to the classical Kuramoto model in the limit of a continuous phase. The phase-discretized model provides a general framework for coupled oscillations in a Markov chain setting.
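
    A minimal simulation sketch of such a phase-discretized oscillator model is shown below. The abstract does not give the exact jump rates, so the rate of each oscillator is assumed here to be the usual Kuramoto drift-plus-coupling term, clipped at zero and divided by the phase increment; the population size, coupling strength and simulation horizon are likewise made up. Events are generated with a Gillespie-type scheme, consistent with the Markov chain setting of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters (not from the paper): N oscillators, n_phase discrete
# phase states, coupling K, intrinsic frequency omega, horizon T.
N, n_phase, K, omega, T = 50, 20, 1.5, 1.0, 50.0
dphi = 2.0 * np.pi / n_phase
phases = rng.integers(0, n_phase, size=N) * dphi       # discrete phases on a ring

def jump_rates(phases):
    """Assumed Poisson rate per oscillator: (drift + coupling) per phase step."""
    coupling = (K / N) * np.sin(phases[None, :] - phases[:, None]).sum(axis=1)
    return np.maximum(omega + coupling, 0.0) / dphi

t = 0.0
while t < T:
    lam = jump_rates(phases)
    total = lam.sum()
    t += rng.exponential(1.0 / total)                   # Gillespie waiting time
    i = rng.choice(N, p=lam / total)                    # which oscillator jumps
    phases[i] = (phases[i] + dphi) % (2.0 * np.pi)      # advance by one increment

r = np.abs(np.exp(1j * phases).mean())                  # Kuramoto order parameter
print(f"synchrony r = {r:.3f}")
```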

  17. Stochastic Kuramoto oscillators with discrete phase states.

    Science.gov (United States)

    Jörg, David J

    2017-09-01

    We present a generalization of the Kuramoto phase oscillator model in which phases advance in discrete phase increments through Poisson processes, rendering both intrinsic oscillations and coupling inherently stochastic. We study the effects of phase discretization on the synchronization and precision properties of the coupled system both analytically and numerically. Remarkably, many key observables such as the steady-state synchrony and the quality of oscillations show distinct extrema while converging to the classical Kuramoto model in the limit of a continuous phase. The phase-discretized model provides a general framework for coupled oscillations in a Markov chain setting.

  18. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    Energy Technology Data Exchange (ETDEWEB)

    Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)

    2008-07-18

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  19. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    International Nuclear Information System (INIS)

    Frank, T D

    2008-01-01

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  20. Process Modeling for Energy Usage in “Smart House” System with a Help of Markov Discrete Chain

    Directory of Open Access Journals (Sweden)

    Victor Kravets

    2016-05-01

    Full Text Available A method for evaluating the economic efficiency of technical systems using discrete Markov chain modelling is illustrated by a "Smart house" system consisting, for example, of three independently functioning elements. A dynamic model of the random power-consumption process is built in the form of a symmetric state graph of an inhomogeneous discrete Markov chain. The corresponding mathematical model of the random Markov process of power consumption in the "smart house" system is developed in recurrent matrix form. A technique is developed for the statistical determination of the random transition probabilities of the system elements and of the corresponding transition probability matrix of the inhomogeneous discrete Markov chain. Statistically determined random transitions of the elements' power consumption and the corresponding distribution laws are introduced. The matrix of transition prices, the expected prices of transitions between the possible states of the system and, eventually, the cost of the Markov power-consumption process throughout the day are obtained.

  1. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications Their Use in Reliability and DNA Analysis

    CERN Document Server

    Barbu, Vlad

    2008-01-01

    Semi-Markov processes are much more general and better adapted to applications than the Markov ones because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn time in the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes

  2. Fast-slow asymptotics for a Markov chain model of fast sodium current

    Science.gov (United States)

    Starý, Tomáš; Biktashev, Vadim N.

    2017-09-01

    We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.

  3. Rate Reduction for State-labelled Markov Chains with Upper Time-bounded CSL Requirements

    Directory of Open Access Journals (Sweden)

    Bharath Siva Kumar Tati

    2016-07-01

    Full Text Available This paper presents algorithms for identifying and reducing a dedicated set of controllable transition rates of a state-labelled continuous-time Markov chain model. The purpose of the reduction is to make states satisfy a given requirement, specified as a CSL upper time-bounded Until formula. We distinguish two different cases, depending on the type of probability bound. A natural partitioning of the state space allows us to develop possible solutions, leading to simple algorithms for both cases.

  4. Discrete-time semi-Markov modeling of human papillomavirus persistence

    Science.gov (United States)

    Mitchell, C. E.; Hudgens, M. G.; King, C. C.; Cu-Uvin, S.; Lo, Y.; Rompalo, A.; Sobel, J.; Smith, J. S.

    2011-01-01

    Multi-state modeling is often employed to describe the progression of a disease process. In epidemiological studies of certain diseases, the disease state is typically only observed at periodic clinical visits, producing incomplete longitudinal data. In this paper we consider fitting semi-Markov models to estimate the persistence of human papillomavirus (HPV) type-specific infection in studies where the status of HPV type(s) is assessed periodically. Simulation study results are presented indicating the semi-Markov estimator is more accurate than an estimator currently used in the HPV literature. The methods are illustrated using data from the HIV Epidemiology Research Study (HERS). PMID:21538985

  5. Parallel algorithms for simulating continuous time Markov chains

    Science.gov (United States)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
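
    The uniformization idea the paper builds on can be sketched serially in a few lines (the parallel synchronization schemes compared in the paper are not reproduced here). Transient probabilities of a CTMC with generator Q are written as a Poisson mixture of powers of the DTMC P = I + Q/Λ; the toy generator below is made up for illustration.

```python
import numpy as np
from scipy.linalg import expm   # only used for the reference answer

def uniformized_transient(Q, p0, t, tol=1e-12):
    """p(t) = p0 exp(Q t) via uniformization (serial sketch)."""
    Lam = max(-np.diag(Q)) * 1.05            # uniformization rate >= max exit rate
    P = np.eye(len(Q)) + Q / Lam             # DTMC subordinated to a Poisson clock
    weight = np.exp(-Lam * t)                # Poisson pmf at k = 0
    cum, k = weight, 0
    v = p0.astype(float).copy()
    out = weight * v
    while cum < 1.0 - tol:                   # stop once the Poisson tail is tiny
        k += 1
        v = v @ P                            # p0 P^k
        weight *= Lam * t / k                # Poisson pmf recursion
        cum += weight
        out += weight * v
    return out

# Made-up 3-state generator.
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -0.9,  0.5],
              [ 0.2,  0.8, -1.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(uniformized_transient(Q, p0, t=2.5))
print(p0 @ expm(Q * 2.5))                    # reference answer for comparison
```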

  6. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de

  7. Prognostics for Steam Generator Tube Rupture using Markov Chain model

    International Nuclear Information System (INIS)

    Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin

    2016-01-01

    This paper describes a prognostics method for evaluating and forecasting ageing effects and demonstrates the procedure for the Steam Generator Tube Rupture (SGTR) accident. The authors propose a data-driven method, the so-called MCMC (Markov Chain Monte Carlo) approach, which is preferred to physical-model methods in terms of flexibility and availability. Degradation data are represented as the growth of burst probability over time. The Markov chain model is based on the transition probabilities between states, and the states must be discrete variables. Therefore, the burst probability, which is a continuous variable, has to be discretized before the Markov chain model can be applied to the degradation data. The Markov chain model, one of the prognostics methods, is described and a pilot demonstration for an SGTR accident is performed as a case study. The Markov chain model is strong since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there is necessarily a loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be reflected sufficiently in the prediction. This should be noted as a limitation of discrete models. We are now studying other prognostics methods, such as the GPM (General Path Model), which is also a data-driven method, as well as the particle filter, which belongs to the physical-model methods, and conducting a comparison analysis
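
    The discretization-and-transition-counting step described above can be sketched as follows, with a synthetic monotone degradation signal standing in for the burst-probability data (which the record does not reproduce); the number of states, the bin edges and the forecast horizon are all arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic degradation history (stand-in for the burst-probability data).
steps = 500
signal = np.clip(np.cumsum(rng.normal(0.002, 0.01, size=steps)), 0.0, 1.0)

# Discretize the continuous variable into a finite number of states.
n_states = 10
edges = np.linspace(0.0, 1.0, n_states + 1)
states = np.clip(np.digitize(signal, edges) - 1, 0, n_states - 1)

# Maximum-likelihood transition matrix from observed one-step transitions;
# unvisited states default to self-loops.
counts = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1.0
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.eye(n_states), where=row_sums > 0)

# Forecast: distribution over degradation states 50 steps ahead of the last state.
p = np.zeros(n_states)
p[states[-1]] = 1.0
for _ in range(50):
    p = p @ P
print("P(most degraded state after 50 steps) =", round(p[-1], 3))
```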

  8. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of time and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).
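
    The core of such a monitoring scheme is the forward (filtering) recursion of a hidden Markov model, sketched below with a hypothetical two-state (normal/faulty) model and made-up transition and emission probabilities; none of the numbers come from the article.

```python
import numpy as np

# Hypothetical two-state fault model (all numbers made up):
# hidden states 0 = normal, 1 = faulty; observations are discretized sensor symbols.
A = np.array([[0.99, 0.01],        # per-sample state transition probabilities
              [0.00, 1.00]])       # faults are persistent in this sketch
B = np.array([[0.80, 0.15, 0.05],  # emission probabilities given "normal"
              [0.10, 0.30, 0.60]]) # emission probabilities given "faulty"
pi0 = np.array([1.0, 0.0])

def filtered_fault_probability(obs):
    """Forward algorithm: P(faulty at time t | observations up to t)."""
    alpha = pi0 * B[:, obs[0]]
    alpha /= alpha.sum()
    history = [alpha[1]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()               # normalize to avoid underflow
        history.append(alpha[1])
    return np.array(history)

obs = [0, 0, 1, 0, 2, 2, 1, 2, 2, 2]       # sensor symbols drifting towards "2"
print(filtered_fault_probability(obs).round(3))
```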

  9. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
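
    A rejection-free (BKL-style) kinetic Monte Carlo sketch makes the physical time scale concrete: every event advances the clock by an exponential waiting time whose rate is the total exit rate of the current state, so occupation fractions accumulate in physical time rather than in Monte Carlo steps. The 3-state rate matrix below is made up, and the run is cross-checked against the stationary distribution of the corresponding generator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up off-diagonal transition rates of a 3-state continuous-time Markov chain.
R = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.0, 0.5, 0.0]])

state, t = 0, 0.0
occupation = np.zeros(3)
for _ in range(50_000):
    rates = R[state]
    total = rates.sum()
    dt = rng.exponential(1.0 / total)       # physical time increment (rejection-free)
    occupation[state] += dt
    t += dt
    state = rng.choice(3, p=rates / total)  # pick the next state proportional to rates

print("empirical occupation fractions:", (occupation / t).round(4))

# Reference: stationary distribution of the generator Q = R - diag(row sums).
Q = R - np.diag(R.sum(axis=1))
A = np.vstack([Q.T, np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
print("stationary distribution:       ", pi.round(4))
```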

  10. Simulating continuous-time Hamiltonian dynamics by way of a discrete-time quantum walk

    International Nuclear Information System (INIS)

    Schmitz, A.T.; Schwalm, W.A.

    2016-01-01

    Much effort has been made to connect the continuous-time and discrete-time quantum walks. We present a method for making that connection for a general graph Hamiltonian on a bigraph. Furthermore, such a scheme may be adapted for simulating discretized quantum models on a quantum computer. A coin operator is found for the discrete-time quantum walk which exhibits the same dynamics as the continuous-time evolution. Given the spectral decomposition of the graph Hamiltonian and certain restrictions, the discrete-time evolution is solved for explicitly and understood at or near important values of the parameters. Finally, this scheme is connected to past results for the 1D chain. - Highlights: • A discrete-time quantum walk is proposed which approximates a continuous-time quantum walk. • The proposed quantum walk could be used to simulate Hamiltonian dynamics on a quantum computer. • Given the spectral decomposition of the Hamiltonian, the quantum walk is solved explicitly. • The method is demonstrated and connected to previous work done on the 1D chain.

  11. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    Science.gov (United States)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  12. Using Continuous Action Spaces to Solve Discrete Problems

    NARCIS (Netherlands)

    van Hasselt, Hado; Wiering, Marco

    2009-01-01

    Real-world control problems are often modeled as Markov Decision Processes (MDPs) with discrete action spaces to facilitate the use of the many reinforcement learning algorithms that exist to find solutions for such MDPs. For many of these problems an underlying continuous action space can be

  13. Continuous-Time Semi-Markov Models in Health Economic Decision Making : An Illustrative Example in Heart Failure Disease Management

    NARCIS (Netherlands)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease

  14. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  15. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  16. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  17. Stability Analysis of Continuous-Time and Discrete-Time Quaternion-Valued Neural Networks With Linear Threshold Neurons.

    Science.gov (United States)

    Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong

    2018-07-01

    This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.

  18. Time-delay analyzer with continuous discretization

    International Nuclear Information System (INIS)

    Bayatyan, G.L.; Darbinyan, K.T.; Mkrtchyan, K.K.; Stepanyan, S.S.

    1988-01-01

    A time-delay analyzer is described which when triggered by a start pulse of adjustable duration performs continuous discretization of the analyzed signal within nearly 22 ns time intervals, the recording in a memory unit with following slow read-out of the information to the computer and its processing. The time-delay analyzer consists of four CAMAC-VECTOR systems of unit width. With its help one can separate comparatively short, small-amplitude rare signals against the background of quasistationary noise processes. 4 refs.; 3 figs

  19. Markov chains theory and applications

    CERN Document Server

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest.The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the

  20. Introduction to the numerical solutions of Markov chains

    CERN Document Server

    Stewart, Williams J

    1994-01-01

    A cornerstone of applied probability, Markov chains can be used to help model how plants grow, chemicals react, and atoms diffuse - and applications are increasingly being found in such areas as engineering, computer science, economics, and education. To apply the techniques to real problems, however, it is necessary to understand how Markov chains can be solved numerically. In this book, the first to offer a systematic and detailed treatment of the numerical solution of Markov chains, William Stewart provides scientists on many levels with the power to put this theory to use in the actual world, where it has applications in areas as diverse as engineering, economics, and education. His efforts make for essential reading in a rapidly growing field. Here, Stewart explores all aspects of numerically computing solutions of Markov chains, especially when the state is huge. He provides extensive background to both discrete-time and continuous-time Markov chains and examines many different numerical computing metho...

  1. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
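
    For intuition, endpoint-conditioned paths can always be generated by naive rejection, i.e. forward simulation discarded unless the path hits the required end state; this is only a baseline, not the bisection algorithm of the paper, and it becomes hopeless when the endpoint is unlikely. The generator below is made up.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_bridge_rejection(Q, a, b, T, max_tries=100_000):
    """Endpoint-conditioned path by forward simulation and rejection:
    simulate from X(0) = a on [0, T] and keep the path only if X(T) = b."""
    n = len(Q)
    for _ in range(max_tries):
        t, state, path = 0.0, a, [(0.0, a)]
        while True:
            t += rng.exponential(1.0 / -Q[state, state])   # exponential holding time
            if t >= T:
                break
            probs = Q[state].clip(min=0.0)                 # jump probabilities
            state = rng.choice(n, p=probs / probs.sum())
            path.append((round(t, 4), state))
        if state == b:
            return path
    raise RuntimeError("no accepted path; endpoint too unlikely for rejection")

# Made-up generator.
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.3, -0.8,  0.5],
              [ 0.2,  0.7, -0.9]])
print(sample_bridge_rejection(Q, a=0, b=2, T=1.5))
```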

  2. Fluctuation relations for equilibrium states with broken discrete or continuous symmetries

    International Nuclear Information System (INIS)

    Lacoste, D; Gaspard, P

    2015-01-01

    Isometric fluctuation relations are deduced for the fluctuations of the order parameter in equilibrium systems of condensed-matter physics with broken discrete or continuous symmetries. These relations are similar to their analogues obtained for non-equilibrium systems where the broken symmetry is time reversal. At equilibrium, these relations show that the ratio of the probabilities of opposite fluctuations goes exponentially with the symmetry-breaking external field and the magnitude of the fluctuations. These relations are applied to the Curie–Weiss, Heisenberg, and XY models of magnetism where the continuous rotational symmetry is broken, as well as to the q-state Potts model and the p-state clock model where discrete symmetries are broken. Broken symmetries are also considered in the anisotropic Curie–Weiss model. For infinite systems, the results are calculated using large-deviation theory. The relations are also applied to mean-field models of nematic liquid crystals where the order parameter is tensorial. Moreover, their extension to quantum systems is also deduced. (paper)

  3. Exponential stability of continuous-time and discrete-time bidirectional associative memory networks with delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2004-01-01

    First, convergence of continuous-time Bidirectional Associative Memory (BAM) neural networks is studied. By using Lyapunov functionals and some analysis techniques, delay-independent sufficient conditions are obtained for the networks to converge exponentially toward the equilibrium associated with the constant input sources. Second, discrete-time analogues of the continuous-time BAM networks are formulated and studied. It is shown that the convergence characteristics of the continuous-time systems are preserved by the discrete-time analogues without any restriction imposed on the uniform discretization step size. An illustrative example is given to demonstrate the effectiveness of the obtained results

  4. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    2012-01-01

    If $(C_n)$ is a Markov chain on a discrete state space $S$, a Markov chain $(C_n, M_n)$ on the product space $S \\times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both

  5. A Markov reward model checker

    NARCIS (Netherlands)

    Katoen, Joost P.; Maneesh Khattri, M.; Zapreev, I.S.; Zapreev, I.S.

    2005-01-01

    This short tool paper introduces MRMC, a model checker for discrete-time and continuous-time Markov reward models. It supports reward extensions of PCTL and CSL, and allows for the automated verification of properties concerning long-run and instantaneous rewards as well as cumulative rewards. In

  6. The Green-Kubo formula, autocorrelation function and fluctuation spectrum for finite Markov chains with continuous time

    International Nuclear Information System (INIS)

    Chen Yong; Chen Xi; Qian Minping

    2006-01-01

    A general form of the Green-Kubo formula, which describes the fluctuations pertaining to all the steady states whether equilibrium or non-equilibrium, for a system driven by a finite Markov chain with continuous time (briefly, MC) {ξ_t}, is shown. The equivalence of different forms of the Green-Kubo formula is exploited. We also look at the differences in terms of the autocorrelation function and the fluctuation spectrum between the equilibrium state and the non-equilibrium steady state. Also, if the MC is in the non-equilibrium steady state, we can always find a complex function ψ, such that the fluctuation spectrum of {φ(ξ_t)} is non-monotonic on [0, +∞)

  7. Performance analysis of chi models using discrete-time probabilistic reward graphs

    NARCIS (Netherlands)

    Trcka, N.; Georgievska, S.; Markovski, J.; Andova, S.; Vink, de E.P.

    2008-01-01

    We propose the model of discrete-time probabilistic reward graphs (DTPRGs) for performance analysis of systems exhibiting discrete deterministic time delays and probabilistic behavior, via their interpretation as discrete-time Markov reward chains, a full-fledged platform for qualitative and

  8. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  9. Markov state models of protein misfolding

    Science.gov (United States)

    Sirur, Anshul; De Sancho, David; Best, Robert B.

    2016-02-01

    Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity.
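
    The basic MSM estimation step (count transitions at a lag time in a discretized trajectory, row-normalize, read off implied timescales) can be sketched as follows. A synthetic discrete trajectory stands in for clustered MD frames; the contact-map discretization and the sink-state treatment of the paper are not modelled here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic discrete trajectory standing in for clustered MD frames.
true_P = np.array([[0.95, 0.04, 0.01],
                   [0.03, 0.90, 0.07],
                   [0.00, 0.02, 0.98]])
traj = [0]
for _ in range(20_000):
    traj.append(rng.choice(3, p=true_P[traj[-1]]))
traj = np.array(traj)

def estimate_msm(traj, n_states, lag):
    """Row-normalized transition count matrix at a given lag time."""
    C = np.zeros((n_states, n_states))
    np.add.at(C, (traj[:-lag], traj[lag:]), 1.0)
    return C / C.sum(axis=1, keepdims=True)

P_hat = estimate_msm(traj, n_states=3, lag=1)
print(P_hat.round(3))

# Implied timescales from the non-unit eigenvalues: t_i = -lag / ln(lambda_i).
lam = np.sort(np.abs(np.linalg.eigvals(P_hat)))[::-1]
print("implied timescales:", (-1.0 / np.log(lam[1:])).round(1))
```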

  10. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor...

  11. A New Approach to Rational Discrete-Time Approximations to Continuous-Time Fractional-Order Systems

    OpenAIRE

    Matos, Carlos; Ortigueira, Manuel

    2012-01-01

    Part 10: Signal Processing; International audience; In this paper a new approach to rational discrete-time approximations to continuous fractional-order systems of the form 1/(s^α + p) is proposed. We will show that such fractional-order LTI system can be decomposed into sub-systems. One has the classic behavior and the other is similar to a Finite Impulse Response (FIR) system. The conversion from continuous-time to discrete-time systems will be done using the Laplace transform inversion integr...

  12. Markov chains and mixing times

    CERN Document Server

    Levin, David A

    2017-01-01

    Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...

  13. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.

  14. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed...

  15. Semi-Markov Arnason-Schwarz models.

    Science.gov (United States)

    King, Ruth; Langrock, Roland

    2016-06-01

    We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.

  16. The Green-Kubo formula, autocorrelation function and fluctuation spectrum for finite Markov chains with continuous time

    Energy Technology Data Exchange (ETDEWEB)

    Chen Yong; Chen Xi; Qian Minping [School of Mathematical Sciences, Peking University, Beijing 100871 (China)

    2006-03-17

    A general form of the Green-Kubo formula, which describes the fluctuations pertaining to all the steady states whether equilibrium or non-equilibrium, for a system driven by a finite Markov chain with continuous time (briefly, MC) {ξ_t}, is shown. The equivalence of different forms of the Green-Kubo formula is exploited. We also look at the differences in terms of the autocorrelation function and the fluctuation spectrum between the equilibrium state and the non-equilibrium steady state. Also, if the MC is in the non-equilibrium steady state, we can always find a complex function ψ, such that the fluctuation spectrum of {φ(ξ_t)} is non-monotonic on [0, +∞)

  17. Integrating Continuous-Time and Discrete-Event Concepts in Process Modelling, Simulation and Control

    NARCIS (Netherlands)

    Beek, van D.A.; Gordijn, S.H.F.; Rooda, J.E.; Ertas, A.

    1995-01-01

    Currently, modelling of systems in the process industry requires the use of different specification languages for the specification of the discrete-event and continuous-time subsystems. In this way, models are restricted to individual subsystems of either a continuous-time or discrete-event nature.

  18. Continuous and Discrete-Time Optimal Controls for an Isolated Signalized Intersection

    Directory of Open Access Journals (Sweden)

    Jiyuan Tan

    2017-01-01

    Full Text Available A classical control problem for an isolated oversaturated intersection is revisited with a focus on the optimal control policy to minimize total delay. The difference and connection between existing continuous-time planning models and recently proposed discrete-time planning models are studied. A gradient descent algorithm is proposed to convert the optimal control plan of the continuous-time model to the plan of the discrete-time model in many cases. Analytic proof and numerical tests for the algorithm are also presented. The findings shed light on the links between two kinds of models.

  19. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

    This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows us to formulate complex measures – involving expected as well as accumulated rewards – in a precise and

  20. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    DEFF Research Database (Denmark)

    Tataru, Paula Cristina; Hobolth, Asger

    2011-01-01

    BACKGROUND: Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. RESULTS: We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain: one based on an eigenvalue decomposition of the rate matrix (EVD), one on uniformization (UNI), and one on integrals of matrix exponentials (EXPM). The implementation of the algorithms is available at www.birc.au.dk/~paula/. CONCLUSIONS: We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.

  1. Efficient Approximation of Optimal Control for Markov Games

    DEFF Research Database (Denmark)

    Fearnley, John; Rabe, Markus; Schewe, Sven

    2011-01-01

    We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately...

  2. Perturbed Markov chains

    OpenAIRE

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.

  3. Markov state modeling and dynamical coarse-graining via discrete relaxation path sampling.

    Science.gov (United States)

    Fačkovec, B; Vanden-Eijnden, E; Wales, D J

    2015-07-28

    A method is derived to coarse-grain the dynamics of complex molecular systems to a Markov jump process (MJP) describing how the system jumps between cells that fully partition its state space. The main inputs are relaxation times for each pair of cells, which are shown to be robust with respect to positioning of the cell boundaries. These relaxation times can be calculated via molecular dynamics simulations performed in each cell separately and are used in an efficient estimator for the rate matrix of the MJP. The method is illustrated through applications to Sinai billiards and a cluster of Lennard-Jones discs.

  4. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  5. a Continuous-Time Positive Linear System

    Directory of Open Access Journals (Sweden)

    Kyungsup Kim

    2013-01-01

    Full Text Available This paper discusses a computational method to construct positive realizations with sparse matrices for continuous-time positive linear systems with multiple complex poles. To construct a positive realization of a continuous-time system, we use a Markov sequence similar to the impulse response sequence that is used in the discrete-time case. The existence of the proposed positive realization can be analyzed with the concept of a polyhedral convex cone. We provide a constructive algorithm to compute positive realizations with sparse matrices of some positive systems under certain conditions. A sufficient condition for the existence of a positive realization, under which the proposed constructive algorithm works well, is analyzed.

  6. Stochastic modeling of pitting corrosion in underground pipelines using Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, J.C.; Caleyo, F.; Hallen, J.M.; Araujo, J.E. [Instituto Politecnico Nacional (IPN), Mexico D.F. (Mexico). Escuela Superior de Ingenieria Quimica e Industrias Extractivas (ESIQIE); Valor, A. [Universidad de La Habana, La Habana (Cuba)

    2009-07-01

    A non-homogeneous, linear growth (pure birth) Markov process, with discrete states in continuous time, has been used to model external pitting corrosion in underground pipelines. The transition probability function for the pit depth is obtained from the analytical solution of the forward Kolmogorov equations for this process. The parameters of the transition probability function between depth states can be identified from the observed time evolution of the mean of the pit depth distribution. Monte Carlo simulations were used to predict the time evolution of the mean value of the pit depth distribution in soils with different physicochemical characteristics. The simulated distributions have been used to create an empirical Markov-chain-based stochastic model for predicting the evolution of pitting corrosion from the observed properties of the soil in contact with the pipeline. Real-life case studies involving simulated and measured pit depth distributions are presented to illustrate the application of the proposed Markov chains model. (author)
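
    A non-homogeneous linear-growth (pure birth) process of this kind is easy to simulate exactly once a form for the time-dependent intensity is fixed. The sketch below assumes λ_k(t) = k·α/(t + t0), for which the waiting-time survival function can be inverted in closed form; the parameter values are made up and are not the soil-dependent parameters identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed intensity lambda_k(t) = k * alpha / (t + t0) for the depth-state index k.
alpha, t0 = 0.8, 1.0

def simulate_pit_depth(k0, horizon):
    """One sample path: depth state reached by the time horizon."""
    t, k = 0.0, k0
    while True:
        u = 1.0 - rng.random()          # uniform in (0, 1]
        # Exact waiting time: invert the survival function of the next jump,
        # S(w) = ((t + t0) / (t + w + t0)) ** (k * alpha).
        w = (t + t0) * (u ** (-1.0 / (k * alpha)) - 1.0)
        if t + w > horizon:
            return k
        t += w
        k += 1

for h in (1.0, 5.0, 10.0, 20.0):
    depths = [simulate_pit_depth(k0=1, horizon=h) for _ in range(5000)]
    print(f"t = {h:5.1f}   mean depth state = {np.mean(depths):.2f}")
```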

  7. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  8. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  9. Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models

    NARCIS (Netherlands)

    ter Hofstede, F.; Wedel, M.

    In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were

  10. A Stochastic Hybrid Systems framework for analysis of Markov reward models

    International Nuclear Information System (INIS)

    Dhople, S.V.; DeVille, L.; Domínguez-García, A.D.

    2014-01-01

    In this paper, we propose a framework to analyze Markov reward models, which are commonly used in system performability analysis. The framework builds on a set of analytical tools developed for a class of stochastic processes referred to as Stochastic Hybrid Systems (SHS). The state space of an SHS is comprised of: (i) a discrete state that describes the possible configurations/modes that a system can adopt, which includes the nominal (non-faulty) operational mode, but also those operational modes that arise due to component faults, and (ii) a continuous state that describes the reward. Discrete state transitions are stochastic, and governed by transition rates that are (in general) a function of time and the value of the continuous state. The evolution of the continuous state is described by a stochastic differential equation and reward measures are defined as functions of the continuous state. Additionally, each transition is associated with a reset map that defines the mapping between the pre- and post-transition values of the discrete and continuous states; these mappings enable the definition of impulses and losses in the reward. The proposed SHS-based framework unifies the analysis of a variety of previously studied reward models. We illustrate the application of the framework to performability analysis via analytical and numerical examples

  11. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...

  12. An Application of Graph Theory in Markov Chains Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Pavel Skalny

    2014-01-01

    Full Text Available The paper presents a reliability analysis which was realized for an industrial company. The aim of the paper is to present the usage of discrete time Markov chains and the network flow approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue. The method is suitable for many systems occurring in practice where we can easily distinguish a number of distinct states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network. The maximal flow in the network corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected value of the amount of manufactured products for the given time period.
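
    The combination of the two methods can be sketched as follows: a maximum flow is computed for the production network in each system state, and the discrete-time Markov chain over the states weights those per-state outputs over a planning horizon. The network, capacities and transition matrix below are made up, and the networkx library is assumed to be available for the Ford-Fulkerson-type flow computation.

```python
import numpy as np
import networkx as nx

def production_network(machine_capacity):
    """Tiny made-up production network; capacity of one edge depends on the state."""
    G = nx.DiGraph()
    G.add_edge("source", "machine", capacity=machine_capacity)
    G.add_edge("machine", "assembly", capacity=8.0)
    G.add_edge("assembly", "sink", capacity=10.0)
    return G

# Per-state machine capacity: fully working / degraded / failed.
capacities = {0: 10.0, 1: 5.0, 2: 0.0}
flows = {s: nx.maximum_flow(production_network(c), "source", "sink")[0]
         for s, c in capacities.items()}

# Made-up discrete-time Markov chain over the three system states.
P = np.array([[0.90, 0.08, 0.02],
              [0.30, 0.60, 0.10],
              [0.50, 0.00, 0.50]])

# Expected production over a finite horizon, starting from the fully working state.
p = np.array([1.0, 0.0, 0.0])
expected = 0.0
for _ in range(40):                      # 40 production periods
    expected += sum(p[s] * flows[s] for s in flows)
    p = p @ P
print("expected output over 40 periods:", round(expected, 1))
```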

  13. Bisimulation and Simulation Relations for Markov Chains

    NARCIS (Netherlands)

    Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.

    2006-01-01

    Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.

  14. Stochastic ℋ∞ Finite-Time Control of Discrete-Time Systems with Packet Loss

    Directory of Open Access Journals (Sweden)

    Yingqi Zhang

    2012-01-01

    Full Text Available This paper investigates the stochastic finite-time stabilization and ℋ∞ control problem for one family of linear discrete-time systems over networks with packet loss, parametric uncertainties, and time-varying norm-bounded disturbance. Firstly, the dynamic model studied is described; if the packet dropout is assumed to be a discrete-time homogeneous Markov process, the class of discrete-time linear systems with packet loss can be regarded as Markovian jump systems. Based on the Lyapunov function approach, sufficient conditions are established for the resulting closed-loop discrete-time system with Markovian jumps to be stochastically ℋ∞ finite-time bounded, and then state feedback controllers are designed to guarantee stochastic ℋ∞ finite-time stabilization of the class of stochastic systems. The stochastic ℋ∞ finite-time boundedness criteria can be tackled in the form of linear matrix inequalities with a fixed parameter. As an auxiliary result, we also give sufficient conditions on the robust stochastic stabilization of the class of linear systems with packet loss. Finally, simulation examples are presented to illustrate the validity of the developed scheme.

  15. Bayesian inference for hybrid discrete-continuous stochastic kinetic models

    International Nuclear Information System (INIS)

    Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S

    2014-01-01

    We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’ with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through a MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)

  16. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
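
    A plain forward Monte Carlo estimate of mean unavailability (without the adjoint-based weighting discussed in the paper) is sketched below for a hypothetical two-component repairable system; the failure and repair rates and the mission time are made up, and the result is compared with the steady-state both-down probability of two independent components.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two identical repairable components in parallel (made-up rates); the system is
# unavailable only when both are down.
lam, mu, T = 0.01, 0.5, 1000.0          # failure rate, repair rate, mission time

def downtime_one_history():
    """Total system downtime over [0, T] for one simulated history."""
    up, t, down = 2, 0.0, 0.0
    while t < T:
        rate = up * lam + (2 - up) * mu           # total transition rate
        dt = min(rng.exponential(1.0 / rate), T - t)
        if up == 0:
            down += dt
        t += dt
        if t >= T:
            break
        if rng.random() < (up * lam) / rate:      # a failure occurred...
            up -= 1
        else:                                     # ...otherwise a repair finished
            up += 1
    return down

downtimes = np.array([downtime_one_history() for _ in range(10_000)])
print("estimated mean unavailability :", (downtimes / T).mean())
print("steady-state both-down approx :", (lam / (lam + mu)) ** 2)
```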

  17. On the relationship of steady states of continuous and discrete models arising from biology.

    Science.gov (United States)

    Veliz-Cuba, Alan; Arthur, Joseph; Hochstetler, Laura; Klomps, Victoria; Korpi, Erikka

    2012-12-01

    For many biological systems that have been modeled using continuous and discrete models, it has been shown that such models have similar dynamical properties. In this paper, we prove that this happens in more general cases. We show that under some conditions there is a bijection between the steady states of continuous and discrete models arising from biological systems. Our results also provide a novel method to analyze certain classes of nonlinear models using discrete mathematics.

  18. H∞ Filtering for Discrete Markov Jump Singular Systems with Mode-Dependent Time Delay Based on T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Cheng Gong

    2014-01-01

    Full Text Available This paper investigates the H∞ filtering problem of discrete singular Markov jump systems (SMJSs) with mode-dependent time delay based on the T-S fuzzy model. First, by the Lyapunov-Krasovskii functional approach, a delay-dependent sufficient condition on H∞-disturbance attenuation is presented, in which both stability and prescribed H∞ performance are required to be achieved for the filtering-error systems. Then, based on this condition, a delay-dependent H∞ filter design scheme for SMJSs with mode-dependent time delay based on the T-S fuzzy model is developed in terms of linear matrix inequalities (LMIs). Finally, an example is given to illustrate the effectiveness of the result.

  19. Dynamics of continuous-time bidirectional associative memory neural networks with impulses and their discrete counterparts

    International Nuclear Information System (INIS)

    Huo Haifeng; Li Wantong

    2009-01-01

    This paper is concerned with the global stability characteristics of a system of equations modelling the dynamics of continuous-time bidirectional associative memory neural networks with impulses. Sufficient conditions which guarantee the existence of a unique equilibrium and its exponential stability are obtained for the networks. For the purpose of computation, discrete-time analogues of the corresponding continuous-time bidirectional associative memory neural networks with impulses are also formulated and studied. Our results show that the above continuous-time and discrete-time systems with impulses preserve the dynamics of the networks without impulses when we make some modifications and impose some additional conditions on the systems; the convergence characteristics of the networks are preserved by both the continuous-time and discrete-time systems with some restriction imposed on the impulse effect.

  20. Nuclide transport of decay chain in the fractured rock medium: a model using continuous time Markov process

    International Nuclear Information System (INIS)

    Younmyoung Lee; Kunjai Lee

    1995-01-01

    A model using a continuous time Markov process for the nuclide transport of a decay chain of arbitrary length in the fractured rock medium has been developed. Considering the fracture in the rock matrix as a finite number of compartments, the transition probabilities for a nuclide are represented, utilizing the Chapman-Kolmogorov equation, in terms of the transition intensities between and out of the compartments, with which the expectation and the variance of the nuclide distribution in the fractured rock medium can be obtained. A comparison between the continuous time Markov process model and available analytical solutions for the nuclide transport of three decay chains without rock matrix diffusion has been made, showing comparatively good agreement. Fittings to experimental breakthrough curves obtained with nonsorbing materials such as NaLS and uranine in the artificial fractured rock are also made. (author)

  1. CSL Model Checking Algorithms for Infinite-state Structured Markov chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Raskin, J.-F.; Thiagarajan, P.S.

    2007-01-01

    Jackson queueing networks (JQNs) are a very general class of queueing networks that find their application in a variety of settings. The state space of the continuous-time Markov chain (CTMC) that underlies such a JQN, is highly structured, however, of infinite size in as many dimensions as there

  2. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  3. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    Science.gov (United States)

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are widely used models for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
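
    As a hedged illustration of the EXPM approach mentioned above, the following Python/SciPy sketch computes the expected time spent in a chosen state, conditioned on the end-points of the chain, via the block-matrix construction for integrals of matrix exponentials. The rate matrix, time horizon and state indices are toy values chosen here for illustration; this is not the authors' R implementation.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expected_time_in_state(Q, T, i, a, b):
        """Expected time spent in state i on [0, T], given X_0 = a and X_T = b.

        The integral int_0^T exp(Qs) E_ii exp(Q(T-s)) ds is read off from the
        upper-right block of the exponential of an auxiliary 2n x 2n matrix.
        """
        n = Q.shape[0]
        E = np.zeros((n, n))
        E[i, i] = 1.0                    # reward: indicator of occupying state i
        A = np.zeros((2 * n, 2 * n))
        A[:n, :n], A[:n, n:], A[n:, n:] = Q, E, Q
        integral = expm(A * T)[:n, n:]   # int_0^T exp(Qs) E exp(Q(T-s)) ds
        P = expm(Q * T)                  # transition probabilities over [0, T]
        return integral[a, b] / P[a, b]

    # Toy 3-state rate matrix (rows sum to zero); values are illustrative only.
    Q = np.array([[-1.0, 0.6, 0.4],
                  [0.3, -0.8, 0.5],
                  [0.2, 0.7, -0.9]])
    print(expected_time_in_state(Q, T=2.0, i=1, a=0, b=2))
    ```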

  4. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies. A detailed and rigorous presentation of Markov chains with discrete time and state space. An appendix presenting probabilistic notions that are nec

  5. Markov chains and mixing times

    CERN Document Server

    Levin, David A; Wilmer, Elizabeth L

    2009-01-01

    This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r

  6. A new look at the robust control of discrete-time Markov jump linear systems

    Science.gov (United States)

    Todorov, M. G.; Fragoso, M. D.

    2016-03-01

    In this paper, we make a foray in the role played by a set of four operators on the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers which are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle, and for an underactuated robotic arm.

  7. Approximating Markov Chains: What and why

    International Nuclear Information System (INIS)

    Pincus, S.

    1996-01-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve", or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics
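
    A minimal sketch of this kind of approximation (essentially Ulam's method) is given below, assuming the chaotic logistic map on [0, 1] as the dynamical system; the bin count and sample sizes are arbitrary choices, not values from the article. The stationary distribution of the finite chain approximates the invariant density of the map.

    ```python
    import numpy as np

    def ulam_markov_chain(f, n_bins=200, samples_per_bin=500, seed=0):
        """Approximate a map f on [0, 1] by a finite-state Markov chain.

        Each bin is a Markov state; transition probabilities are estimated by
        sampling points in a bin and recording which bin f maps them into.
        """
        rng = np.random.default_rng(seed)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        P = np.zeros((n_bins, n_bins))
        for i in range(n_bins):
            x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
            j = np.clip(np.digitize(f(x), edges) - 1, 0, n_bins - 1)
            P[i] = np.bincount(j, minlength=n_bins) / samples_per_bin
        return P

    # Chaotic logistic map: its invariant density 1/(pi*sqrt(x(1-x))) is known,
    # so the stationary distribution of the chain can be checked against it.
    P = ulam_markov_chain(lambda x: 4.0 * x * (1.0 - x))
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()                       # approximate steady-state distribution
    print(pi[:5])
    ```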

  8. A Computationally Efficient and Robust Implementation of the Continuous-Discrete Extended Kalman Filter

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Thomsen, Per Grove; Madsen, Henrik

    2007-01-01

    We present a novel numerically robust and computationally efficient extended Kalman filter for state estimation in nonlinear continuous-discrete stochastic systems. The resulting differential equations for the mean-covariance evolution of the nonlinear stochastic continuous-discrete time systems ... The implementation for nonlinear stochastic continuous-discrete time systems is more than two orders of magnitude faster than a conventional implementation. This is of significance in nonlinear model predictive control applications, statistical process monitoring as well as grey-box modelling of systems described by stochastic differential equations ...

  9. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    Directory of Open Access Journals (Sweden)

    A. Valor

    2013-01-01

    Full Text Available The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled after the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
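
    As a rough illustration of the first model's building block, the sketch below simulates a non-homogeneous linear-growth (pure birth) process on a discrete depth space by small time steps. The intensity function and all parameters are assumptions made here for illustration; the paper itself works with the closed-form solution of Kolmogorov's forward equations rather than simulation.

    ```python
    import numpy as np

    def simulate_pit_depth(lam, n0=1, t_max=20.0, dt=1e-3, seed=1):
        """Small-step simulation of a non-homogeneous pure birth process:
        from depth state n, the jump n -> n+1 occurs with intensity n * lam(t)."""
        rng = np.random.default_rng(seed)
        n, t = n0, 0.0
        while t < t_max:
            if rng.random() < n * lam(t) * dt:   # one more depth increment
                n += 1
            t += dt
        return n

    # Assumed decaying intensity, loosely mimicking decelerating pit growth.
    lam = lambda t: 0.5 / (1.0 + t) ** 0.5
    depths = [simulate_pit_depth(lam, seed=s) for s in range(200)]
    print(np.mean(depths), np.var(depths))       # stochastic mean/variance of depth
    ```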

  10. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...

  11. Modelling and real-time simulation of continuous-discrete systems in mechatronics

    Energy Technology Data Exchange (ETDEWEB)

    Lindow, H. [Rostocker, Magdeburg (Germany)]

    1996-12-31

    This work presents a methodology for simulation and modelling of systems with continuous-discrete dynamics. It derives hybrid discrete event models from Lagrange's equations of motion. This method combines continuous mechanical, electrical and thermodynamical submodels on the one hand with discrete event models on the other hand into a hybrid discrete event model. This straightforward software development avoids numerical overhead.

  12. Parisian ruin for the dual risk process in discrete-time

    OpenAIRE

    Palmowski, Zbigniew; Ramsden, Lewis; Papaioannou, Apostolos D.

    2017-01-01

    In this paper we consider the Parisian ruin probabilities for the dual risk model in a discrete-time setting. By exploiting the strong Markov property of the risk process we derive a recursive expression for the finite-time Parisian ruin probability, in terms of classic discrete-time dual ruin probabilities. Moreover, we obtain an explicit expression for the corresponding infinite-time Parisian ruin probability as a limiting case. In order to obtain more analytic results, we employ a conditioni...

  13. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background Continuous time Markov chains (CTMCs) are widely used models for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.

  14. Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model referred to as the two-dimensional continuous 3 × 3 order hidden Markov model is put forward to avoid the disadvantages of the classical hypothesis of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of the observation relies not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are theoretically derived by exploiting the idea that the sequences of states on rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because there are more statistical characteristics in the structure of the proposed new model, it can more accurately describe some practical problems, as compared to the two-dimensional continuous hidden Markov model.

  15. Theoretical restrictions on longest implicit time scales in Markov state models of biomolecular dynamics

    Science.gov (United States)

    Sinitskiy, Anton V.; Pande, Vijay S.

    2018-01-01

    Markov state models (MSMs) have been widely used to analyze computer simulations of various biomolecular systems. They can capture conformational transitions much slower than the average or maximal length of a single molecular dynamics (MD) trajectory from the set of trajectories used to build the MSM. A rule of thumb claiming that the slowest implicit time scale captured by an MSM should be comparable in order of magnitude to the aggregate duration of all MD trajectories used to build this MSM has been known in the field. However, this rule has never been formally proved. In this work, we present analytical results for the slowest time scale in several types of MSMs, supporting the above rule. We conclude that the slowest implicit time scale equals the product of the aggregate sampling and four factors that quantify: (1) how much statistics on the conformational transitions corresponding to the longest implicit time scale is available, (2) how good the sampling of the destination Markov state is, (3) the gain in statistics from using a sliding window for counting transitions between Markov states, and (4) a bias in the estimate of the implicit time scale arising from finite sampling of the conformational transitions. We demonstrate that in many practically important cases all these four factors are on the order of unity, and we analyze possible scenarios that could lead to their significant deviation from unity. Overall, we provide for the first time analytical results on the slowest time scales captured by MSMs. These results can guide further practical applications of MSMs to biomolecular dynamics and allow for higher computational efficiency of simulations.
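
    In practice, the implicit time scales discussed here are usually obtained from the eigenvalues of the MSM transition matrix; a minimal sketch follows, using a made-up three-state transition matrix and lag time rather than any model from the paper.

    ```python
    import numpy as np

    def implied_timescales(T, lag):
        """Implied time scales of an MSM transition matrix T estimated at lag
        time `lag`: t_i = -lag / ln|lambda_i|, skipping the stationary
        eigenvalue lambda_0 = 1."""
        vals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
        return -lag / np.log(vals[1:])

    # Toy 3-state transition matrix (rows sum to one); lag in arbitrary units.
    T = np.array([[0.97, 0.02, 0.01],
                  [0.05, 0.90, 0.05],
                  [0.01, 0.04, 0.95]])
    print(implied_timescales(T, lag=1.0))
    ```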

  16. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    Directory of Open Access Journals (Sweden)

    Mindaugas Snipas

    2015-01-01

    Full Text Available The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain models (CTMC) of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. All of that allowed us to develop efficient numerical methods for a steady-state solution of CTMC models. This reduced the CPU time necessary to solve the CTMC models by a factor of ∼20.

  17. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    Science.gov (United States)

    Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Bukauskas, Feliksas F.

    2015-01-01

    The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain models (CTMC) of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. All of that allowed us to develop efficient numerical methods for a steady-state solution of CTMC models. This reduced the CPU time necessary to solve the CTMC models by a factor of ∼20. PMID:25705700

  18. Continuous- and Discrete-Time Stimulus Sequences for High Stimulus Rate Paradigm in Evoked Potential Studies

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2013-01-01

    Full Text Available To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using the high stimulus rate (HSR) paradigm, it is critical to design stimulus sequences of appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points dependent on the sampling frequency of the recording system and the duration of the stimulus sequence. This dependency likely causes the implementation of suboptimal stimulus sequences, sacrificing the reliability of the resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequences for the HSR paradigm, which are independent of the discrete electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of the continuous-time stimulus sequences and the impacts of sampling frequency on AEPs in traditional studies using discrete-time design. Results from these studies show that the continuous-time sequences can offer better frequency properties and improve the reliability of recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of experimental systems, and their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for the HSR paradigm and by revealing the relationship between the reliability of AEPs and the sampling frequencies of the experimental systems when discrete-time stimulus sequences are used in the traditional manner for the HSR paradigm.

  19. Assessment of bidirectional influences between family relationships and adolescent problem behavior: Discrete versus continuous time analysis

    NARCIS (Netherlands)

    Delsing, M.J.M.H.; Oud, J.H.L.; Bruyn, E.E.J. De

    2005-01-01

    In family research, bidirectional influences between the family and the individual are usually analyzed in discrete time. Results from discrete time analysis, however, have been shown to be highly dependent on the length of the observation interval. Continuous time analysis using stochastic

  20. Multi-state Markov models for disease progression in the presence of informative examination times: an application to hepatitis C.

    Science.gov (United States)

    Sweeting, M J; Farewell, V T; De Angelis, D

    2010-05-20

    In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform current disease state and to assume an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and a Markov model that assumes the examination process is non-ignorable. Copyright 2010 John Wiley & Sons, Ltd.

  1. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.

  2. Model Checking Markov Reward Models with Impulse Rewards

    NARCIS (Netherlands)

    Cloth, Lucia; Katoen, Joost-Pieter; Khattri, Maneesh; Pulungan, Reza; Bondavalli, Andrea; Haverkort, Boudewijn; Tang, Dong

    This paper considers model checking of Markov reward models (MRMs), continuous-time Markov chains with state rewards as well as impulse rewards. The reward extension of the logic CSL (Continuous Stochastic Logic) is interpreted over such MRMs, and two numerical algorithms are provided to check the

  3. NonMarkov Ito Processes with 1-state memory

    Science.gov (United States)

    McCauley, Joseph L.

    2010-08-01

    A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also non-Markov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.

  4. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space $\mathcal{S}$, a Markov chain $(C_n, M_n)$ on the product space $\mathcal{S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain

  5. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    Science.gov (United States)

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High' with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
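
    A minimal sketch of Viterbi decoding for a Poisson hidden Markov model is given below. The transition matrix, initial distribution and observation sequence are invented for illustration (only the state means echo the values quoted above); they are not the fitted model from the study, where the parameters would be estimated, e.g. by EM.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def viterbi_poisson_hmm(counts, pi0, A, lambdas):
        """Most likely hidden state path for a Poisson hidden Markov model."""
        counts = np.asarray(counts)
        T, m = len(counts), len(lambdas)
        logB = poisson.logpmf(counts[:, None], lambdas)   # emission log-probs
        delta = np.log(pi0) + logB[0]
        psi = np.zeros((T, m), dtype=int)
        for t in range(1, T):
            trans = delta[:, None] + np.log(A)            # score of each predecessor
            psi[t] = np.argmax(trans, axis=0)
            delta = trans[psi[t], np.arange(m)] + logB[t]
        states = np.empty(T, dtype=int)
        states[-1] = int(np.argmax(delta))
        for t in range(T - 2, -1, -1):                    # backtrack
            states[t] = psi[t + 1, states[t + 1]]
        return states

    # Hypothetical 'Low'/'Moderate'/'High' states; means echo those quoted above.
    A = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
    obs = [0, 2, 1, 5, 8, 7, 22, 19, 25, 6, 3, 1]
    print(viterbi_poisson_hmm(obs, np.array([0.5, 0.3, 0.2]), A,
                              np.array([1.4, 6.6, 20.2])))
    ```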

  6. Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets

    Science.gov (United States)

    Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola

    2017-11-01

    This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset is described as a function of a homogeneous discrete time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated and the mean time to the default is expressed in a closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model on real data and their analysis is discussed.
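
    For the purely discrete-time case without the closed-form machinery of the article, the mean time to default can be sketched with the standard fundamental-matrix formula for absorbing Markov chains; the rating chain below is hypothetical and unrelated to the data analyzed in the paper.

    ```python
    import numpy as np

    def mean_time_to_default(P, default_state):
        """Mean number of steps to absorption in the default state, from each
        transient state, via the fundamental matrix N = (I - Q)^{-1}."""
        idx = [i for i in range(P.shape[0]) if i != default_state]
        Q = P[np.ix_(idx, idx)]              # transitions among transient states
        N = np.linalg.inv(np.eye(len(idx)) - Q)
        return dict(zip(idx, N @ np.ones(len(idx))))

    # Hypothetical 4-state chain: states 0-2 are ratings, state 3 is default.
    P = np.array([[0.90, 0.07, 0.02, 0.01],
                  [0.05, 0.85, 0.07, 0.03],
                  [0.02, 0.08, 0.80, 0.10],
                  [0.00, 0.00, 0.00, 1.00]])
    print(mean_time_to_default(P, default_state=3))
    ```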

  7. From Brownian Dynamics to Markov Chain: An Ion Channel Example

    KAUST Repository

    Chen, Wan

    2014-02-27

    A discrete rate theory for multi-ion channels is presented, in which the continuous dynamics of ion diffusion is reduced to transitions between Markovian discrete states. In an open channel, the ion permeation process involves three types of events: an ion entering the channel, an ion escaping from the channel, or an ion hopping between different energy minima in the channel. The continuous dynamics leads to a hierarchy of Fokker-Planck equations, indexed by channel occupancy. From these the mean escape times and splitting probabilities (denoting from which side an ion has escaped) can be calculated. By equating these with the corresponding expressions from the Markov model, one can determine the Markovian transition rates. The theory is illustrated with a two-ion one-well channel. The stationary probability of states is compared with that from both Brownian dynamics simulation and the hierarchical Fokker-Planck equations. The conductivity of the channel is also studied, and the optimal geometry maximizing ion flux is computed. © 2014 Society for Industrial and Applied Mathematics.

  8. Long memory of financial time series and hidden Markov models with time-varying parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    Hidden Markov models are often used to capture stylized facts of daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior for the ability to reproduce the stylized facts have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time-varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step predictions.

  9. Markov Chains and Markov Processes

    OpenAIRE

    Ogunbayo, Segun

    2016-01-01

    A Markov chain, named after Andrey Markov, is a mathematical system that moves from one state to another. Many real-world systems contain uncertainty. This study helps us to understand the basic idea of a Markov chain and how it is useful in our daily lives. Predictions about distinct future outcomes are often uncertain, and in different games different expectations or results are involved. That is the reason why we need Markov chains to predict o...

  10. A Multistep Extending Truncation Method towards Model Construction of Infinite-State Markov Chains

    Directory of Open Access Journals (Sweden)

    Kemin Wang

    2014-01-01

    Full Text Available Model checking of infinite-state continuous time Markov chains inevitably encounters the state explosion problem when constructing the CTMC model; our approach is to work with a truncated model of the infinite one. To obtain a truncated model sufficient for model checking of Continuous Stochastic Logic based system properties, we propose a multistep extending truncation method towards model construction of CTMCs and implement it in the INFAMY model checker. The experimental results show that our method is effective.

  11. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    Science.gov (United States)

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-08-29

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological systems; it efficiently describes stable state identification but remains inconvenient for describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes as it follows the evolution of concentrations or activities of chemical species as a function of time, but it requires a large amount of information on parameters that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential
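
    A minimal sketch of the core idea, i.e. a Gillespie simulation of a continuous-time Markov process on a Boolean state space where each node carries its own transition rate, is shown below. The two-node network and its rates are invented for illustration and do not use the authors' modeling language or software.

    ```python
    import numpy as np

    def gillespie_boolean(state, rates, t_max=10.0, seed=0):
        """Gillespie simulation of a Boolean network in continuous time.

        state : initial tuple of 0/1 node values
        rates : function (state, node) -> rate at which that node flips
        """
        rng = np.random.default_rng(seed)
        t, x, traj = 0.0, list(state), [(0.0, tuple(state))]
        while t < t_max:
            r = np.array([rates(tuple(x), i) for i in range(len(x))])
            total = r.sum()
            if total == 0.0:                      # stable state reached
                break
            t += rng.exponential(1.0 / total)     # waiting time to next flip
            i = rng.choice(len(x), p=r / total)   # which node flips
            x[i] = 1 - x[i]
            traj.append((t, tuple(x)))
        return traj

    # Toggle-switch-like toy rates: a node switches on quickly when the other
    # node is off, and switches off quickly when the other node is on.
    def rates(s, i):
        other = s[1 - i]
        if s[i] == 0:
            return 2.0 if other == 0 else 0.2
        return 0.2 if other == 0 else 2.0

    print(gillespie_boolean((0, 0), rates)[-1])
    ```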

  12. Using multi-state markov models to identify credit card risk

    Directory of Open Access Journals (Sweden)

    Daniel Evangelista Régis

    2016-06-01

    Full Text Available Abstract The main interest of this work is to analyze the application of multi-state Markov models to evaluate credit card risk by investigating the characteristics of different state transitions in client-institution relationships over time, thereby generating score models for various purposes. We also used logistic regression models to compare the results with those obtained using multi-state Markov models. The models were applied to an actual database of a Brazilian financial institution. In this application, multi-state Markov models performed better than logistic regression models in predicting default risk, and logistic regression models performed better in predicting cancellation risk.

  13. Discrete stochastic processes and applications

    CERN Document Server

    Collet, Jean-François

    2018-01-01

    This unique text for beginning graduate students gives a self-contained introduction to the mathematical properties of stochastics and presents their applications to Markov processes, coding theory, population dynamics, and search engine design. The book is ideal for a newly designed course in an introduction to probability and information theory. Prerequisites include working knowledge of linear algebra, calculus, and probability theory. The first part of the text focuses on the rigorous theory of Markov processes on countable spaces (Markov chains) and provides the basis to developing solid probabilistic intuition without the need for a course in measure theory. The approach taken is gradual beginning with the case of discrete time and moving on to that of continuous time. The second part of this text is more applied; its core introduces various uses of convexity in probability and presents a nice treatment of entropy.

  14. Discrete-Time Nonlinear Control of VSC-HVDC System

    Directory of Open Access Journals (Sweden)

    TianTian Qian

    2015-01-01

    Full Text Available Because VSC-HVDC is a strongly nonlinear, coupled, multi-input multi-output (MIMO) system, its control problem has long attracted much attention from scholars, and many papers have studied its control strategy in the continuous-time domain. In practical engineering, however, the control system is implemented through discrete sampling on a computer, so it is necessary to study the mathematical model and control algorithm in the discrete-time domain. A discrete mathematical model based on output feedback linearization and a discrete sliding mode control algorithm are proposed in this paper. To ensure the effectiveness of the control system in the quasi-sliding-mode state, the fast output sampling method is used in the output feedback. Results from simulation experiments in MATLAB/SIMULINK prove that the proposed discrete control algorithm gives the VSC-HVDC system good static, dynamic, and robustness characteristics in the discrete-time domain.

  15. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  16. Equilibrium and response properties of the integrate-and-fire neuron in discrete time

    Directory of Open Access Journals (Sweden)

    Moritz Helias

    2010-01-01

    Full Text Available The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still have highest performance at moderate numerical errors, which makes them first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula to calculate the neuron’s equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near threshold enables us to devise the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous time theory for Gaussian white noise input.

  17. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov Chain Monte Carlo. In previous works multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal Markov models and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.

  18. Conditions for the Solvability of the Linear Programming Formulation for Constrained Discounted Markov Decision Processes

    Energy Technology Data Exchange (ETDEWEB)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France)]; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)]

    2016-08-15

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  19. Risk-based design of process systems using discrete-time Bayesian networks

    International Nuclear Information System (INIS)

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2013-01-01

    Temporal Bayesian networks have gained popularity as a robust technique to model dynamic systems in which the components' sequential dependency, as well as their functional dependency, cannot be ignored. In this regard, discrete-time Bayesian networks have been proposed as a viable alternative to solve dynamic fault trees without resort to Markov chains. This approach overcomes the drawbacks of Markov chains such as the state-space explosion and the error-prone conversion procedure from dynamic fault tree. It also benefits from the inherent advantages of Bayesian networks such as probability updating. However, effective mapping of the dynamic gates of dynamic fault trees into Bayesian networks while avoiding the consequent huge multi-dimensional probability tables has always been a matter of concern. In this paper, a new general formalism has been developed to model two important elements of dynamic fault tree, i.e., cold spare gate and sequential enforcing gate, with any arbitrary probability distribution functions. Also, an innovative Neutral Dependency algorithm has been introduced to model dynamic gates such as priority-AND gate, thus reducing the dimension of conditional probability tables by an order of magnitude. The second part of the paper is devoted to the application of discrete-time Bayesian networks in the risk assessment and safety analysis of complex process systems. It has been shown how dynamic techniques can effectively be applied for optimal allocation of safety systems to obtain maximum risk reduction.

  20. Noise can speed convergence in Markov chains.

    Science.gov (United States)

    Franzke, Brandon; Kosko, Bart

    2011-10-01

    A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.

  1. Analyzing the profit-loss sharing contracts with Markov model

    Directory of Open Access Journals (Sweden)

    Imam Wahyudi

    2016-12-01

    Full Text Available The purpose of this paper is to examine how to use a first order Markov chain to build a reliable monitoring system for profit-loss sharing based contracts (PLS), the mode of financing contracts in Islamic banks, with censored continuous-time observations. The paper adopts longitudinal analysis within a first order Markov chain framework. Under a homogeneous continuous-time assumption, the Laplace transform was used to generate the transition matrix from the discretized generator matrix. Various metrics, i.e., eigenvalues and eigenvectors, were used to test the first order Markov chain assumption. A Cox semi-parametric model was also used to analyze the momentum and waiting time effects as non-Markov behavior. The results show that the first order Markov chain is powerful as a monitoring tool for Islamic banks. We find that waiting time negatively affected present rating downgrades (upgrades) significantly. Likewise, the momentum covariate showed a negative effect. Finally, the results confirm that different origin ratings have different movement behavior. The paper explores the potential of the Markov chain framework as a risk management tool for Islamic banks. It provides valuable insight and an integrative model for banks to manage their borrower accounts. This model can be developed into a powerful early warning system to identify which borrowers need to be monitored intensively. Ultimately, this model could potentially increase the efficiency, productivity and competitiveness of Islamic banks in Indonesia. The analysis used only rating data. Further study should be able to give additional information about the determinant factors of rating movement of the borrowers by incorporating various factors such as contract-related factors, bank-related factors, borrower-related factors and macroeconomic factors.

  2. Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions

    Directory of Open Access Journals (Sweden)

    Stepchenko Arthur

    2015-12-01

    Full Text Available Time series of earth observation based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and for the management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. In this paper, we make a one-step-ahead prediction of the 7-daily time series of the NDVI index using Markov chains. The choice of a Markov chain is due to the fact that a Markov chain is a sequence of random variables where each variable is located in some state, and it contains the probabilities of moving from one state to another.
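
    A minimal sketch of such a one-step-ahead prediction is given below, assuming the NDVI series is discretized into a handful of states and a first-order transition matrix is estimated by counting transitions; the NDVI values and the number of states are invented for illustration and are not the Latvian data used in the paper.

    ```python
    import numpy as np

    def fit_transition_matrix(states, n_states):
        """Estimate a first-order Markov transition matrix from a sequence of
        integer state indices (rows with no observations fall back to uniform)."""
        P = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            P[a, b] += 1
        rows = P.sum(axis=1, keepdims=True)
        return np.divide(P, rows, out=np.full_like(P, 1.0 / n_states), where=rows > 0)

    # Illustrative 7-daily NDVI values discretized into 5 equal-width states.
    ndvi = np.array([0.31, 0.35, 0.42, 0.50, 0.58, 0.61, 0.57, 0.49, 0.40, 0.34])
    edges = np.linspace(ndvi.min(), ndvi.max() + 1e-9, 6)
    states = np.digitize(ndvi, edges) - 1
    P = fit_transition_matrix(states, n_states=5)
    print(int(np.argmax(P[states[-1]])))   # most probable next state
    ```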

  3. Discrete-continuous analysis of optimal equipment replacement

    OpenAIRE

    YATSENKO, Yuri; HRITONENKO, Natali

    2008-01-01

    In Operations Research, the equipment replacement process is usually modeled in discrete time. The optimal replacement strategies are found from discrete (or integer) programming problems, well known for their analytic and computational complexity. An alternative approach is represented by continuous-time vintage capital models that explicitly involve the equipment lifetime and are described by nonlinear integral equations. Then the optimal replacement is determined via the opt...

  4. [Comparison of Markov and fractal models using single-channel experimental and simulation data].

    Science.gov (United States)

    Lan, Tonghan; Wu, Hongxiu; Lin, Jiarui

    2006-10-01

    The gating kinetics of ion channels has been modeled as a Markov process. In these models it is assumed that the channel protein has a small number of discrete conformational states and that the kinetic rate constants connecting these states are constant; the transition rate constants among the states are independent both of time and of the previous channel activity. In Liebovitch's fractal model it is assumed that the channel exists in an infinite number of energy states; consequently, transitions from one conductance state to another would be governed by a continuum of rate constants. In this paper, a statistical comparison of Markov and fractal models of ion channel gating is presented; the analysis is based on single-channel data from a voltage-dependent K+ channel of a neuron cell and on simulation data from a three-state Markov model.

  5. General definitions of chaos for continuous and discrete-time processes

    OpenAIRE

    Vieru, Andrei

    2008-01-01

    A precise definition of chaos for discrete processes based on iteration already exists. We shall first reformulate it in a more general frame, taking into account the fact that discrete chaotic behavior is neither necessarily based on iteration nor strictly related to compact metric spaces or to bounded functions. Then we shall apply the central idea of this definition to continuous processes. We shall try to see what chaos is, regardless of the way it is generated.

  6. Stylised facts of financial time series and hidden Markov models in continuous time

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2015-01-01

    This paper presents an extension to continuous time where it is possible to increase the number of states with a linear rather than quadratic growth in the number of parameters. The possibility of increasing the number of states leads to a better fit to both the distributional and temporal properties of daily returns.

  7. A toolbox for safety instrumented system evaluation based on improved continuous-time Markov chain

    Science.gov (United States)

    Wardana, Awang N. I.; Kurniady, Rahman; Pambudi, Galih; Purnama, Jaka; Suryopratomo, Kutut

    2017-08-01

    Safety instrumented systems (SIS) are designed to restore a plant to a safe condition when a pre-hazardous event occurs. They have a vital role, especially in the process industries. A SIS shall meet its safety requirement specifications, and to confirm this, the SIS shall be evaluated. Typically, the evaluation is calculated by hand. This paper presents a toolbox for SIS evaluation. It is developed based on an improved continuous-time Markov chain. The toolbox supports a detailed approach to evaluation. This paper also illustrates an industrial application of the toolbox to evaluate the arch burner safety system of a primary reformer. The results of the case study demonstrate that the toolbox can be used to evaluate industrial SIS in detail and to plan the maintenance strategy.

  8. Markov chain modelling of pitting corrosion in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)]; Valor, A. [Facultad de Física, Universidad de La Habana, San Lázaro y L, Vedado, 10400 La Habana (Cuba)]; Hallen, J.M. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)]

    2009-09-15

    A continuous-time, non-homogenous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.

  9. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.

  10. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element that needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
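
    A minimal sketch of a continuous-time Monte Carlo recruitment model in the spirit described above (not the authors' exact model): each centre recruits according to a Poisson process, and the simulation estimates the distribution of the time needed to reach a target sample size. The centre count, rates and target below are assumptions.

```python
# Monte Carlo sketch of multicentre recruitment in continuous time: each centre
# recruits as a Poisson process; we estimate the distribution of the time to
# reach a target sample size. Centre count, rates and target are assumptions.
import numpy as np

rng = np.random.default_rng(0)
centre_rates = rng.gamma(shape=2.0, scale=0.5, size=20)   # patients/week per centre
target, n_sim = 300, 2000

def time_to_target(rates, target, rng):
    # the superposition of the centre Poisson processes is Poisson with the total
    # rate, so inter-arrival gaps are exponential with that rate
    gaps = rng.exponential(1.0 / rates.sum(), size=target)
    return gaps.sum()

durations = np.array([time_to_target(centre_rates, target, rng) for _ in range(n_sim)])
print("expected recruitment duration: %.1f weeks" % durations.mean())
print("95th percentile:               %.1f weeks" % np.percentile(durations, 95))
print("P(done within 52 weeks):       %.3f" % (durations <= 52).mean())
```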

  11. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    Science.gov (United States)

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.
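
    The sketch below simulates a toy semi-Markov process of this kind (Weibull holding times, back transitions among transient states and an absorbing death state). It only illustrates the process structure; the transition matrix and Weibull parameters are assumed, and the interval censoring and quasi-Monte Carlo likelihood machinery of the paper are not reproduced.

```python
# Toy semi-Markov simulation: Weibull waiting times, back transitions among the
# transient cognitive states and an absorbing death state. The jump-chain
# probabilities and Weibull parameters are assumed values, not estimates from
# the Nun Study.
import numpy as np

rng = np.random.default_rng(1)
states = ["intact", "MCI", "dementia", "death"]
P = np.array([                 # embedded jump chain; death (state 3) is absorbing
    [0.0, 0.7, 0.1, 0.2],
    [0.3, 0.0, 0.5, 0.2],
    [0.0, 0.2, 0.0, 0.8],
    [0.0, 0.0, 0.0, 1.0],
])
shape = np.array([1.0, 1.3, 1.5, 1.0])   # shape 1.0 in the baseline state = exponential
scale = np.array([4.0, 3.0, 2.0, 1.0])   # scales in years

def simulate_path(horizon=20.0):
    s, t, path = 0, 0.0, [(0.0, "intact")]
    while t < horizon and s != 3:
        t += scale[s] * rng.weibull(shape[s])   # Weibull holding time in state s
        s = rng.choice(4, p=P[s])               # next state from the jump chain
        path.append((round(t, 2), states[s]))
    return path

print(simulate_path())
```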

  12. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM, the emission distributions of which are Gaussian processes (GPs. Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  13. Discrete-Time Pricing and Optimal Exercise of American Perpetual Warrants in the Geometric Random Walk Model

    International Nuclear Information System (INIS)

    Vanderbei, Robert J.; Pınar, Mustafa Ç.; Bozkaya, Efe B.

    2013-01-01

    An American option (or, warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete time and discrete state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve complementary slackness conditions in closed-form, revealing an optimal stopping strategy which highlights the set of stock-prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate) whereas it ceases to be an issue for the put.
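
    As a purely numerical counterpart to the analytical LP-duality treatment above, the sketch below runs value iteration for a perpetual American put on a truncated geometric-random-walk lattice and reads off the critical exercise price. The step size, transition probability, discount factor and truncation are assumptions for the illustration.

```python
# Value iteration for a perpetual American put on a truncated geometric random
# walk lattice S_k = S0 * u**k (down factor d = 1/u). This is a numerical
# illustration only, not the paper's closed-form LP-duality solution.
# u, p, beta, K and the truncation are assumptions.
import numpy as np

u, p, beta, K, S0 = 1.05, 0.5, 0.98, 100.0, 100.0
k = np.arange(-120, 121)
S = S0 * u ** k.astype(float)
payoff = np.maximum(K - S, 0.0)

V = payoff.copy()
for _ in range(10000):
    # continuation value: discounted expectation over one up/down step
    cont = beta * (p * np.roll(V, -1) + (1 - p) * np.roll(V, 1))
    cont[0], cont[-1] = payoff[0], 0.0          # crude treatment of the truncation edges
    V_new = np.maximum(payoff, cont)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# exercise region: immediate payoff is positive and at least the continuation value
exercise = (payoff >= cont) & (payoff > 0)
print("critical exercise price ~ %.2f" % S[exercise].max())
```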

  14. Discrete-Time Pricing and Optimal Exercise of American Perpetual Warrants in the Geometric Random Walk Model

    Energy Technology Data Exchange (ETDEWEB)

    Vanderbei, Robert J., E-mail: rvdb@princeton.edu [Princeton University, Department of Operations Research and Financial Engineering (United States); Pınar, Mustafa Ç., E-mail: mustafap@bilkent.edu.tr [Bilkent University, Department of Industrial Engineering (Turkey); Bozkaya, Efe B. [Sabancı University, Faculty of Administrative Sciences (Turkey)

    2013-02-15

    An American option (or, warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete time and discrete state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve complementary slackness conditions in closed-form, revealing an optimal stopping strategy which highlights the set of stock-prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate) whereas it ceases to be an issue for the put.

  15. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...

  16. Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique

    International Nuclear Information System (INIS)

    Destainville, N

    2003-01-01

    We propose an exact technique to calculate lower bounds of spectral gaps of discrete time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the 'absorption time' of the 'Backgammon model', a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the 'contingency table problem', a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities, calculated at equilibrium
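
    For small chains the spectral gap can simply be computed by diagonalization, which is the quantity the paper bounds for chains too large to diagonalize. The example below uses a lazy random walk on a cycle, where the gap is known in closed form, as a sanity check; the chain itself is an assumption chosen for the illustration.

```python
# Direct computation of a spectral gap for a small reversible chain, as a
# reference point for the bounding technique described above. Example: lazy
# simple random walk on a cycle of N states (uniform stationary distribution).
import numpy as np

N = 50
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5                         # laziness
    P[i, (i + 1) % N] += 0.25
    P[i, (i - 1) % N] += 0.25

eigvals = np.sort(np.linalg.eigvalsh(P))  # P is symmetric here, so eigvalsh is safe
gap = 1.0 - eigvals[-2]                   # 1 minus the second-largest eigenvalue
print("spectral gap:", gap)
print("closed form for the lazy cycle: 0.5*(1 - cos(2*pi/N)) =",
      0.5 * (1 - np.cos(2 * np.pi / N)))
```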

  17. State control of discrete-time linear systems to be bound in state variables by equality constraints

    International Nuclear Information System (INIS)

    Filasová, Anna; Krokavec, Dušan; Serbák, Vladimír

    2014-01-01

    The paper is concerned with the problem of designing a discrete-time equivalent PI controller for discrete-time linear systems in such a way that the closed-loop state variables satisfy the prescribed equality constraints. Since the problem is generally singular, the design conditions are proposed in the form of an enhanced Lyapunov inequality, using a standard form of the Lyapunov function and a symmetric positive definite slack matrix. The results, offering conditions for the existence of the control and for optimal performance with respect to the prescribed equality constraints for square discrete-time linear systems, are illustrated with a numerical example to demonstrate the effectiveness and applicability of the considered approach

  18. Discrete-Time Systems

    Indian Academy of Sciences (India)

    We also describe discrete-time systems in terms of difference ... A more modern alternative, especially for larger systems, is to convert ... In other words, ..... picture?) State-variable equations are also called state-space equations because the ...

  19. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  20. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  1. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive...... to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations....

  2. Estimating Lithium-Ion Battery State of Charge and Parameters Using a Continuous-Discrete Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Yasser Diab

    2017-07-01

    Full Text Available A real-time determination of battery parameters is challenging because batteries are non-linear, time-varying systems. The transient behaviour of lithium-ion batteries is modelled by a Thevenin-equivalent circuit with two time constants characterising activation and concentration polarization. An experimental approach is proposed for directly determining battery parameters as a function of physical quantities. The model’s parameters are a function of the state of charge and of the discharge rate. These can be expressed by regression equations in the model to derive a continuous-discrete extended Kalman estimator of the state of charge and of other parameters. This technique is based on numerical integration of the ordinary differential equations to predict the state of the stochastic dynamic system and the corresponding error covariance matrix. Then a standard correction step of the extended Kalman filter (EKF) is applied to increase the accuracy of estimated parameters. Simulations resulting from this proposed estimator model were compared with experimental results under a variety of operating scenarios; analysis of the results demonstrates the accuracy of the estimator in correctly identifying battery parameters.
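
    The sketch below shows the generic continuous-discrete EKF skeleton referred to above: the state and covariance are integrated numerically between measurements, followed by the standard discrete correction step. The scalar dynamics, noise levels and measurement map are stand-in assumptions, not the paper's Thevenin battery model or its regression equations.

```python
# Generic continuous-discrete extended Kalman filter skeleton: integrate the
# state ODE and the covariance between measurements, then apply the standard
# discrete EKF correction. The scalar model is an assumed stand-in, not the
# paper's Thevenin battery model.
import numpy as np

rng = np.random.default_rng(2)

def f(x, u):            # assumed nonlinear state dynamics dx/dt = f(x, u)
    return -0.5 * x**3 + u

def F(x):               # Jacobian df/dx used for covariance propagation
    return -1.5 * x**2

def h(x):               # assumed measurement model y = h(x) + noise
    return x

H, Q, R = 1.0, 1e-4, 1e-2
dt, n_sub = 1.0, 50                      # measurement period and Euler sub-steps

x_true, x_est, P = 1.0, 0.0, 1.0
for k in range(30):
    u = 0.2 * np.sin(0.1 * k)
    # --- prediction: integrate truth, estimate and covariance between measurements ---
    step = dt / n_sub
    for _ in range(n_sub):
        x_true += step * f(x_true, u)
        x_est  += step * f(x_est, u)                     # state prediction
        P      += step * (2 * F(x_est) * P + Q)          # scalar Riccati ODE
    # --- discrete EKF correction with a noisy measurement ---
    y = h(x_true) + rng.normal(scale=np.sqrt(R))
    S = H * P * H + R
    K = P * H / S
    x_est += K * (y - h(x_est))
    P = (1 - K * H) * P

print("final estimate %.3f vs truth %.3f" % (x_est, x_true))
```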

  3. Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics

    International Nuclear Information System (INIS)

    Pate, E.B.

    1986-01-01

    This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and quantization which is applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst case Markov-chain process, upon which a robust filter/predictor design is based.

  4. Filtering with Discrete State Observations

    International Nuclear Information System (INIS)

    Dufour, F.; Elliott, R. J.

    1999-01-01

    The problem of estimating a finite state Markov chain observed via a process on the same state space is discussed. Optimal solutions are given for both the 'weak' and 'strong' formulations of the problem. The 'weak' formulation proceeds using a reference probability and a measure change for the Markov chain. The 'strong' formulation considers an observation process related to perturbations of the counting processes associated with the Markov chain. In this case the 'small noise' convergence is investigated

  5. Entanglement revival can occur only when the system-environment state is not a Markov state

    Science.gov (United States)

    Sargolzahi, Iman

    2018-06-01

    Markov states have been defined for tripartite quantum systems. In this paper, we generalize the definition of the Markov states to arbitrary multipartite case and find the general structure of an important subset of them, which we will call strong Markov states. In addition, we focus on an important property of the Markov states: If the initial state of the whole system-environment is a Markov state, then each localized dynamics of the whole system-environment reduces to a localized subdynamics of the system. This provides us a necessary condition for entanglement revival in an open quantum system: Entanglement revival can occur only when the system-environment state is not a Markov state. To illustrate (a part of) our results, we consider the case that the environment is modeled as classical. In this case, though the correlation between the system and the environment remains classical during the evolution, the change of the state of the system-environment, from its initial Markov state to a state which is not a Markov one, leads to the entanglement revival in the system. This shows that the non-Markovianity of a state is not equivalent to the existence of non-classical correlation in it, in general.

  6. The ultimatum game: Discrete vs. continuous offers

    Science.gov (United States)

    Dishon-Berkovits, Miriam; Berkovits, Richard

    2014-09-01

    In many experimental setups in social-sciences, psychology and economy the subjects are requested to accept or dispense monetary compensation which is usually given in discrete units. Using computer and mathematical modeling we show that in the framework of studying the dynamics of acceptance of proposals in the ultimatum game, the long time dynamics of acceptance of offers in the game are completely different for discrete vs. continuous offers. For discrete values the dynamics follow an exponential behavior. However, for continuous offers the dynamics are described by a power-law. This is shown using an agent based computer simulation as well as by utilizing an analytical solution of a mean-field equation describing the model. These findings have implications to the design and interpretation of socio-economical experiments beyond the ultimatum game.

  7. Mission reliability of semi-Markov systems under generalized operational time requirements

    International Nuclear Information System (INIS)

    Wu, Xiaoyue; Hillston, Jane

    2015-01-01

    Mission reliability of a system depends on specific criteria for mission success. To evaluate the mission reliability of some mission systems that do not need to work normally for the whole mission time, two types of mission reliability for such systems are studied. The first type corresponds to the mission requirement that the system must remain operational continuously for a minimum time within the given mission time interval, while the second corresponds to the mission requirement that the total operational time of the system within the mission time window must be greater than a given value. Based on Markov renewal properties, matrix integral equations are derived for semi-Markov systems. Numerical algorithms and a simulation procedure are provided for both types of mission reliability. Two examples are used for illustration purposes. One is a one-unit repairable Markov system, and the other is a cold standby semi-Markov system consisting of two components. By the proposed approaches, the mission reliability of systems with time redundancy can be more precisely estimated to avoid possible unnecessary redundancy of system resources. - Highlights: • Two types of mission reliability under generalized requirements are defined. • Equations for both types of reliability are derived for semi-Markov systems. • Numerical methods are given for solving both types of reliability. • Simulation procedure is given for estimating both types of reliability. • Verification of the numerical methods is given by the results of simulation

  8. Absolute continuity of the distribution of some Markov geometric series

    Institute of Scientific and Technical Information of China (English)

    Ai-hua FAN; Ji-hong ZHANG

    2007-01-01

    Let (ε_n)_{n≥0} be the two-state Markov chain with respect to the measure of maximal entropy on the subshift space Σ_A defined by the Fibonacci incidence matrix A. We consider the measure μ_λ given by the probability distribution of the random series ∑_{n=0}^∞ ε_n λ^n (0 < λ < 1). It is proved that μ_λ is singular if λ ∈ (0, (√5−1)/2) and that μ_λ is absolutely continuous for almost all λ ∈ ((√5−1)/2, 0.739).
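
    The sketch below only visualises μ_λ numerically: it samples the two-state Markov chain carrying the Parry (maximal-entropy) measure on the golden-mean subshift and histograms truncated partial sums of the series. The 0/1 coding of the two states and the particular λ are assumptions for the illustration; nothing here reproduces the singularity/absolute-continuity proof.

```python
# Numerical illustration of mu_lambda: sample the two-state Markov chain that
# carries the Parry (maximal-entropy) measure on the golden-mean subshift
# (incidence matrix A = [[1,1],[1,0]], forbidden word "11"), then histogram
# truncated sums of eps_n * lambda**n. The 0/1 coding and lambda = 0.7 are
# assumptions for the illustration.
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 0.0]])
w, V = np.linalg.eig(A)
idx = np.argmax(w.real)
lam_A, v = w[idx].real, np.abs(V[:, idx].real)    # Perron eigenvalue and eigenvector
P = A * v[None, :] / (lam_A * v[:, None])          # Parry transition probabilities
pi = v * v / np.sum(v * v)                         # stationary law (A is symmetric)

rng = np.random.default_rng(3)
lam, n_terms, n_samples = 0.7, 60, 20000
sums = np.empty(n_samples)
for s in range(n_samples):
    state = rng.choice(2, p=pi)
    total, weight = 0.0, 1.0
    for _ in range(n_terms):
        total += state * weight                    # eps_n coded in {0, 1}
        weight *= lam
        state = rng.choice(2, p=P[state])
    sums[s] = total

hist, edges = np.histogram(sums, bins=100, density=True)
print("empirical support roughly [%.3f, %.3f]" % (sums.min(), sums.max()))
print("max / min nonzero bin density: %.2f / %.2f" % (hist.max(), hist[hist > 0].min()))
```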

  9. Characterization of memory states of the Preisach operator with stochastic inputs

    International Nuclear Information System (INIS)

    Amann, A.; Brokate, M.; McCarthy, S.; Rachinskii, D.; Temnov, G.

    2012-01-01

    The Preisach operator with inputs defined by a Markov process x^t is considered. The question we address is: what is the distribution of the random memory state of the Preisach operator at a given time moment t_0 in the limit r→∞ of infinitely long input history x^t, t_0−r ≤ t ≤ t_0? In order to answer this question, we introduce a Markov chain (called the memory state Markov chain) where the states are pairs (m_k, M_k) of elements from the monotone sequences of the local minimum input values m_k and the local maximum input values M_k recorded in the memory state and the index k of the elements plays the role of time. We express the transition probabilities of this Markov chain in terms of the transition probabilities of the input stochastic process and show that the memory state Markov chain and the input process generate the same distribution of the memory states. These results are illustrated by several examples of stochastic inputs such as the Wiener and Bernoulli processes and their mixture (we first discuss a discrete version of these processes and then the continuous time and state setting). The memory state Markov chain is then used to find the distribution of the random number of elements in the memory state sequence. We show that this number has the Poisson distribution for the Wiener and Bernoulli processes inputs. In particular, in the discrete setting, the mean value of the number of elements in the memory state scales as ln N, where N is the number of the input states, while the mean time it takes the input to generate this memory state scales as N² for the Wiener process and as N for the Bernoulli process. A similar relationship between the dimension of the memory state vector and the number of iterations in the numerical realization of the input is shown for the mixture of the Wiener and Bernoulli processes, thus confirming that the memory state Markov chain is an efficient tool for generating the distribution of the Preisach operator memory

  10. Characterization of memory states of the Preisach operator with stochastic inputs

    Energy Technology Data Exchange (ETDEWEB)

    Amann, A. [Department of Applied Mathematics, University College Cork (Ireland); Brokate, M. [Zentrum Mathematik, Technische Universitaet Muenchen (Germany); McCarthy, S. [Department of Applied Mathematics, University College Cork (Ireland); Rachinskii, D., E-mail: d.rachinskii@ucc.ie [Department of Applied Mathematics, University College Cork (Ireland); Temnov, G. [Department of Mathematics, University College Cork (Ireland)

    2012-05-01

    The Preisach operator with inputs defined by a Markov process x^t is considered. The question we address is: what is the distribution of the random memory state of the Preisach operator at a given time moment t_0 in the limit r→∞ of infinitely long input history x^t, t_0−r ≤ t ≤ t_0? In order to answer this question, we introduce a Markov chain (called the memory state Markov chain) where the states are pairs (m_k, M_k) of elements from the monotone sequences of the local minimum input values m_k and the local maximum input values M_k recorded in the memory state and the index k of the elements plays the role of time. We express the transition probabilities of this Markov chain in terms of the transition probabilities of the input stochastic process and show that the memory state Markov chain and the input process generate the same distribution of the memory states. These results are illustrated by several examples of stochastic inputs such as the Wiener and Bernoulli processes and their mixture (we first discuss a discrete version of these processes and then the continuous time and state setting). The memory state Markov chain is then used to find the distribution of the random number of elements in the memory state sequence. We show that this number has the Poisson distribution for the Wiener and Bernoulli processes inputs. In particular, in the discrete setting, the mean value of the number of elements in the memory state scales as ln N, where N is the number of the input states, while the mean time it takes the input to generate this memory state scales as N² for the Wiener process and as N for the Bernoulli process. A similar relationship between the dimension of the memory state vector and the number of iterations in the numerical realization of the input is shown for the mixture of the Wiener and Bernoulli processes, thus confirming that the memory state Markov chain is an efficient tool for

  11. Convergence of posteriors for discretized log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    2004-01-01

    In Markov chain Monte Carlo posterior computation for log Gaussian Cox processes (LGCPs) a discretization of the continuously indexed Gaussian field is required. It is demonstrated that approximate posterior expectations computed from discretized LGCPs converge to the exact posterior expectations... when the cell sizes of the discretization tend to zero. The effect of discretization is studied in a data example....

  12. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    Science.gov (United States)

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects-such as mutations in evolutionary dynamics and a random exploration of choices in social systems-including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.

  13. Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique

    Energy Technology Data Exchange (ETDEWEB)

    Destainville, N [Laboratoire de Physique Theorique - IRSAMC, CNRS/Universite Paul Sabatier, 118, route de Narbonne, 31062 Toulouse Cedex 04 (France)

    2003-04-04

    We propose an exact technique to calculate lower bounds of spectral gaps of discrete time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the 'absorption time' of the 'Backgammon model', a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the 'contingency table problem', a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities, calculated at equilibrium.

  14. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    International Nuclear Information System (INIS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G

    2005-01-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs
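
    A generic sketch of the kind of decomposition described above (not the authors' exact algorithm): the step response is fitted by a non-negative combination of exponentials on a fixed grid of candidate time constants using non-negative least squares, and the resulting weights indicate whether the distribution is effectively discrete or continuous. The synthetic curve and the grid are assumptions.

```python
# Generic sketch of estimating a distribution of ventilatory time constants from
# a step-response curve: fit the signal with a non-negative combination of
# exponentials on a fixed grid of candidate time constants (NNLS). Synthetic
# data; this is not the exact algorithm used in the paper.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 10.0, 200)                  # time after the pressure step [s]
# synthetic "fractional gas content" curve: two discrete time constants + noise
true_tc, true_w = np.array([0.5, 2.5]), np.array([0.4, 0.6])
signal = sum(w * (1 - np.exp(-t / tc)) for w, tc in zip(true_w, true_tc))
signal += np.random.default_rng(4).normal(scale=0.01, size=t.size)

tc_grid = np.geomspace(0.05, 20.0, 80)           # candidate time constants
basis = 1 - np.exp(-t[:, None] / tc_grid[None, :])
weights, resid_norm = nnls(basis, signal)

top = np.argsort(weights)[::-1][:4]
print("dominant time constants:", np.round(tc_grid[top], 2))
print("their weights:          ", np.round(weights[top], 2))
print("residual norm:           %.4f" % resid_norm)
```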

  15. Symmetries in discrete-time mechanics

    International Nuclear Information System (INIS)

    Khorrami, M.

    1996-01-01

    Based on a general formulation for discrete-time quantum mechanics, introduced by M. Khorrami (Annals Phys. 224 (1995), 101), symmetries in discrete-time quantum mechanics are investigated. It is shown that any classical continuous symmetry leads to a conserved quantity in classical mechanics, as well as quantum mechanics. The transformed wave function, however, has the correct evolution if and only if the symmetry is nonanomalous. Copyright 1996 Academic Press, Inc.

  16. Finite approximations in discrete-time stochastic control quantized models and asymptotic optimality

    CERN Document Server

    Saldi, Naci; Yüksel, Serdar

    2018-01-01

    In a unified form, this monograph presents fundamental results on the approximation of centralized and decentralized stochastic control problems, with uncountable state, measurement, and action spaces. It demonstrates how quantization provides a system-independent and constructive method for the reduction of a system with Borel spaces to one with finite state, measurement, and action spaces. In addition to this constructive view, the book considers both the information transmission approach for discretization of actions, and the computational approach for discretization of states and actions. Part I of the text discusses Markov decision processes and their finite-state or finite-action approximations, while Part II builds from there to finite approximations in decentralized stochastic control problems. This volume is perfect for researchers and graduate students interested in stochastic controls. With the tools presented, readers will be able to establish the convergence of approximation models to original mo...

  17. Continuity Properties of Distances for Markov Processes

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand

    2014-01-01

    In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...

  18. Soundness of Timed-Arc Workflow Nets in Discrete and Continuous-Time Semantics

    DEFF Research Database (Denmark)

    Mateo, Jose Antonio; Srba, Jiri; Sørensen, Mathias Grund

    2015-01-01

    Analysis of workflow processes with quantitative aspectslike timing is of interest in numerous time-critical applications. We suggest a workflow model based on timed-arc Petri nets and studythe foundational problems of soundness and strong (time-bounded) soundness.We first consider the discrete-t...

  19. Continuous strong Markov processes in dimension one a stochastic calculus approach

    CERN Document Server

    Assing, Sigurd

    1998-01-01

    The book presents an in-depth study of arbitrary one-dimensional continuous strong Markov processes using methods of stochastic calculus. Departing from the classical approaches, a unified investigation of regular as well as arbitrary non-regular diffusions is provided. A general construction method for such processes, based on a generalization of the concept of a perfect additive functional, is developed. The intrinsic decomposition of a continuous strong Markov semimartingale is discovered. The book also investigates relations to stochastic differential equations and fundamental examples of irregular diffusions.

  20. Extracting Markov Models of Peptide Conformational Dynamics from Simulation Data.

    Science.gov (United States)

    Schultheis, Verena; Hirschberger, Thomas; Carstens, Heiko; Tavan, Paul

    2005-07-01

    A high-dimensional time series obtained by simulating a complex and stochastic dynamical system (like a peptide in solution) may code an underlying multiple-state Markov process. We present a computational approach to most plausibly identify and reconstruct this process from the simulated trajectory. Using a mixture of normal distributions we first construct a maximum likelihood estimate of the point density associated with this time series and thus obtain a density-oriented partition of the data space. This discretization allows us to estimate the transfer operator as a matrix of moderate dimension at sufficient statistics. A nonlinear dynamics involving that matrix and, alternatively, a deterministic coarse-graining procedure are employed to construct respective hierarchies of Markov models, from which the model most plausibly mapping the generating stochastic process is selected by consideration of certain observables. Within both procedures the data are classified in terms of prototypical points, the conformations, marking the various Markov states. As a typical example, the approach is applied to analyze the conformational dynamics of a tripeptide in solution. The corresponding high-dimensional time series has been obtained from an extended molecular dynamics simulation.
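
    The core bookkeeping behind such a Markov model is sketched below: given a trajectory that has already been discretized into state labels, transitions are counted at a chosen lag and row-normalized into an estimate of the transfer matrix. The toy three-state trajectory is an assumption; the Gaussian-mixture partitioning and the coarse-graining hierarchy of the paper are not reproduced.

```python
# Core bookkeeping behind a Markov state model: count transitions at a chosen
# lag time in a state-labelled trajectory and row-normalize into an estimate of
# the transfer matrix. The toy 3-state trajectory below is an assumption.
import numpy as np

def estimate_transition_matrix(labels, n_states, lag=1):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        counts[a, b] += 1.0
    counts += 1e-12                       # avoid division by zero for empty rows
    return counts / counts.sum(axis=1, keepdims=True)

# toy discretized trajectory generated from a known 3-state chain
rng = np.random.default_rng(5)
P_true = np.array([[0.90, 0.08, 0.02],
                   [0.10, 0.85, 0.05],
                   [0.02, 0.08, 0.90]])
labels = [0]
for _ in range(50000):
    labels.append(rng.choice(3, p=P_true[labels[-1]]))

P_hat = estimate_transition_matrix(np.array(labels), 3, lag=1)
print(np.round(P_hat, 3))
```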

  1. Modeling commodity salam contract between two parties for discrete and continuous time series

    Science.gov (United States)

    Hisham, Azie Farhani Badrol; Jaffar, Maheran Mohd

    2017-08-01

    In order for Islamic finance to remain as competitive as conventional finance, new syariah-compliant products, such as Islamic derivatives, need to be developed to manage risk. However, under syariah principles and regulations, financial instruments must not conflict with five syariah elements: riba (interest), rishwah (corruption), gharar (uncertainty or unnecessary risk), maysir (speculation or gambling) and jahl (taking advantage of the counterparty's ignorance). This study proposes building a traditional Islamic contract, namely salam, into an Islamic derivative product. Although many studies have discussed and proposed the implementation of the salam contract as an Islamic product, most of them focus on qualitative and legal issues. Given the lack of quantitative work on the salam contract, this study introduces mathematical models that value the appropriate salam price for a commodity salam contract between two parties. In modeling the commodity salam contract, this study modifies the existing conventional derivative model with some adjustments to comply with syariah rules and regulations. The cost of carry model has been chosen as the foundation to develop the commodity salam model between two parties for discrete and continuous time series. However, the conventional time value of money rests on the concept of interest, which is prohibited in Islam. Therefore, this study adopts the Islamic notion of the time value of money, known as positive time preference, in modeling the commodity salam contract between two parties for discrete and continuous time series.
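
    For concreteness, the sketch below evaluates the conventional cost-of-carry relationship, which the study takes as its foundation, under discrete and continuous compounding. The numbers are illustrative, and the syariah-compliant adjustment (replacing interest with a positive time preference rate) is not shown.

```python
# Conventional cost-of-carry forward price under discrete and continuous
# compounding; the spot, carry rate and maturity are illustrative assumptions
# and the syariah-adjusted salam model of the paper is not reproduced here.
import math

spot, carry_rate, T = 250.0, 0.06, 0.75       # commodity spot, annual carry cost, years

forward_continuous = spot * math.exp(carry_rate * T)
forward_discrete = spot * (1.0 + carry_rate) ** T

print("continuous compounding: %.2f" % forward_continuous)
print("discrete compounding:   %.2f" % forward_discrete)
```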

  2. Adiabatic condition and the quantum hitting time of Markov chains

    International Nuclear Information System (INIS)

    Krovi, Hari; Ozols, Maris; Roland, Jeremie

    2010-01-01

    We present an adiabatic quantum algorithm for the abstract problem of searching marked vertices in a graph, or spatial search. Given a random walk (or Markov chain) P on a graph with a set of unknown marked vertices, one can define a related absorbing walk P′ where outgoing transitions from marked vertices are replaced by self-loops. We build a Hamiltonian H(s) from the interpolated Markov chain P(s) = (1−s)P + sP′ and use it in an adiabatic quantum algorithm to drive an initial superposition over all vertices to a superposition over marked vertices. The adiabatic condition implies that, for any reversible Markov chain and any set of marked vertices, the running time of the adiabatic algorithm is given by the square root of the classical hitting time. This algorithm therefore demonstrates a novel connection between the adiabatic condition and the classical notion of hitting time of a random walk. It also significantly extends the scope of previous quantum algorithms for this problem, which could only obtain a full quadratic speedup for state-transitive reversible Markov chains with a unique marked vertex.
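
    The classical hitting time that sets the quantum running time (up to the square root) can be computed directly for small examples by solving a linear system on the unmarked vertices, as sketched below for a random walk on a cycle with one marked vertex; the graph and walk are assumptions chosen for the illustration.

```python
# Classical hitting time of the marked set for a random walk P, obtained by
# solving (I - Q) h = 1 on the unmarked vertices, where Q is P restricted to
# those vertices. The adiabatic algorithm above scales as sqrt of this quantity.
# Example graph (a cycle with one marked vertex) is an assumption.
import numpy as np

N, marked = 30, {0}
P = np.zeros((N, N))
for i in range(N):
    P[i, (i + 1) % N] = 0.5
    P[i, (i - 1) % N] = 0.5

unmarked = [i for i in range(N) if i not in marked]
Q = P[np.ix_(unmarked, unmarked)]
h = np.linalg.solve(np.eye(len(unmarked)) - Q, np.ones(len(unmarked)))

pi = np.full(N, 1.0 / N)                 # stationary distribution of this walk
HT = float(pi[unmarked] @ h)             # expected hitting time from stationarity
print("classical hitting time:           %.1f" % HT)
print("quantum (adiabatic) scaling ~ sqrt: %.1f" % np.sqrt(HT))
```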

  3. Ecological monitoring in a discrete-time prey-predator model.

    Science.gov (United States)

    Gámez, M; López, I; Rodríguez, C; Varga, Z; Garay, J

    2017-09-21

    The paper is aimed at the methodological development of ecological monitoring in discrete-time dynamic models. In earlier papers, in the framework of continuous-time models, we have shown how a systems-theoretical methodology can be applied to the monitoring of the state process of a system of interacting populations, also estimating certain abiotic environmental changes such as pollution, climatic or seasonal changes. In practice, however, there may be good reasons to use discrete-time models. (For instance, there may be discrete cycles in the development of the populations, or observations can be made only at discrete time steps.) Therefore the present paper is devoted to the development of the monitoring methodology in the framework of discrete-time models of population ecology. By monitoring we mean that, observing only certain component(s) of the system, we reconstruct the whole state process. This may be necessary, e.g., when in a complex ecosystem the observation of the densities of certain species is impossible, or too expensive. For the first presentation of the offered methodology, we have chosen a discrete-time version of the classical Lotka-Volterra prey-predator model. This is a minimal but not trivial system where the methodology can still be presented. We also show how this methodology can be applied to estimate the effect of an abiotic environmental change, using a component of the population system as an environmental indicator. Although this approach is illustrated in a simplest possible case, it can be easily extended to larger ecosystems with several interacting populations and different types of abiotic environmental effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Generating Li–Yorke chaos in a stable continuous-time T–S fuzzy model via time-delay feedback control

    International Nuclear Information System (INIS)

    Qiu-Ye, Sun; Hua-Guang, Zhang; Yan, Zhao

    2010-01-01

    This paper investigates the chaotification problem of a stable continuous-time T–S fuzzy system. A simple nonlinear state time-delay feedback controller is designed by parallel distributed compensation technique. Then, the asymptotically approximate relationship between the controlled continuous-time T–S fuzzy system with time-delay and a discrete-time T–S fuzzy system is established. Based on the discrete-time T–S fuzzy system, it proves that the chaos in the discrete-time T–S fuzzy system satisfies the Li–Yorke definition by choosing appropriate controller parameters via the revised Marotto theorem. Finally, the effectiveness of the proposed chaotic anticontrol method is verified by a practical example. (general)

  5. Continuous sweep versus discrete step protocols for studying effects of wearable robot assistance magnitude.

    Science.gov (United States)

    Malcolm, Philippe; Rossi, Denise Martineli; Siviy, Christopher; Lee, Sangjun; Quinlivan, Brendan Thomas; Grimmer, Martin; Walsh, Conor J

    2017-07-12

    Different groups developed wearable robots for walking assistance, but there is still a need for methods to quickly tune actuation parameters for each robot and population or sometimes even for individual users. Protocols where parameters are held constant for multiple minutes have traditionally been used for evaluating responses to parameter changes such as metabolic rate or walking symmetry. However, these discrete protocols are time-consuming. Recently, protocols have been proposed where a parameter is changed in a continuous way. The aim of the present study was to compare effects of continuously varying assistance magnitude with a soft exosuit against discrete step conditions. Seven participants walked on a treadmill wearing a soft exosuit that assists plantarflexion and hip flexion. In Continuous-up, peak exosuit ankle moment linearly increased from approximately 0 to 38% of biological moment over 10 min. Continuous-down was the opposite. In Discrete, participants underwent five periods of 5 min with steady peak moment levels distributed over the same range as Continuous-up and Continuous-down. We calculated metabolic rate for the entire Continuous-up and Continuous-down conditions and the last 2 min of each Discrete force level. We compared kinematics, kinetics and metabolic rate between conditions by curve fitting versus peak moment. Reduction in metabolic rate compared to Powered-off was smaller in Continuous-up than in Continuous-down at most peak moment levels, due to physiological dynamics causing metabolic measurements in Continuous-up and Continuous-down to lag behind the values expected during steady-state testing. When evaluating the average slope of metabolic reduction over the entire peak moment range there was no significant difference between Continuous-down and Discrete. Attempting to correct the lag in metabolics by taking the average of Continuous-up and Continuous-down removed all significant differences versus Discrete. For kinematic and

  6. Stability of continuous-time quantum filters with measurement imperfections

    Science.gov (United States)

    Amini, H.; Pellegrini, C.; Rouchon, P.

    2014-07-01

    The fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter, is shown to be always a submartingale. The observed system is assumed to be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes and that takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar stability result already established for discrete-time quantum systems and where the measurement imperfections are modelled by a left stochastic matrix.

  7. Discrete integration of continuous Kalman filtering equations for time invariant second-order structural systems

    Science.gov (United States)

    Park, K. C.; Belvin, W. Keith

    1990-01-01

    A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.

  8. Embedding a State Space Model Into a Markov Decision Process

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren

    2011-01-01

    In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models...

  9. Markov Switching Modeling with Time-Varying Transition Probability

    OpenAIRE

    Savitri, Anggita Puri; Warsito, Budi; Rahmawati, Rita

    2016-01-01

    The exchange rate is an economic variable that reflects a country's state of economy. It fluctuates over time because it can switch between conditions or regimes driven by economic and political factors. The changes in the exchange rate are depreciation and appreciation. Therefore, it can be modeled using Markov Switching with Time-Varying Transition Probability, which captures the conditional regime changes and uses an information variable. From this model, time-varying transition probabili...

  10. Timed Comparisons of Semi-Markov Processes

    DEFF Research Database (Denmark)

    Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio

    2018-01-01

    -Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being “faster than” between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...

  11. Identification of parameters of discrete-continuous models

    International Nuclear Information System (INIS)

    Cekus, Dawid; Warys, Pawel

    2015-01-01

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism, and the optimization is based on a genetic algorithm. The presented procedure makes it possible to identify any parameters of discrete-continuous systems

  12. Identification of parameters of discrete-continuous models

    Energy Technology Data Exchange (ETDEWEB)

    Cekus, Dawid, E-mail: cekus@imipkm.pcz.pl; Warys, Pawel, E-mail: warys@imipkm.pcz.pl [Institute of Mechanics and Machine Design Foundations, Czestochowa University of Technology, Dabrowskiego 73, 42-201 Czestochowa (Poland)

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism, and the optimization is based on a genetic algorithm. The presented procedure makes it possible to identify any parameters of discrete-continuous systems.

  13. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.

  14. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
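
    One common discrete strategy, a VIX-style discretization of the log contract, is sketched below and checked on synthetic Black-Scholes prices with flat volatility, so the replicated variance can be compared with the input variance. This is an illustration of discrete replication in general, not necessarily one of the specific strategies compared in the paper; the strikes, maturity and volatility are assumptions.

```python
# A standard discrete-strike approximation of the variance swap replication
# (a VIX-style discretization of the log contract), checked on synthetic
# Black-Scholes prices with flat volatility. Strikes, maturity, rate and vol
# are illustrative assumptions.
import numpy as np
from scipy.stats import norm

S0, r, T, vol = 100.0, 0.01, 0.5, 0.2
F = S0 * np.exp(r * T)                                # forward price

def bs_price(K, call):
    d1 = (np.log(S0 / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    if call:
        return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

strikes = np.arange(50.0, 155.0, 5.0)                 # discrete traded strikes
K0 = strikes[strikes <= F].max()                      # first strike at or below the forward
otm = np.array([bs_price(K, call=(K > K0)) for K in strikes])

dK = np.gradient(strikes)                             # strike spacings
var_swap = (2.0 / T) * np.exp(r * T) * np.sum(dK / strikes**2 * otm) \
           - (1.0 / T) * (F / K0 - 1.0) ** 2
print("replicated variance: %.4f  (input vol^2 = %.4f)" % (var_swap, vol**2))
```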

  15. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  16. Dense time discretization technique for verification of real time systems

    International Nuclear Information System (INIS)

    Makackas, Dalius; Miseviciene, Regina

    2016-01-01

    When verifying real-time systems, two different models are used to handle time: discrete-time and dense-time models. This paper proposes a novel verification technique that calculates discrete time intervals from dense time in order to construct all the system states that can be reached from the initial system state. The technique is designed for real-time systems specified by a piece-linear aggregate approach. Key words: real-time system, dense time, verification, model checking, piece-linear aggregate

  17. A latent class multiple constraint multiple discrete-continuous extreme value model of time use and goods consumption.

    Science.gov (United States)

    2016-06-01

    This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...

  18. Optimization of stochastic discrete systems and control on complex networks computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  19. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models.

    Science.gov (United States)

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.

  20. Optimal Time-Abstract Schedulers for CTMDPs and Markov Games

    Directory of Open Access Journals (Sweden)

    Markus Rabe

    2010-06-01

    Full Text Available We study time-bounded reachability in continuous-time Markov decision processes for time-abstract scheduler classes. Such reachability problems play a paramount role in dependability analysis and the modelling of manufacturing and queueing systems. Consequently, their analysis has been studied intensively, and techniques for the approximation of optimal control are well understood. From a mathematical point of view, however, the question of approximation is secondary compared to the fundamental question whether or not optimal control exists. We demonstrate the existence of optimal schedulers for the time-abstract scheduler classes for all CTMDPs. Our proof is constructive: We show how to compute optimal time-abstract strategies with finite memory. It turns out that these optimal schedulers have an amazingly simple structure: they converge to an easy-to-compute memoryless scheduling policy after a finite number of steps. Finally, we show that our argument can easily be lifted to Markov games: We show that both players have a likewise simple optimal strategy in these more general structures.

  1. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    Science.gov (United States)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
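
    As a minimal sketch of the first step (building an initial MSM from discretised MD trajectories, before any refinement against experimental data), the snippet below estimates a row-stochastic transition matrix and its stationary distribution; the trajectories and lag time are invented.

```python
import numpy as np

def estimate_msm(dtrajs, n_states, lag=1):
    """Maximum-likelihood MSM transition matrix from discrete trajectories."""
    counts = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for a, b in zip(traj[:-lag], traj[lag:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Row-normalise; rows with no observed counts become self-transitions.
    T = np.where(row_sums > 0, counts / row_sums.clip(min=1), np.eye(n_states))
    return T

dtrajs = [np.array([0, 0, 1, 1, 2, 1, 0]), np.array([2, 2, 1, 0, 0])]
T = estimate_msm(dtrajs, n_states=3, lag=1)

# Equilibrium populations = left eigenvector of T for eigenvalue 1.
w, V = np.linalg.eig(T.T)
pi = np.real(V[:, np.argmax(np.real(w))])
print(T.round(3), (pi / pi.sum()).round(3))
```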

  2. Process Algebra and Markov Chains

    NARCIS (Netherlands)

    Brinksma, Hendrik; Hermanns, H.; Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  3. Process algebra and Markov chains

    NARCIS (Netherlands)

    Brinksma, E.; Hermanns, H.; Brinksma, E.; Hermanns, H.; Katoen, J.P.

    2001-01-01

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  4. Discrete and continuous simulation theory and practice

    CERN Document Server

    Bandyopadhyay, Susmita

    2014-01-01

    When it comes to discovering glitches inherent in complex systems-be it a railway or banking, chemical production, medical, manufacturing, or inventory control system-developing a simulation of a system can identify problems with less time, effort, and disruption than it would take to employ the original. Advantageous to both academic and industrial practitioners, Discrete and Continuous Simulation: Theory and Practice offers a detailed view of simulation that is useful in several fields of study.This text concentrates on the simulation of complex systems, covering the basics in detail and exploring the diverse aspects, including continuous event simulation and optimization with simulation. It explores the connections between discrete and continuous simulation, and applies a specific focus to simulation in the supply chain and manufacturing field. It discusses the Monte Carlo simulation, which is the basic and traditional form of simulation. It addresses future trends and technologies for simulation, with par...

  5. SOA thresholds for the perception of discrete/continuous tactile stimulation

    DEFF Research Database (Denmark)

    Eid, Mohamad; Korres, Georgios; Jensen, Camilla Birgitte Falk

    In this paper we present an experiment to measure the upper and lower thresholds of the Stimulus Onset Asynchrony (SOA) for continuous/discrete apparent haptic motion. We focus on three stimulation parameters: the burst duration, the SOA time, and the inter-actuator distance (between successive......-discrete boundary at lower SOA. Furthermore, the larger the inter-actuator distance, the more linear the relationship between the burst duration and the SOA timing. Finally, the large range between lower and upper thresholds for SOA can be utilized to create continuous movement stimulation on the skin at “varying...... speeds”. The results are discussed in reference to designing a tactile interface for providing continuous haptic motion with a desired speed of continuous tactile stimulation....

  6. Harmonic spectral components in time sequences of Markov correlated events

    Science.gov (United States)

    Mazzetti, Piero; Carbone, Anna

    2017-07-01

    The paper concerns the analysis of the conditions that allow time sequences of Markov correlated events to give rise to a line power spectrum of relevant physical interest. It is found that by specializing the Markov matrix in order to represent closed loop sequences of events with arbitrary distribution, generated in a steady physical condition, a large set of line spectra, covering all possible frequency values, is obtained. The amplitude of the spectral lines is given by a matrix equation based on a generalized Markov matrix involving the Fourier transform of the distribution functions representing the time intervals between successive events of the sequence. The paper is a complement of a previous work where a general expression for the continuous power spectrum was given. In that case the Markov matrix was left in a more general form, thus preventing the possibility of finding line spectra of physical interest. The present extension is also suggested by the interest of explaining the emergence of a broad set of waves found in the electro- and magneto-encephalograms, whose frequency ranges from 0.5 to about 40 Hz, in terms of the effects produced by chains of firing neurons within the complex neural network of the brain. An original model based on synchronized closed loop sequences of firing neurons is proposed, and a few numerical simulations are reported as an application of the above cited equation.
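
    The matrix equation for the line amplitudes is not reproduced here; the sketch below merely simulates a two-state, closed-loop Markov-correlated event sequence with invented interval distributions and checks numerically that the binned event train shows a spectral peak at the expected frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.0, 1.0],          # closed-loop alternation between two event types
              [1.0, 0.0]])
durations = [0.6, 1.4]             # state-dependent mean interval (arbitrary units)
jitter = 0.05                      # small spread around each mean interval

state, t, times = 0, 0.0, []
for _ in range(20000):
    t += max(rng.normal(durations[state], jitter), 1e-3)
    times.append(t)
    state = rng.choice(2, p=P[state])

# Bin the event train and estimate its power spectrum with a periodogram.
dt = 0.02
train, _ = np.histogram(times, bins=np.arange(0.0, times[-1], dt))
train = train - train.mean()
spec = np.abs(np.fft.rfft(train)) ** 2 / len(train)
freqs = np.fft.rfftfreq(len(train), d=dt)
peak = freqs[1 + np.argmax(spec[1:])]
print("spectral peak at %.3f (expected near %.3f)" % (peak, 1.0 / sum(durations)))
```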

  7. Spectral analysis of multi-dimensional self-similar Markov processes

    International Nuclear Information System (INIS)

    Modarresi, N; Rezakhah, S

    2010-01-01

    In this paper we consider a discrete scale invariant (DSI) process {X(t), t in R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k in W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete time scale invariant (DT-SI) process X(.) with the parameter space {α^k, k in W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, spectral density functions of them are presented. We assume that the process {X(t), t in R+} is also Markov in the wide sense and provide a discrete time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T - 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.

  8. Output-Feedback Control for Discrete-Time Spreading Models in Complex Networks

    Directory of Open Access Journals (Sweden)

    Luis A. Alarcón Ramos

    2018-03-01

    Full Text Available The problem of stabilizing the spreading process to a prescribed probability distribution over a complex network is considered, where the dynamics of the nodes in the network is given by discrete-time Markov-chain processes. Conditions for the positioning and identification of actuators and sensors are provided, and sufficient conditions for the exponential stability of the desired distribution are derived. Simulation results for a network of N = 10^6 nodes corroborate our theoretical findings.
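
    The output-feedback design is not sketched here; the snippet below only illustrates the type of discrete-time Markov-chain (mean-field) node dynamics used in such spreading models, on a random graph with invented infection and recovery parameters (the exact dynamics in the paper may differ).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
A = (rng.random((N, N)) < 0.03).astype(float)   # random adjacency (illustrative)
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops

beta, delta = 0.08, 0.3          # per-step infection and recovery probabilities
p = np.full(N, 0.01)             # initial infection probabilities

for k in range(200):
    # Probability of escaping infection from every infected neighbour.
    not_infected = np.prod(1.0 - beta * A * p[None, :], axis=1)
    p = (1.0 - p) * (1.0 - not_infected) + (1.0 - delta) * p

print("average infection probability:", round(p.mean(), 4))
```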

  9. On the Total Variation Distance of Semi-Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2015-01-01

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...

  10. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    International Nuclear Information System (INIS)

    Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard

    2014-01-01

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space

  11. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    Energy Technology Data Exchange (ETDEWEB)

    Nedialkova, Lilia V.; Amat, Miguel A. [Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey 08544 (United States); Kevrekidis, Ioannis G., E-mail: yannis@princeton.edu, E-mail: gerhard.hummer@biophys.mpg.de [Department of Chemical and Biological Engineering and Program in Applied and Computational Mathematics, Princeton University, Princeton, New Jersey 08544 (United States); Hummer, Gerhard, E-mail: yannis@princeton.edu, E-mail: gerhard.hummer@biophys.mpg.de [Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Max-von-Laue-Str. 3, 60438 Frankfurt am Main (Germany)

    2014-09-21

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.

  12. Order-disorder transitions in time-discrete mean field systems with memory: a novel approach via nonlinear autoregressive models

    International Nuclear Information System (INIS)

    Frank, T D; Mongkolsakulvong, S

    2015-01-01

    In a previous study strongly nonlinear autoregressive (SNAR) models have been introduced as a generalization of the widely-used time-discrete autoregressive models that are known to apply both to Markov and non-Markovian systems. In contrast to conventional autoregressive models, SNAR models depend on process mean values. So far, only linear dependences have been studied. We consider the case in which process mean values can have a nonlinear impact on the processes under consideration. It is shown that such models describe Markov and non-Markovian many-body systems with mean field forces that exhibit a nonlinear impact on single subsystems. We exemplify that such nonlinear dependences can describe order-disorder phase transitions of time-discrete Markovian and non-Markovian many-body systems. The relevant order parameter equations are derived and issues of stability and stationarity are studied. (paper)

  13. A Hybrid Secure Scheme for Wireless Sensor Networks against Timing Attacks Using Continuous-Time Markov Chain and Queueing Model.

    Science.gov (United States)

    Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin

    2016-09-28

    Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing between security and the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs while maintaining the requirement of the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model are put forward, and the tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and the optimal rekeying rate of the performance and security tradeoff can be obtained.
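
    The queueing part of the hybrid model is omitted; the sketch below shows, for a generic three-state CTMC with an invented generator, the two computations such an analysis typically rests on: the steady-state distribution (solving πQ = 0) and the mean time to reach a "compromised" state.

```python
import numpy as np

# Illustrative generator: 0 = secure, 1 = under attack, 2 = compromised,
# with rekeying/repair returning the system from state 2 to state 0.
Q = np.array([[-0.5, 0.4, 0.1],
              [0.6, -0.8, 0.2],
              [1.0, 0.0, -1.0]])

# Steady-state distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
print("steady state:", pi.round(4))

# Mean time to security failure: treat state 2 as absorbing and solve
# (-Q_TT) m = 1 over the transient states {0, 1}.
QTT = Q[:2, :2]
mttf = np.linalg.solve(-QTT, np.ones(2))
print("mean time to reach the compromised state:", mttf.round(2))
```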

  14. Markov chains and semi-Markov models in time-to-event analysis.

    Science.gov (United States)

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
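
    As a small worked example of a multi-state model of this kind, the sketch below propagates a discrete-time transition matrix with two competing absorbing outcomes and reads off state-occupancy and cumulative-incidence curves; all transition probabilities are invented.

```python
import numpy as np

# States: 0 = healthy, 1 = ill, 2 = death from disease, 3 = death, other causes.
P = np.array([[0.90, 0.06, 0.01, 0.03],
              [0.05, 0.80, 0.10, 0.05],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])

occ = np.zeros((21, 4))
occ[0] = [1.0, 0.0, 0.0, 0.0]            # everyone starts healthy
for t in range(20):
    occ[t + 1] = occ[t] @ P              # Chapman-Kolmogorov step

print("P(alive) over time:", occ[:, :2].sum(axis=1).round(3))
print("cumulative incidence of disease death:", occ[:, 2].round(3))
```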

  15. Robust Estimation for Discrete Markov System with Time-Varying Delay and Missing Measurements

    Directory of Open Access Journals (Sweden)

    Jia You

    2013-01-01

    Full Text Available This paper addresses the ℋ∞ filtering problem for time-delayed Markov jump systems (MJSs) with intermittent measurements. Within a network environment, missing measurements are taken into account, since the communication channel is supposed to be imperfect. A Bernoulli process is utilized to describe the phenomenon of the missing measurements. The original system is transformed into an input-output form consisting of two interconnected subsystems. Based on the scaled small gain (SSG) theorem and the proposed Lyapunov-Krasovskii functional (LKF), the scaled small gains of the subsystems are analyzed, respectively. New conditions for the existence of the ℋ∞ filters are established, and the corresponding ℋ∞ filter design scheme is proposed. Finally, a simulation example is provided to demonstrate the effectiveness of the proposed approach.

  16. Filtering of Discrete-Time Switched Neural Networks Ensuring Exponential Dissipative and l2-l∞ Performances.

    Science.gov (United States)

    Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg

    2017-10-01

    This paper studies delay-dependent exponential dissipative and l2-l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l2-l∞ senses. The design of the desired exponential dissipative and l2-l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the desired discrete-time filter design approach.

  17. Regeneration and general Markov chains

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kalashnikov

    1994-01-01

    Full Text Available Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations, and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide-sense regenerative process with the help of the splitting construction.

  18. Markov stochasticity coordinates

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method–termed Markov Stochasticity Coordinates–is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  19. Markov stochasticity coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2017-01-15

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method–termed Markov Stochasticity Coordinates–is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  20. A Markov Model for Common-Cause Failures

    DEFF Research Database (Denmark)

    Platz, Ole

    1984-01-01

    A continuous time four-state Markov chain is shown to cover several of the models that have been used for describing dependencies between failures of components in redundant systems. Among these are the models derived by Marshall and Olkin and by Freund and models for one-out-of-three and two...
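
    The report's exact model is not reproduced (the record above is truncated); the sketch below writes down a generic four-state generator for a two-component redundant system with independent and common-cause failures plus repair, and computes its steady-state unavailability. All rates are invented.

```python
import numpy as np

lam, lam_c, mu = 1e-3, 2e-4, 1e-1    # independent failure, common cause, repair
# States: 0 = both up, 1 = only A up, 2 = only B up, 3 = both down.
Q = np.array([
    [-(2*lam + lam_c),  lam,                 lam,                 lam_c      ],
    [ mu,              -(lam + lam_c + mu),  0.0,                 lam + lam_c],
    [ mu,               0.0,                -(lam + lam_c + mu),  lam + lam_c],
    [ 0.0,              mu,                  mu,                 -2*mu       ],
])

# Steady state: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 0.0, 1.0]), rcond=None)
print("steady-state unavailability (both components down):", pi[3])
```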

  1. The Markov chain method for solving dead time problems in the space dependent model of reactor noise

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    The discrete time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of space energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point model equations with suitably defined parameters. (author)

  2. Convergence of discrete Aubry–Mather model in the continuous limit

    Science.gov (United States)

    Su, Xifeng; Thieullen, Philippe

    2018-05-01

    We develop two approximation schemes for solving the cell equation and the discounted cell equation using Aubry–Mather–Fathi theory. The Hamiltonian is supposed to be Tonelli, time-independent and periodic in space. By Legendre transform it is equivalent to find a fixed point of some nonlinear operator, called Lax-Oleinik operator, which may be discounted or not. By discretizing in time, we are led to solve an additive eigenvalue problem involving a discrete Lax–Oleinik operator. We show how to approximate the effective Hamiltonian and some weak KAM solutions by letting the time step in the discrete model tend to zero. We also obtain a selected discrete weak KAM solution as in Davini et al (2016 Invent. Math. 206 29–55), and show that it converges to a particular solution of the cell equation. In order to unify the two settings, continuous and discrete, we develop a more general formalism of the short-range interactions.

  3. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    OpenAIRE

    Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...

  4. Modeling Uncertainty of Directed Movement via Markov Chains

    Directory of Open Access Journals (Sweden)

    YIN Zhangcai

    2015-10-01

    Full Text Available Probabilistic time geography (PTG) is suggested as an extension of (classical) time geography, in order to express by a probability the uncertainty of an agent being located at an accessible position. This may provide a quantitative basis for finding the most likely location of an agent. In recent years, PTG based on the normal distribution or the Brownian bridge has been proposed; its variance, however, is either unrelated to the agent's speed or diverges as the speed increases, so such models struggle to combine application relevance with stability. In this paper, a new method is proposed to model PTG based on Markov chains. Firstly, a bidirectionally conditioned Markov chain is modeled, whose limit, when the moving speed is large enough, can be regarded as the Brownian bridge and thus has the property of numerical stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, the state space and the transition matrix of the Markov chain according to the space-time position of the directed movement and the movement speed information, so that the Markov chain is tied to the movement speed. Finally, by continuously calculating the probability distribution of the directed movement at any time with the Markov chain, one obtains the probability of the agent being located at each accessible position. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.
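
    A minimal sketch of the bridge-type (bidirectional) conditioning: for a discrete-state, discrete-time chain with transition matrix P, the distribution of the position at an intermediate time given the start and end states follows from forward and backward matrix powers. The walk and its parameters are invented.

```python
import numpy as np

# Simple 1-D random walk on positions 0..10 (reflecting at the ends).
n = 11
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5

def bridge_distribution(P, a, b, T, t):
    """P(X_t = k | X_0 = a, X_T = b) for a time-homogeneous Markov chain."""
    forward = np.linalg.matrix_power(P, t)[a, :]       # P(X_t = k | X_0 = a)
    backward = np.linalg.matrix_power(P, T - t)[:, b]  # P(X_T = b | X_t = k)
    num = forward * backward
    return num / num.sum()                             # divide by P(X_T = b | X_0 = a)

dist = bridge_distribution(P, a=2, b=8, T=12, t=6)
print(dist.round(3))   # mass concentrated between the start and end positions
```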

  5. Multivariable biorthogonal continuous--discrete Wilson and Racah polynomials

    International Nuclear Information System (INIS)

    Tratnik, M.V.

    1990-01-01

    Several families of multivariable, biorthogonal, partly continuous and partly discrete, Wilson polynomials are presented. These yield limit cases that are purely continuous in some of the variables and purely discrete in the others, or purely discrete in all the variables. The latter are referred to as the multivariable biorthogonal Racah polynomials. Interesting further limit cases include the multivariable biorthogonal Hahn and dual Hahn polynomials

  6. Stabilization of discrete-time LTI positive systems

    Directory of Open Access Journals (Sweden)

    Krokavec Dušan

    2017-12-01

    Full Text Available The paper mitigates the existing conditions reported in the previous literature for control design of discrete-time linear positive systems. Incorporating an associated structure of linear matrix inequalities, combined with the Lyapunov inequality guaranteeing asymptotic stability of discrete-time positive system structures, new conditions are presented with which the state-feedback controllers and the system state observers can be designed. Associated solutions of the proposed design conditions are illustrated by numerical illustrative examples.

  7. Discrete Events as Units of Perceived Time

    Science.gov (United States)

    Liverence, Brandon M.; Scholl, Brian J.

    2012-01-01

    In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer may be similar to the spatial case: time is perceived as an…

  8. Detecting critical state before phase transition of complex biological systems by hidden Markov model.

    Science.gov (United States)

    Chen, Pei; Liu, Rui; Li, Yongjun; Chen, Luonan

    2016-07-15

    Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during the gradual parameter variations. Such dynamics of phase transition is generally composed of three stages, i.e. before-transition state, pre-transition state and after-transition state, which can be considered as three different Markov processes. By exploring the rich dynamical information provided by high-throughput data, we present a novel computational method, i.e. hidden Markov model (HMM) based approach, to detect the switching point of the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of the imminent phase transitions of complex systems based on the simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e. the acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin and HCV-induced dysplasia and hepatocellular carcinoma. Both functional and pathway enrichment analyses validate the computational results. The source code and some supporting files are available at https://github.com/rabbitpei/HMM_based-method. Contact: lnchen@sibs.ac.cn or liyj@scut.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
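
    The full switching-point detection pipeline is not reproduced; the sketch below implements only the basic ingredient, a forward-algorithm filter for a two-state Gaussian HMM ("before-transition" vs "pre-transition"), on synthetic data with invented parameters.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Synthetic observations: the variance increases after time 60, a commonly used
# "pre-transition" signature for early-warning detection.
x = np.concatenate([rng.normal(0, 1.0, 60), rng.normal(0, 3.0, 40)])

states = [(0.0, 1.0), (0.0, 3.0)]            # (mean, std) of the two hidden states
A = np.array([[0.98, 0.02],                  # sticky transition matrix (assumed)
              [0.02, 0.98]])
pi = np.array([0.99, 0.01])

alpha = np.zeros((len(x), 2))
alpha[0] = pi * [norm.pdf(x[0], m, s) for m, s in states]
alpha[0] /= alpha[0].sum()
for t in range(1, len(x)):
    pred = alpha[t - 1] @ A                  # one-step prediction
    like = np.array([norm.pdf(x[t], m, s) for m, s in states])
    alpha[t] = pred * like
    alpha[t] /= alpha[t].sum()               # normalised filtering distribution

# Estimated switching point: first time the "pre-transition" state dominates.
print(int(np.argmax(alpha[:, 1] > 0.5)))
```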

  9. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  10. State transformations and Hamiltonian structures for optimal control in discrete systems

    Science.gov (United States)

    Sieniutycz, S.

    2006-04-01

    Preserving usual definition of Hamiltonian H as the scalar product of rates and generalized momenta we investigate two basic classes of discrete optimal control processes governed by the difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at: discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed with particular attention paid to models nonlinear in the time interval θ.

  11. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
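
    The additive construction in the paper uses the full empirical autocorrelation function as input; as a simpler, hedged sketch, the snippet below builds an ordinary two-state Markov chain whose stationary occupancy and lag-1 autocorrelation match prescribed values (and therefore reproduces only a geometrically decaying correlation).

```python
import numpy as np

def binary_markov_series(n, p_high=0.35, rho1=0.9, seed=0):
    """Two-state (0 = low wind, 1 = high wind) Markov chain with stationary
    P(high) = p_high and lag-1 autocorrelation rho1 (lag-k correlation = rho1**k)."""
    rng = np.random.default_rng(seed)
    p11 = p_high + rho1 * (1.0 - p_high)   # P(high -> high)
    p01 = p_high * (1.0 - rho1)            # P(low  -> high)
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p_high
    for t in range(1, n):
        x[t] = rng.random() < (p11 if x[t - 1] else p01)
    return x

x = binary_markov_series(100_000)
print("share of high-wind periods:", round(x.mean(), 3))          # ~0.35
print("lag-1 autocorrelation:", round(np.corrcoef(x[:-1], x[1:])[0, 1], 3))  # ~0.9
```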

  12. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.

  13. Is Fitts' law continuous in discrete aiming?

    Directory of Open Access Journals (Sweden)

    Rita Sleimen-Malkoun

    Full Text Available The lawful continuous linear relation between movement time and task difficulty (i.e., index of difficulty; ID) in a goal-directed rapid aiming task (Fitts' law) has been recently challenged in reciprocal performance. Specifically, a discontinuity was observed at critical ID and was attributed to a transition between two distinct dynamic regimes that occurs with increasing difficulty. In the present paper, we show that such a discontinuity is also present in discrete aiming when ID is manipulated via target width (experiment 1) but not via target distance (experiment 2). Fitts' law's discontinuity appears, therefore, to be a suitable indicator of the underlying functional adaptations of the neuro-muscular-skeletal system to task properties/requirements, independently of the reciprocal or discrete nature of the task. These findings open new perspectives to the study of dynamic regimes involved in discrete aiming and sensori-motor mechanisms underlying the speed-accuracy trade-off.

  14. Current density and continuity in discretized models

    International Nuclear Information System (INIS)

    Boykin, Timothy B; Luisier, Mathieu; Klimeck, Gerhard

    2010-01-01

    Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schroedinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying discrete models, students can encounter conceptual difficulties with the representation of the current and its divergence because different finite-difference expressions, all of which reduce to the current density in the continuous limit, measure different physical quantities. Understanding these different discrete currents is essential and requires a careful analysis of the current operator, the divergence of the current and the continuity equation. Here we develop point forms of the current and its divergence valid for an arbitrary mesh and basis. We show that in discrete models currents exist only along lines joining atomic sites (or mesh points). Using these results, we derive a discrete analogue of the divergence theorem and demonstrate probability conservation in a purely localized-basis approach.
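
    A small numerical check of the point made above, namely that in a discrete model the current lives on the bonds joining sites: for a 1-D nearest-neighbour tight-binding Hamiltonian (hbar = 1) the bond current J(n→n+1) = 2t·Im(ψn* ψn+1) satisfies d|ψn|²/dt = J(n−1→n) − J(n→n+1) exactly. Chain length, hopping strength and test state are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
N, t_hop = 12, 1.0
# 1-D tight-binding Hamiltonian with nearest-neighbour hopping -t (hbar = 1).
H = -t_hop * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

# Exact time derivative of the site occupations from the Schroedinger equation.
drho_dt = 2.0 * np.imag(np.conj(psi) * (H @ psi))

# Bond currents J[n] = current flowing from site n to site n+1.
J = 2.0 * t_hop * np.imag(np.conj(psi[:-1]) * psi[1:])
divergence = np.concatenate(([-J[0]], J[:-1] - J[1:], [J[-1]]))

print(np.max(np.abs(drho_dt - divergence)))   # ~1e-16: discrete continuity holds
```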

  15. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    Science.gov (United States)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling of the tool wear process in micro-milling, and estimation of the tool wear state given the cutting force features. For a noise-robust approach, the HMM outputs are passed through a median filter to suppress premature switches to the next tool state caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on the tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.

  16. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
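
    The Bayesian treatment of the partition P is beyond a short sketch; holding the partition and the mesoscopic time step fixed, the snippet below shows the standard conjugate (Dirichlet) posterior over the rows of the transition matrix given observed transition counts, together with a check of the "diagonal ≥ 0.5" consistency condition. Counts and prior are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Observed transition counts between 3 mesostates at the chosen lag tau.
counts = np.array([[90, 8, 2],
                   [10, 75, 15],
                   [3, 12, 85]], dtype=float)
prior = 1.0 / counts.shape[0]            # sparse symmetric Dirichlet prior (assumed)

# The posterior of each row of T is Dirichlet(counts + prior); draw samples.
samples = np.stack([
    np.array([rng.dirichlet(row + prior) for row in counts])
    for _ in range(2000)
])

T_mean = samples.mean(axis=0)
T_low, T_high = np.percentile(samples, [2.5, 97.5], axis=0)
print(np.round(T_mean, 3))
print("diagonal >= 0.5 in all posterior draws:",
      bool((samples.diagonal(axis1=1, axis2=2) >= 0.5).all()))
```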

  17. Constructing Dynamic Event Trees from Markov Models

    International Nuclear Information System (INIS)

    Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood

    2006-01-01

    In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: (a) partitioning the process variable state space into magnitude intervals (cells), (b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and, (c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as benchmark in the literature with one process variable (liquid level in a tank), and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank
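
    As a toy version of the scenario-generation idea, the sketch below enumerates bounded-depth paths through a discrete-time Markov transition matrix from an initiating state, recording branch probabilities and pruning negligible ones; the matrix is invented and far smaller than a real cell-to-cell mapping model.

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02],    # 0 = nominal, 1 = degraded, 2 = failed
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
FAILED, MAX_DEPTH, PRUNE = 2, 4, 1e-4

def expand(state, prob, path, scenarios):
    """Depth-first expansion of the event tree rooted at the initiating state."""
    if state == FAILED or len(path) > MAX_DEPTH:
        scenarios.append((path, prob))
        return
    for nxt, p in enumerate(P[state]):
        if prob * p >= PRUNE:                 # drop negligible branches
            expand(nxt, prob * p, path + [nxt], scenarios)

scenarios = []
expand(0, 1.0, [0], scenarios)
for path, prob in sorted(scenarios, key=lambda s: -s[1])[:5]:
    print(path, round(prob, 4))
```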

  18. ON THE ANISOTROPIC NORM OF DISCRETE TIME STOCHASTIC SYSTEMS WITH STATE DEPENDENT NOISE

    Directory of Open Access Journals (Sweden)

    Isaac Yaesh

    2013-01-01

    Full Text Available The purpose of this paper is to determine conditions for the boundedness of the anisotropic norm of discrete-time linear stochastic systems with state dependent noise. It is proved that these conditions can be expressed in terms of the feasibility of a specific system of matrix inequalities.

  19. Musical Markov Chains

    Science.gov (United States)

    Volchenkov, Dima; Dawin, Jean René

    A system for using dice to compose music randomly is known as the musical dice game. The discrete time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into the transition matrices and studied by Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on the blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.
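
    First-passage times to notes, mentioned above, follow from a transition matrix by solving a linear system; the sketch below does so for an invented four-note chain.

```python
import numpy as np

# Invented 4-note transition matrix (rows sum to 1).
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.4, 0.2, 0.3, 0.1]])

def mean_first_passage_times(P, target):
    """Expected number of steps to first reach 'target' from every other state:
    solve (I - P_without_target) m = 1 over the remaining states."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    A = np.eye(n - 1) - P[np.ix_(keep, keep)]
    m = np.linalg.solve(A, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = m
    return out          # the entry for 'target' itself is 0 by convention

print(mean_first_passage_times(P, target=2).round(2))
```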

  20. Probabilistic Power Flow Method Considering Continuous and Discrete Variables

    Directory of Open Access Journals (Sweden)

    Xuexia Zhang

    2017-04-01

    Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
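
    The cumulant-based propagation through the power-flow equations is not reproduced; the sketch below only illustrates how continuous (normal load, non-normal wind) and discrete (binomial fuel-cell units) injections can be combined by sampling to obtain the distribution of net power at a bus. All distributions and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

load = rng.normal(50.0, 5.0, n)                   # MW, normally distributed load
wind = 30.0 * rng.weibull(2.0, n)                 # MW, non-normal wind generation
fcg = 2.5 * rng.binomial(n=8, p=0.7, size=n)      # MW, 8 fuel-cell units of 2.5 MW

net_injection = wind + fcg - load
print("mean = %.2f MW, std = %.2f MW" % (net_injection.mean(), net_injection.std()))
print("5th / 95th percentiles:", np.percentile(net_injection, [5, 95]).round(2))
```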

  1. The combinational structure of non-homogeneous Markov chains with countable states

    Directory of Open Access Journals (Sweden)

    A. Mukherjea

    1983-01-01

    Full Text Available Let P(s,t) denote a non-homogeneous continuous parameter Markov chain with countable state space E and parameter space [a,b], −∞ < a < b < ∞, and let R(s,t) = {(i,j): p_ij(s,t) > 0}. It is shown in this paper that R(s,t) is reflexive, transitive, and independent of (s,t), s < t; unlike in the finite state space case, this relation cannot be expressed even as an infinite (countable) product of reflexive transitive relations for certain non-homogeneous chains when E is infinite.

  2. Verification of Open Interactive Markov Chains

    OpenAIRE

    Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech

    2012-01-01

    Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far however, IMC verification did not consider compositionality properties, but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....

  3. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    Hanson, J.D.; Cary, J.R.; Meiss, J.D.

    1984-10-01

    A continuous time Markov chain is used to model motion in the neighborhood of a critical noble invariant circle in an area-preserving map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. The nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to Hamiltonian systems the decay proceeds as t^(-4.05).

  4. Integrating continuous stocks and flows into state-and-transition simulation models of landscape change

    Science.gov (United States)

    Daniel, Colin J.; Sleeter, Benjamin M.; Frid, Leonardo; Fortin, Marie-Josée

    2018-01-01

    State-and-transition simulation models (STSMs) provide a general framework for forecasting landscape dynamics, including projections of both vegetation and land-use/land-cover (LULC) change. The STSM method divides a landscape into spatially-referenced cells and then simulates the state of each cell forward in time, as a discrete-time stochastic process using a Monte Carlo approach, in response to any number of possible transitions. A current limitation of the STSM method, however, is that all of the state variables must be discrete. Here we present a new approach for extending a STSM, in order to account for continuous state variables, called a state-and-transition simulation model with stocks and flows (STSM-SF). The STSM-SF method allows for any number of continuous stocks to be defined for every spatial cell in the STSM, along with a suite of continuous flows specifying the rates at which stock levels change over time. The change in the level of each stock is then simulated forward in time, for each spatial cell, as a discrete-time stochastic process. The method differs from the traditional systems dynamics approach to stock-flow modelling in that the stocks and flows can be spatially-explicit, and the flows can be expressed as a function of the STSM states and transitions. We demonstrate the STSM-SF method by integrating a spatially-explicit carbon (C) budget model with a STSM of LULC change for the state of Hawai'i, USA. In this example, continuous stocks are pools of terrestrial C, while the flows are the possible fluxes of C between these pools. Importantly, several of these C fluxes are triggered by corresponding LULC transitions in the STSM. Model outputs include changes in the spatial and temporal distribution of C pools and fluxes across the landscape in response to projected future changes in LULC over the next 50 years. The new STSM-SF method allows both discrete and continuous state variables to be integrated into a STSM, including interactions between
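
    A highly simplified sketch of the stock-flow extension described above: each cell carries a discrete LULC state and a continuous carbon stock, state transitions are sampled each time step, and a transition-triggered flux removes carbon from the converted cells. States, rates and flux fractions are invented and are not taken from the Hawai'i application.

```python
import numpy as np

rng = np.random.default_rng(6)
FOREST, AGRICULTURE = 0, 1
P = np.array([[0.98, 0.02],      # annual transition probabilities (illustrative)
              [0.01, 0.99]])
growth = {FOREST: 2.0, AGRICULTURE: 0.5}     # tC/ha/yr accumulated in each state
clearing_loss = 0.6                          # fraction of the stock emitted on conversion

n_cells, years = 1000, 50
state = np.zeros(n_cells, dtype=int)         # all cells start as forest
carbon = np.full(n_cells, 100.0)             # initial carbon stock (tC/ha)
emitted = 0.0

for _ in range(years):
    new_state = np.array([rng.choice(2, p=P[s]) for s in state])
    converted = (state == FOREST) & (new_state == AGRICULTURE)
    emitted += (clearing_loss * carbon[converted]).sum()     # transition-triggered flux
    carbon[converted] *= (1.0 - clearing_loss)
    carbon += np.vectorize(growth.get)(new_state)            # state-dependent growth flux
    state = new_state

print("mean carbon stock: %.1f tC/ha, cumulative emissions: %.0f tC"
      % (carbon.mean(), emitted))
```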

  5. Discrete-Slots Models of Visual Working-Memory Response Times

    Science.gov (United States)

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956

  6. Time Evolution Of The Wigner Function In Discrete Quantum Phase Space For A Soluble Quasi-spin Model

    CERN Document Server

    Galetti, D

    2000-01-01

    Summary: The discrete phase space approach to quantum mechanics of degrees of freedom without classical counterparts is applied to the many-fermions/quasi-spin Lipkin model. The Wigner function is written for some chosen states associated to discrete angle and angular momentum variables, and the time evolution is numerically calculated using the discrete von Neumann-Liouville equation. Direct evidences in the time evolution of the Wigner function are extracted that identify a tunnelling effect. A connection with an SU(2)-based semiclassical continuous approach to the Lipkin model is also presented.

  7. The constrained discrete-time state-dependent Riccati equation technique for uncertain nonlinear systems

    Science.gov (United States)

    Chang, Insu

    The objective of the thesis is to introduce a relatively general nonlinear controller/estimator synthesis framework using a special type of the state-dependent Riccati equation technique. The continuous time state-dependent Riccati equation (SDRE) technique is extended to discrete-time under input and state constraints, yielding constrained (C) discrete-time (D) SDRE, referred to as CD-SDRE. For the latter, stability analysis and calculation of a region of attraction are carried out. The derivation of the D-SDRE under state-dependent weights is provided. Stability of the D-SDRE feedback system is established using Lyapunov stability approach. Receding horizon strategy is used to take into account the constraints on D-SDRE controller. Stability condition of the CD-SDRE controller is analyzed by using a switched system. The use of CD-SDRE scheme in the presence of constraints is then systematically demonstrated by applying this scheme to problems of spacecraft formation orbit reconfiguration under limited performance on thrusters. Simulation results demonstrate the efficacy and reliability of the proposed CD-SDRE. The CD-SDRE technique is further investigated in a case where there are uncertainties in nonlinear systems to be controlled. First, the system stability under each of the controllers in the robust CD-SDRE technique is separately established. The stability of the closed-loop system under the robust CD-SDRE controller is then proven based on the stability of each control system comprising switching configuration. A high fidelity dynamical model of spacecraft attitude motion in 3-dimensional space is derived with a partially filled fuel tank, assumed to have the first fuel slosh mode. The proposed robust CD-SDRE controller is then applied to the spacecraft attitude control system to stabilize its motion in the presence of uncertainties characterized by the first fuel slosh mode. The performance of the robust CD-SDRE technique is discussed. Subsequently

  8. Time Discretization Techniques

    KAUST Repository

    Gottlieb, S.; Ketcheson, David I.

    2016-01-01

    The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include

  9. Bounded Model Checking and Inductive Verification of Hybrid Discrete-Continuous Systems

    DEFF Research Database (Denmark)

    Becker, Bernd; Behle, Markus; Eisenbrand, Fritz

    2004-01-01

    We present a concept to significantly advance the state of the art for bounded model checking (BMC) and inductive verification (IV) of hybrid discrete-continuous systems. Our approach combines the expertise of partners coming from different domains, like hybrid systems modeling and digital circuit verification.

  10. Discrete density of states

    International Nuclear Information System (INIS)

    Aydin, Alhun; Sisman, Altug

    2016-01-01

    By considering the quantum-mechanically minimum allowable energy interval, we exactly count number of states (NOS) and introduce discrete density of states (DOS) concept for a particle in a box for various dimensions. Expressions for bounded and unbounded continua are analytically recovered from discrete ones. Even though substantial fluctuations prevail in discrete DOS, they're almost completely flattened out after summation or integration operation. It's seen that relative errors of analytical expressions of bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.
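
    A quick numerical illustration of the state-counting idea for a particle in a 3-D box, in units where E = nx² + ny² + nz²: the exact NOS below an energy cutoff versus the leading smooth (Weyl-type) continuum estimate. The cutoffs are arbitrary.

```python
import numpy as np

def exact_nos(E_max):
    """Exact number of states with nx^2 + ny^2 + nz^2 <= E_max (nx, ny, nz >= 1)."""
    n = np.arange(1, int(np.sqrt(E_max)) + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    return int(np.count_nonzero(nx**2 + ny**2 + nz**2 <= E_max))

def continuum_nos(E_max):
    """Leading Weyl term: one octant of a sphere of radius sqrt(E_max)."""
    return (np.pi / 6.0) * E_max**1.5

for E in (10, 100, 1000, 10000):
    exact = exact_nos(E)
    approx = continuum_nos(E)
    print(E, exact, round(approx, 1),
          "relative error: %.3f" % (abs(approx - exact) / exact))
```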

  11. Decisive Markov Chains

    OpenAIRE

    Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard

    2007-01-01

    We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...

  12. Analysis of transtheoretical model of health behavioral changes in a nutrition intervention study--a continuous time Markov chain model with Bayesian approach.

    Science.gov (United States)

    Ma, Junsheng; Chan, Wenyaw; Tsai, Chu-Lin; Xiong, Momiao; Tilley, Barbara C

    2015-11-30

    Continuous time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model that often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention is given to the study of the relationships between a CTMC model and associated covariates under the framework of transtheoretical model. We developed a Bayesian approach to evaluate the covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected the transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantively less time in the precontemplation stage and were more/less likely to move from an unhealthy/healthy state to a healthy/unhealthy state. Copyright © 2015 John Wiley & Sons, Ltd.
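
    The two ingredients highlighted in the abstract, a CTMC generator with a log-linear link for covariate effects and interval transition probabilities obtained from the matrix exponential, can be sketched as follows. The four states, baseline rates and covariate coefficient are hypothetical and are not the fitted Next Step Trial model.

```python
# Sketch of a covariate-dependent CTMC: log-linear link on the transition
# rates and transition probabilities P(t) = expm(Q t).  States and
# coefficients are hypothetical, not the fitted stage-of-change model.
import numpy as np
from scipy.linalg import expm

n_states = 4                                   # e.g. stages of change
log_base_rate = np.log(0.3) * np.ones((n_states, n_states))
beta = 0.5 * np.ones((n_states, n_states))     # covariate effect (e.g. intervention)

def generator(z):
    """Generator matrix Q for covariate value z (log-linear link)."""
    Q = np.exp(log_base_rate + beta * z)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))        # rows sum to zero
    return Q

def transition_probs(z, t):
    return expm(generator(z) * t)

# probability of moving out of state 0 within t = 2 time units,
# control (z = 0) vs intervention (z = 1)
for z in (0.0, 1.0):
    P = transition_probs(z, 2.0)
    print(f"z={z:.0f}  P(state 0 -> others) = {P[0, 1:].round(3)}")
```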

  13. Continuous Markovian Logics

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Cardelli, Luca; Larsen, Kim Guldstrand

    2012-01-01

    Continuous Markovian Logic (CML) is a multimodal logic that expresses quantitative and qualitative properties of continuous-time labelled Markov processes with arbitrary (analytic) state-spaces, henceforth called continuous Markov processes (CMPs). The modalities of CML evaluate the rates...... of the exponentially distributed random variables that characterize the duration of the labeled transitions of a CMP. In this paper we present weak and strong complete axiomatizations for CML and prove a series of metaproperties, including the finite model property and the construction of canonical models. CML...... characterizes stochastic bisimilarity and it supports the definition of a quantified extension of the satisfiability relation that measures the "compatibility" between a model and a property. In this context, the metaproperties allow us to prove two robustness theorems for the logic stating that one can...

  14. On synchronized regions of discrete-time complex dynamical networks

    International Nuclear Information System (INIS)

    Duan Zhisheng; Chen Guanrong

    2011-01-01

    In this paper, the local synchronization of discrete-time complex networks is studied. First, it is shown that for any natural number n, there exists a discrete-time network which has at least ⌊n/2⌋ + 1 disconnected synchronized regions for local synchronization, which implies the possibility of intermittent synchronization behaviors. Different from the continuous-time networks, the existence of an unbounded synchronized region is impossible for discrete-time networks. The convexity of the synchronized regions is also characterized based on the stability of a class of matrix pencils, which is useful for enlarging the stability region so as to improve the network synchronizability.

  15. The existence and global attractivity of almost periodic sequence solution of discrete-time neural networks

    International Nuclear Information System (INIS)

    Huang Zhenkun; Wang Xinghua; Gao Feng

    2006-01-01

    In this Letter, we discuss a discrete-time analogue of a continuous-time cellular neural network. Sufficient conditions are obtained for the existence of a unique almost periodic sequence solution which is globally attractive. Our results demonstrate that the formulated discrete-time analogue preserves the dynamics of the continuous-time cellular neural network as a mathematical model in the almost periodic case. Finally, a computer simulation illustrates the suitability of our discrete-time analogue as a numerical algorithm for simulating the continuous-time cellular neural network conveniently

  16. Discrete density of states

    Energy Technology Data Exchange (ETDEWEB)

    Aydin, Alhun; Sisman, Altug, E-mail: sismanal@itu.edu.tr

    2016-03-22

    By considering the quantum-mechanically minimum allowable energy interval, we exactly count the number of states (NOS) and introduce a discrete density of states (DOS) concept for a particle in a box in various dimensions. Expressions for bounded and unbounded continua are analytically recovered from the discrete ones. Even though substantial fluctuations prevail in the discrete DOS, they are almost completely flattened out after a summation or integration operation. It is seen that the relative errors of the analytical expressions for bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.

  17. From the continuous PV to discrete Painleve equations

    International Nuclear Information System (INIS)

    Tokihiro, T.; Grammaticos, B.; Ramani, A.

    2002-01-01

    We study the discrete transformations that are associated with the auto-Bäcklund transformation of the (continuous) P_V equation. We show that several two-parameter discrete Painlevé equations can be obtained as contiguity relations of P_V. Among them we find the asymmetric d-P_II equation, which is a well-known form of discrete P_III. The relation between the ternary P_I (previously obtained through the discrete dressing approach) and P_V is also established. A new discrete Painlevé equation is also derived. (author)

  18. Discrete-Time Filter Synthesis using Product of Gegenbauer Polynomials

    OpenAIRE

    N. Stojanovic; N. Stamenkovic; I. Krstic

    2016-01-01

    A new approximation to design continuous-time and discrete-time low-pass filters, presented in this paper, based on the product of Gegenbauer polynomials, provides the ability of more flexible adjustment of passband and stopband responses. The design is achieved taking into account a prescribed specification, leading to a better trade-off among the magnitude and group delay responses. Many well-known continuous-time and discrete-time transitional filters based on the classical polynomial approx...

  19. Single-crossover recombination in discrete time.

    Science.gov (United States)

    von Wangenheim, Ute; Baake, Ellen; Baake, Michael

    2010-05-01

    Modelling the process of recombination leads to a large coupled nonlinear dynamical system. Here, we consider a particular case of recombination in discrete time, allowing only for single crossovers. While the analogous dynamics in continuous time admits a closed solution (Baake and Baake in Can J Math 55:3-41, 2003), this no longer works for discrete time. A more general model (i.e. without the restriction to single crossovers) has been studied before (Bennett in Ann Hum Genet 18:311-317, 1954; Dawson in Theor Popul Biol 58:1-20, 2000; Linear Algebra Appl 348:115-137, 2002) and was solved algorithmically by means of Haldane linearisation. Using the special formalism introduced by Baake and Baake (Can J Math 55:3-41, 2003), we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. We then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. Still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.

  20. Process algebra with timing : real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.; Bergstra, J.A.; Ponse, A.J.; Smolka, S.A.

    2001-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting-point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  1. Process algebra with timing: Real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.

    1999-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  2. Asymptotic evolution of quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Novotny, Jaroslav [FNSPE, CTU in Prague, 115 19 Praha 1 - Stare Mesto (Czech Republic); Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)

    2012-07-01

    The iterated quantum operations, so called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium, or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks or they can describe different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate there is always a vector subspace (typically low-dimensional) of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points or asymptotic evolution of random quantum operations.

  3. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    Science.gov (United States)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and that composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).

  4. First Passage Moments of Finite-State Semi-Markov Processes

    Energy Technology Data Exchange (ETDEWEB)

    Warr, Richard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cordeiro, James [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States)

    2014-03-31

    In this paper, we discuss the computation of first-passage moments of a regular time-homogeneous semi-Markov process (SMP) with a finite state space to certain of its states that possess the property of universal accessibility (UA). A UA state is one which is accessible from any other state of the SMP, but which may or may not connect back to one or more other states. An important characteristic of UA is that it is the state-level version of the oft-invoked process-level property of irreducibility. We adapt existing results for irreducible SMPs to the derivation of an analytical matrix expression for the first passage moments to a single UA state of the SMP. In addition, consistent point estimators for these first passage moments, together with relevant R code, are provided.
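
    For the special case of an ordinary finite-state CTMC (rather than the general semi-Markov process treated in the paper), mean first passage times to a single target state already follow from one linear solve: with the target row and column removed from the generator Q, they satisfy Q_sub m = -1. The 4-state generator below is an arbitrary example, not the paper's SMP.

```python
# Mean first passage times to a single target state of a finite CTMC,
# obtained by solving  Q_sub m = -1  on the non-target states.  The
# 4-state generator below is an arbitrary illustrative example.
import numpy as np

Q = np.array([[-0.9,  0.5,  0.3,  0.1],
              [ 0.2, -0.7,  0.4,  0.1],
              [ 0.1,  0.2, -0.6,  0.3],
              [ 0.2,  0.1,  0.2, -0.5]])   # rates out of the target row do not affect hitting times

target = 3
others = [i for i in range(Q.shape[0]) if i != target]
Q_sub = Q[np.ix_(others, others)]

m = np.linalg.solve(Q_sub, -np.ones(len(others)))
for i, mi in zip(others, m):
    print(f"mean first passage time {i} -> {target}: {mi:.3f}")
```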

  5. Discrete-time control system design with applications

    CERN Document Server

    Rabbath, C A

    2014-01-01

    This book presents practical techniques of discrete-time control system design. In general, the design techniques lead to low-order dynamic compensators that ensure satisfactory closed-loop performance for a wide range of sampling rates. The theory is given in the form of theorems, lemmas, and propositions. The design of the control systems is presented as step-by-step procedures and algorithms. The proposed feedback control schemes are applied to well-known dynamic system models. This book also discusses: Closed-loop performance of generic models of mobile robot and airborne pursuer dynamic systems under discrete-time feedback control with limited computing capabilities Concepts of discrete-time models and sampled-data models of continuous-time systems, for both single- and dual-rate operation Local versus global digital redesign Optimal, closed-loop digital redesign methods Plant input mapping design Generalized holds and samplers for use in feedback control loops, Numerical simulation of fixed-point arithm...

  6. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qingda, E-mail: weiqd@hqu.edu.cn [Huaqiao University, School of Economics and Finance (China); Chen, Xian, E-mail: chenxian@amss.ac.cn [Peking University, School of Mathematical Sciences (China)

    2016-10-15

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with the randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under the suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then by constructing the approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  7. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    International Nuclear Information System (INIS)

    Wei, Qingda; Chen, Xian

    2016-01-01

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with the randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under the suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then by constructing the approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  8. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    Science.gov (United States)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
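
    A minimal Monte Carlo sketch of the semi-Markov idea described above: an embedded transition matrix picks the next state, while the sojourn time in each state is drawn from a state-dependent, non-exponential (here Weibull) distribution. The three-state model, the choice of a "failure" state and all parameters are invented and are not the four-element model of the study.

```python
# Monte Carlo simulation of a small semi-Markov process: an embedded
# transition matrix picks the next state and a state-dependent Weibull
# sojourn time is drawn.  Model and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.0, 0.7, 0.3],    # embedded (jump) chain
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
weib_shape = [1.5, 0.8, 2.0]      # sojourn-time Weibull shapes per state
weib_scale = [10.0, 5.0, 2.0]     # ... and scales (arbitrary time units)

FAILED = 2                         # treat state 2 as "failure" for this demo

def time_to_failure(start=0):
    t, s = 0.0, start
    while s != FAILED:
        t += weib_scale[s] * rng.weibull(weib_shape[s])   # sojourn in state s
        s = rng.choice(3, p=P[s])                         # jump to next state
    return t

samples = [time_to_failure() for _ in range(10_000)]
print(f"mean time to failure ~ {np.mean(samples):.2f}, std ~ {np.std(samples):.2f}")
```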

  9. Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Feten Gannouni

    2017-01-01

    We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional integral filter (RPIF), having an error variance with an optimized guaranteed upper bound for any allowed uncertainty, is proposed to improve the robust estimation of unknown time-varying faults and to improve robustness against uncertainties. In this study, the minimization problem of the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMI) for all admissible uncertainties. The proportional and the integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given in order to illustrate the performance of the proposed filter, in particular to solve the problem of joint fault and state estimation.

  10. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    Science.gov (United States)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.

  11. Linear discrete-time state space realization of a modified quadruple tank system with state estimation using Kalman filter

    DEFF Research Database (Denmark)

    Mohd. Azam, Sazuan Nazrah

    2017-01-01

    In this paper, we used the modified quadruple tank system that represents a multi-input-multi-output (MIMO) system as an example to present the realization of a linear discrete-time state space model and to obtain the state estimation using a Kalman filter in a methodical manner. First, an existing...... part of the Kalman filter is used to estimate the current state, based on the model and the measurements. The static and dynamic Kalman filters are compared and all results are demonstrated through simulations.
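
    A generic sketch of the estimation step described above (not the quadruple-tank model or the static/dynamic variants compared in the paper): a standard discrete-time Kalman filter predict/update loop on an assumed two-state linear system with made-up noise covariances.

```python
# Standard discrete-time Kalman filter predict/update loop on an assumed
# two-state linear system; matrices are illustrative, not the quadruple-tank
# model of the paper.
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Qw = 1e-3 * np.eye(2)      # process noise covariance
Rv = 1e-2 * np.eye(1)      # measurement noise covariance

x_true = np.array([0.0, 1.0])
x_hat = np.zeros(2)
P = np.eye(2)

for k in range(200):
    u = np.array([np.sin(0.05 * k)])
    # simulate the plant
    x_true = A @ x_true + B @ u + rng.multivariate_normal(np.zeros(2), Qw)
    y = C @ x_true + rng.normal(0.0, np.sqrt(Rv[0, 0]), size=1)
    # predict
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Qw
    # update
    S = C @ P @ C.T + Rv
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(2) - K @ C) @ P

print("estimation error:", x_true - x_hat)
```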

  12. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    Science.gov (United States)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.

  13. Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems

    Science.gov (United States)

    Mabuza, Sibusiso; Shadid, John N.; Kuzmin, Dmitri

    2018-05-01

    The objective of this paper is to present a local bounds preserving stabilized finite element scheme for hyperbolic systems on unstructured meshes based on continuous Galerkin (CG) discretization in space. A CG semi-discrete scheme with low order artificial dissipation that satisfies the local extremum diminishing (LED) condition for systems is used to discretize a system of conservation equations in space. The low order artificial diffusion is based on approximate Riemann solvers for hyperbolic conservation laws. In this case we consider both Rusanov and Roe artificial diffusion operators. In the Rusanov case, two designs are considered, a nodal based diffusion operator and a local projection stabilization operator. The result is a discretization that is LED and has first order convergence behavior. To achieve high resolution, limited antidiffusion is added back to the semi-discrete form where the limiter is constructed from a linearity preserving local projection stabilization operator. The procedure follows the algebraic flux correction procedure usually used in flux corrected transport algorithms. To further deal with phase errors (or terracing) common in FCT type methods, high order background dissipation is added to the antidiffusive correction. The resulting stabilized semi-discrete scheme can be discretized in time using a wide variety of time integrators. Numerical examples involving nonlinear scalar Burgers equation, and several shock hydrodynamics simulations for the Euler system are considered to demonstrate the performance of the method. For time discretization, Crank-Nicolson scheme and backward Euler scheme are utilized.

  14. Discrete coherent and squeezed states of many-qudit systems

    International Nuclear Information System (INIS)

    Klimov, Andrei B.; Munoz, Carlos; Sanchez-Soto, Luis L.

    2009-01-01

    We consider the phase space for n identical qudits (each one of dimension d, with d a prime number) as a grid of d^n × d^n points and use the finite Galois field GF(d^n) to label the corresponding axes. The associated displacement operators permit one to define s-parametrized quasidistributions on this grid, with properties analogous to their continuous counterparts. These displacements also allow for the construction of finite coherent states, once a fiducial state is fixed. We take this reference as one eigenstate of the discrete Fourier transform and study the factorization properties of the resulting coherent states. We extend these ideas to include discrete squeezed states, and show their intriguing relation with entangled states of different qudits.

  15. Multi-rate h2 tracking control with mixed continuous-discrete performance criteria

    International Nuclear Information System (INIS)

    Kahane, A.C.; Palmor, Z.J.; Mirkin, L.

    1998-01-01

    Control goals defined both in continuous and discrete time arise naturally in many sampled-data tracking control problems. The design methods found in the literature deal with each kind of those control goals separately, over-emphasizing one kind at the expense of the other. We formulate and solve these tracking control problems as an H2 optimization problem with a mixed continuous/discrete performance criterion. It is argued that the proposed setup enables tradeoff between the various control goals in a natural manner and thus leads to better tracking characteristics

  16. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

    In the past, probabilistic model checking has mostly been restricted to finite state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special

  17. Time dependence linear transport III convergence of the discrete ordinate method

    International Nuclear Information System (INIS)

    Wilson, D.G.

    1983-01-01

    In this paper the uniform pointwise convergence of the discrete ordinate method for weak and strong solutions of the time dependent, linear transport equation posed in a multidimensional, rectangular parallelepiped with partially reflecting walls is established. The first result is that a sequence of discrete ordinate solutions converges uniformly on the quadrature points to a solution of the continuous problem provided that the corresponding sequence of truncation errors for the solution of the continuous problem converges to zero in the same manner. The second result is that continuity of the solution with respect to the velocity variables guarantees that the truncation errors in the quadrature formula go to zero and hence that the discrete ordinate approximations converge to the solution of the continuous problem as the discrete ordinates become dense. An existence theory for strong solutions of the continuous problem follows as a result

  18. Stochastic demand patterns for Markov service facilities with neutral and active periods

    International Nuclear Information System (INIS)

    Csenki, Attila

    2009-01-01

    In an earlier paper, a closed form expression was obtained for the joint interval reliability of a Markov system with a partitioned state space S = U ∪ D, i.e. for the probability that the system will reside in the set of up states U throughout the union of some specific disjoint time intervals I_l = [θ_l, θ_l + ζ_l], l = 1,...,k. The deterministic time intervals I_l formed a demand pattern specifying the desired active periods. In the present paper, we admit stochastic demand patterns by assuming that the lengths of the active periods, ζ_l, as well as the lengths of the neutral periods, θ_l − (θ_{l-1} + ζ_{l-1}), are random. We explore two mechanisms for modelling random demand: (1) by alternating renewal processes; (2) by sojourn times of some continuous time Markov chain with a partitioned state space. The first construction results in an expression in terms of a revised version of the moment generating functions of the sojourns of the alternating renewal process. The second construction involves the probability that a Markov chain follows certain patterns of visits to some groups of states and yields an expression using Kronecker matrix operations. The model of a small computer system is analysed to exemplify the ideas

  19. A discrete single server queue with Markovian arrivals and phase type group services

    Directory of Open Access Journals (Sweden)

    Attahiru Sule Alfa

    1995-01-01

    We consider a single-server discrete queueing system in which arrivals occur according to a Markovian arrival process. Service is provided in groups of size no more than M customers. The service times are assumed to follow a discrete phase type distribution, whose representation may depend on the group size. Under a probabilistic service rule, which depends on the number of customers waiting in the queue, this system is studied as a Markov process. This type of queueing system is encountered in the operations of an automatic storage retrieval system. The steady-state probability vector is shown to be of (modified) matrix-geometric type. Efficient algorithmic procedures for the computation of the rate matrix, steady-state probability vector, and some important system performance measures are developed. The steady-state waiting time distribution is derived explicitly. Some numerical examples are presented.
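
    The "(modified) matrix-geometric" structure mentioned above can be illustrated with the textbook fixed-point iteration for the rate matrix R of a discrete-time quasi-birth-death chain, R = A0 + R A1 + R^2 A2. The 2x2 blocks below are invented (their sum is stochastic with downward drift) and only demonstrate the computation, not the MAP/PH group-service queue of the paper.

```python
# Fixed-point iteration for the rate matrix R of a discrete-time QBD chain,
# R = A0 + R A1 + R^2 A2, the basic ingredient of matrix-geometric solutions.
# The 2x2 blocks are invented (A0 + A1 + A2 is stochastic and the drift is
# downward); they are not the queueing model of the paper.
import numpy as np

A0 = np.array([[0.2, 0.1], [0.1, 0.2]])   # level up
A1 = np.array([[0.2, 0.1], [0.1, 0.2]])   # same level
A2 = np.array([[0.3, 0.1], [0.2, 0.2]])   # level down

R = np.zeros_like(A0)
for _ in range(1000):
    R_next = A0 + R @ A1 + R @ R @ A2
    if np.max(np.abs(R_next - R)) < 1e-12:
        R = R_next
        break
    R = R_next

print("rate matrix R:\n", R.round(6))
print("spectral radius:", max(abs(np.linalg.eigvals(R))))   # < 1 for a stable queue
```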

  20. A transition-constrained discrete hidden Markov model for automatic sleep staging

    Directory of Open Access Journals (Sweden)

    Pan Shing-Tai

    2012-08-01

    Background Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to the Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable. Method The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectrum analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross-validation was performed during this experiment. Results Overall agreement between the expert and the results presented is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurate stage was SWS (94.9%), and the least-accurately classified stage was S1 ( Conclusion The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
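
    A transition-constrained discrete HMM can be sketched by zeroing disallowed stage-to-stage transitions in the transition matrix and decoding with the Viterbi algorithm. The three toy stages, the constraint pattern and the small observation alphabet below are assumptions for illustration, not the trained sleep-staging DHMM of the paper.

```python
# Viterbi decoding for a discrete HMM with constrained transitions (disallowed
# stage-to-stage moves get probability 0).  Three toy stages and a small
# observation alphabet stand in for the paper's trained sleep-staging DHMM.
import numpy as np

states = ["Wake", "Light", "Deep"]
# constrained transitions: e.g. Wake cannot jump directly to Deep
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],      # emission probabilities for 3 discrete symbols
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([1.0, 0.0, 0.0])      # recordings assumed to start awake

def viterbi(obs):
    n, T = len(states), len(obs)
    logd = np.full((T, n), -np.inf)
    back = np.zeros((T, n), dtype=int)
    with np.errstate(divide="ignore"):          # log(0) = -inf encodes constraints
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    logd[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = logd[t - 1][:, None] + logA    # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(logd[-1].argmax())]
    for t in range(T - 1, 0, -1):               # backtrack
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

obs = [0, 0, 1, 1, 2, 2, 2, 1, 0]
print(viterbi(obs))
```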

  1. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    Directory of Open Access Journals (Sweden)

    Silvia de Haan-Rietdijk

    2017-10-01

    The Experience Sampling Method (ESM) is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
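
    For a first-order process the DT/CT distinction in the abstract reduces to a simple relation: if the underlying dynamics are an Ornstein-Uhlenbeck process with drift parameter beta < 0, the implied discrete-time AR coefficient over a gap of length dt is exp(beta*dt), so unequal intervals imply unequal autoregressive coefficients. The simulation below, with assumed parameters and an ESM-like beep schedule, shows the kind of distortion a naive equal-interval AR(1) fit incurs.

```python
# Toy illustration of why unequally spaced ESM data distort a naive DT AR(1)
# fit: for an Ornstein-Uhlenbeck process the implied AR coefficient over a
# gap dt is exp(beta*dt), so it differs per interval.  Parameters are assumed.
import numpy as np

rng = np.random.default_rng(2)
beta, sigma = -0.5, 1.0                       # CT drift and diffusion

# ESM-like design: 5 beeps/day with jittered gaps (hours), plus a night gap
gaps = []
for day in range(200):
    gaps += list(rng.uniform(1.5, 4.5, size=4)) + [14.0]   # 14 h overnight gap
gaps = np.array(gaps)

# simulate the OU process exactly at the measurement times
x = [0.0]
for dt in gaps:
    phi = np.exp(beta * dt)
    innov_sd = sigma * np.sqrt((1 - phi**2) / (-2 * beta))
    x.append(phi * x[-1] + innov_sd * rng.normal())
x = np.array(x)

# naive DT AR(1): treat every successive pair as one "lag", ignoring dt
phi_naive = np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)
print("implied phi for a 2 h gap :", np.exp(beta * 2.0).round(3))
print("implied phi for the night :", np.exp(beta * 14.0).round(3))
print("naive equal-interval phi  :", phi_naive.round(3))
```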

  2. Markov Trends in Macroeconomic Time Series

    NARCIS (Netherlands)

    R. Paap (Richard)

    1997-01-01

    Many macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the

  3. Utilization of two web-based continuing education courses evaluated by Markov chain model.

    Science.gov (United States)

    Tian, Hao; Lin, Jin-Mann S; Reeves, William C

    2012-01-01

    To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists.

  4. Synthesis of the Markov model of the thermochemical degradation of a polymer in solution

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2017-01-01

    The paper deals with the problem of mathematical modeling of the thermochemical destruction process. The apparatus of Markov chains is used to synthesize a mathematical model. The authors of the study suggest considering the destruction process as a random one in which the system state changes, the state being characterized by the proportion of macromolecules in each fraction of the molecular-weight distribution. The intensities of transitions from one state to another characterize the corresponding rates of the destruction processes for each fraction of the molecular-weight distribution. The processes of crosslinking and polymerization were neglected in this work, and it was accepted that there is a probability of transition from any state with a lower index (corresponding to fractions with higher molecular weights) to any state with a higher index (corresponding to fractions with lower molecular weights). A Markov chain with discrete states and continuous time was taken as the basis of the mathematical model. The interactive graphical simulation environment MathWorks Simulink was used as the simulation environment. Experimental studies of polybutadiene destruction in solution were carried out to evaluate the mathematical model parameters. The GPC (gel-permeation chromatography) data of the polybutadiene solution were used as the initial (starting) data for estimating the polymer WMD (molecular-weight distribution). The mean-square deviation of the calculated data from the experimental data for each fraction and at specified times was minimized in the numerical search for parameter values. Comparison of the experimental data with the data calculated from the mathematical model showed an average error of about 5%, which indicates an acceptable error in estimating how the polymer fraction proportions change during destruction for the process and conditions under consideration.
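
    A minimal sketch of the kind of model described: each state holds the proportion of macromolecules in one molecular-weight fraction, scission moves mass only toward lighter fractions (upper-triangular rates), and the proportions evolve by the Kolmogorov forward equations dp/dt = pQ. All rates and the initial distribution are invented, not fitted to the paper's GPC data.

```python
# Sketch of a degradation CTMC on molecular-weight fractions: transitions go
# only from heavier to lighter fractions (upper-triangular rates) and the
# fraction proportions follow dp/dt = p Q.  Rates are invented, not fitted
# to the polybutadiene data of the paper.
import numpy as np
from scipy.integrate import solve_ivp

n = 5                                   # number of MWD fractions, heavy -> light
Q = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        Q[i, j] = 0.05 / (j - i)        # fastest transfer to the nearest lighter fraction
    Q[i, i] = -Q[i].sum()

p0 = np.array([0.4, 0.3, 0.2, 0.08, 0.02])   # initial MWD proportions

sol = solve_ivp(lambda t, p: p @ Q, (0.0, 50.0), p0, t_eval=[0, 10, 25, 50])
for t, p in zip(sol.t, sol.y.T):
    print(f"t={t:4.0f}  fractions: {p.round(3)}  (sum={p.sum():.3f})")
```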

  5. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    Science.gov (United States)

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.

  6. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown... normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...

  7. Projected metastable Markov processes and their estimation with observable operator models

    International Nuclear Information System (INIS)

    Wu, Hao; Prinz, Jan-Hendrik; Noé, Frank

    2015-01-01

    The determination of kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem — both in simulation, where the optimal reaction coordinate(s) are generally unknown and are difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on the OOM learning

  8. Distribution of return point memory states for systems with stochastic inputs

    International Nuclear Information System (INIS)

    Amann, A; Brokate, M; Rachinskii, D; Temnov, G

    2011-01-01

    We consider the long term effect of stochastic inputs on the state of an open loop system which exhibits the so-called return point memory. An example of such a system is the Preisach model; more generally, systems with the Preisach type input-state relationship, such as in spin-interaction models, are considered. We focus on the characterisation of the expected memory configuration after the system has been affected by the input for a sufficiently long period of time. In the case where the input is given by a discrete time random walk process, or the Wiener process, simple closed form expressions for the probability density of the vector of the main input extrema recorded by the memory state, and scaling laws for the dimension of this vector, are derived. If the input is given by a general continuous Markov process, we show that the distribution of previous memory elements can be obtained from a Markov chain scheme which is derived from the solution of an associated one-dimensional escape type problem. Formulas for transition probabilities defining this Markov chain scheme are presented. Moreover, explicit formulas for the conditional probability densities of previous main extrema are obtained for the Ornstein-Uhlenbeck input process. The analytical results are confirmed by numerical experiments.

  9. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates to both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  10. State space orderings for Gauss-Seidel in Markov chains revisited

    Energy Technology Data Exchange (ETDEWEB)

    Dayar, T. [Bilkent Univ., Ankara (Turkey)

    1996-12-31

    Symmetric state space orderings of a Markov chain may be used to reduce the magnitude of the subdominant eigenvalue of the (Gauss-Seidel) iteration matrix. Orderings that maximize the elemental mass or the number of nonzero elements in the dominant term of the Gauss-Seidel splitting (that is, the term approximating the coefficient matrix) do not necessarily converge faster. An ordering of a Markov chain that satisfies Property-R is semi-convergent. On the other hand, there are semi-convergent symmetric state space orderings that do not satisfy Property-R. For a given ordering, a simple approach for checking Property-R is shown. An algorithm that orders the states of a Markov chain so as to increase the likelihood of satisfying Property-R is presented. The computational complexity of the ordering algorithm is less than that of a single Gauss-Seidel iteration (for sparse matrices). In doing all this, the aim is to gain an insight for faster converging orderings. Results from a variety of applications improve the confidence in the algorithm.

  11. Markov modeling and discrete event simulation in health care: a systematic comparison.

    Science.gov (United States)

    Standfield, Lachlan; Comans, Tracy; Scuffham, Paul

    2014-04-01

    The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.

  12. Description of quantum-mechanical motion by using the formalism of non-Markov stochastic process

    International Nuclear Information System (INIS)

    Skorobogatov, G.A.; Svertilov, S.I.

    1999-01-01

    The principal possibilities of mathematical modeling of quantum mechanical motion by the theory of real stochastic processes are considered. The set of equations corresponding to the simplest case of a two-level system undergoing transitions under the influence of an electromagnetic field is obtained. It is shown that quantum-mechanical processes are purely discrete processes of non-Markovian type. They are continuous processes in the space of probability amplitudes and possess the properties of quantum Markovity. The formulation of quantum mechanics in terms of the theory of stochastic processes is necessary for its generalization to small space-time intervals

  13. Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time

    KAUST Repository

    Kelly, D. T B

    2014-09-22

    The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a useable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so for small ensemble size. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation well-posedness of the filter is established; with variance inflation accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and assumptions about it, is sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier-Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier-Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise.
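
    A minimal sketch of the perturbed-observation analysis step analysed in the paper, including a multiplicative variance inflation factor (set it to 1.0 to switch inflation off); the toy dynamics, noise levels and ensemble size are assumptions, not the Lorenz '63/'96 or Navier-Stokes setups of the analysis.

```python
# Perturbed-observation EnKF: forecast each member, perturb the observation,
# and update with the ensemble-estimated Kalman gain.  The toy model, noise
# levels and inflation factor are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)

def model(x):                       # toy nonlinear dynamics (not Lorenz '63/'96)
    return x + 0.1 * np.sin(x)

H = np.eye(2)                       # complete observation, as in the paper's analysis
R = 0.25 * np.eye(2)                # observation noise covariance
inflation = 1.05                    # multiplicative variance inflation

N = 20                              # ensemble size
ens = rng.normal(size=(N, 2))       # initial ensemble
truth = np.array([1.0, -1.0])

for step in range(50):
    truth = model(truth)
    y = H @ truth + rng.multivariate_normal(np.zeros(2), R)

    ens = np.array([model(x) for x in ens])                  # forecast
    mean = ens.mean(axis=0)
    ens = mean + inflation * (ens - mean)                    # variance inflation
    A = ens - ens.mean(axis=0)
    C = A.T @ A / (N - 1)                                    # ensemble covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)             # Kalman gain

    y_pert = y + rng.multivariate_normal(np.zeros(2), R, size=N)
    ens = ens + (y_pert - ens @ H.T) @ K.T                   # analysis update

print("truth   :", truth.round(3))
print("estimate:", ens.mean(axis=0).round(3))
```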

  14. Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time

    International Nuclear Information System (INIS)

    Kelly, D T B; Stuart, A M; Law, K J H

    2014-01-01

    The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a useable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so for small ensemble size. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation well-posedness of the filter is established; with variance inflation accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and assumptions about it, is sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier–Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier–Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise. (paper)

  15. Discrete-State and Continuous Models of Recognition Memory: Testing Core Properties under Minimal Assumptions

    Science.gov (United States)

    Kellen, David; Klauer, Karl Christoph

    2014-01-01

    A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on…

  16. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    Hanson, J.D.; Cary, J.R.; Meiss, J.D.

    1985-01-01

    A continuous-time Markov chain is used to model motion in the neighborhood of a critical invariant circle for a Hamiltonian map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. For the case of a noble frequency, the chain is self-similar and the nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to the critical noble circle, the decay proceeds as t^{-4.05}

  17. Discrete time population dynamics of a two-stage species with recruitment and capture

    International Nuclear Information System (INIS)

    Ladino, Lilia M.; Mammana, Cristiana; Michetti, Elisabetta; Valverde, Jose C.

    2016-01-01

    This work models and analyzes the dynamics of a two-stage species with recruitment and capture factors. It arises from the discretization of a previous model developed by Ladino and Valverde (2013), which represents progress in the knowledge of the dynamics of exploited populations. Although the methods used here are related to the study of discrete-time systems and are different from those related to the continuous version, the results are similar in both the discrete and the continuous case, which confirms the soundness of the factors selected to design the model. Unlike in the continuous-time case, in the discrete-time one some (non-negative) parametric constraints are derived from the biological significance of the model and become fundamental for the proofs of such results. Finally, numerical simulations show different scenarios of dynamics related to the analytical results, which confirm the validity of the model.

  18. Discretization of space and time in wave mechanics: the validity limit

    OpenAIRE

    Roatta , Luca

    2017-01-01

    Assuming that space and time can only have discrete values, it is shown that wave mechanics must necessarily have a specific applicability limit: in a discrete context, unlike in a continuous one, frequencies cannot have arbitrarily high values.

  19. The discretized Schroedinger equation for the finite square well and its relationship to solid-state physics

    International Nuclear Information System (INIS)

    Boykin, Timothy B; Klimeck, Gerhard

    2005-01-01

    The discretized Schroedinger equation is most often used to solve one-dimensional quantum mechanics problems numerically. While it has been recognized for some time that this equation is equivalent to a simple tight-binding model and that the discretization imposes an underlying bandstructure unlike free-space quantum mechanics on the problem, the physical implications of this equivalence largely have been unappreciated and the pedagogical advantages accruing from presenting the problem as one of solid-state physics (and not numerics) remain generally unexplored. This is especially true for the analytically solvable discretized finite square well presented here. There are profound differences in the physics of this model and its continuous-space counterpart which are direct consequences of the imposed bandstructure. For example, in the discrete model the number of bound states plus transmission resonances equals the number of atoms in the quantum well

  20. Discrete-Time Filter Synthesis using Product of Gegenbauer Polynomials

    Directory of Open Access Journals (Sweden)

    N. Stojanovic

    2016-09-01

    Full Text Available A new approximation for the design of continuous-time and discrete-time low-pass filters, presented in this paper, based on the product of Gegenbauer polynomials, provides the ability of more flexible adjustment of passband and stopband responses. The design is achieved taking into account a prescribed specification, leading to a better trade-off among the magnitude and group delay responses. Many well-known continuous-time and discrete-time transitional filters based on the classical polynomial approximations (Chebyshev, Legendre, Butterworth) are shown to be special cases of the proposed approximation method.

  1. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts

    Science.gov (United States)

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
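
    To make the state-removal idea above concrete, the following minimal sketch eliminates a single fast state from a toy CTMC generator using the standard stochastic-complement formula; the three-state generator and the small parameter eps are invented for illustration and are not taken from the paper.

        # Sketch: eliminate a fast state from a CTMC generator via the stochastic
        # complement Q_eff = Q_SS + Q_SF (-Q_FF)^{-1} Q_FS.  Toy generator only.
        import numpy as np

        eps = 1e-3                      # small parameter: state 2 leaves quickly
        Q = np.array([[-1.0,      1.0,      0.0],
                      [ 0.0,     -2.0,      2.0],
                      [ 0.5/eps,  0.5/eps, -1.0/eps]])   # state 2 is "fast"

        slow, fast = [0, 1], [2]
        Qss = Q[np.ix_(slow, slow)]
        Qsf = Q[np.ix_(slow, fast)]
        Qfs = Q[np.ix_(fast, slow)]
        Qff = Q[np.ix_(fast, fast)]

        # Effective generator on the slow states: direct transitions plus indirect
        # transitions routed through the eliminated fast state(s).
        Q_eff = Qss + Qsf @ np.linalg.solve(-Qff, Qfs)
        print(Q_eff)                    # rows still sum to ~0, as a generator should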

  2. Direct output feedback control of discrete-time systems

    International Nuclear Information System (INIS)

    Lin, C.C.; Chung, L.L.; Lu, K.H.

    1993-01-01

    An optimal direct output feedback control algorithm is developed for discrete-time systems with the consideration of time delay in control force action. Optimal constant output feedback gains are obtained through a variational process such that a certain prescribed quadratic performance index is minimized. Discrete-time control forces are then calculated from the multiplication of output measurements by these pre-calculated feedback gains. According to the proposed algorithm, the structural system is assured to remain stable even in the presence of time delay. The number of sensors and controllers may be very small as compared with the dimension of states. Numerical results show that direct velocity feedback control is more sensitive to time delay than state feedback but is still quite effective in reducing the dynamic responses under earthquake excitation. (author)

  3. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    Science.gov (United States)

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  4. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    Science.gov (United States)

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
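
    As a concrete illustration of the "maximum flow-minimum cut" idea, the sketch below solves a two-site binary labeling problem with networkx; the edge capacities are invented unary and pairwise energy terms, not the protein titration energies used by the authors.

        # Toy sketch of "minimum cut = minimum energy" for two binary sites with a
        # submodular interaction term, in the spirit of graph-cut optimization.
        import networkx as nx

        G = nx.DiGraph()
        # Unary terms: cost of assigning each site to the "source" or "sink" label.
        G.add_edge("s", "x1", capacity=2.0);  G.add_edge("x1", "t", capacity=5.0)
        G.add_edge("s", "x2", capacity=9.0);  G.add_edge("x2", "t", capacity=4.0)
        # Pairwise interaction between the two sites (penalty for differing labels).
        G.add_edge("x1", "x2", capacity=3.0); G.add_edge("x2", "x1", capacity=3.0)

        cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
        print("minimum energy:", cut_value)
        print("label 0 (source side):", source_side - {"s"})
        print("label 1 (sink side):  ", sink_side - {"t"})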

  5. Markov chain model helps predict pitting corrosion depth and rate in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F.; Velazquez, J.C.; Hallen, J. M. [ESIQIE, Instituto Politecnico Nacional, Mexico D. F. (Mexico); Esquivel-Amezcua, A. [PEMEX PEP Region Sur, Villahermosa, Tabasco (Mexico); Valor, A. [Universidad de la Habana, Vedado, La Habana (Cuba)

    2010-07-01

    Recent reports place pipeline corrosion costs in North America at seven billion dollars per year. Pitting corrosion causes the highest percentage of failures among corrosion mechanisms. This has motivated multiple modelling studies focused on pitting corrosion of underground pipelines. In this study, a continuous-time, non-homogeneous pure birth Markov chain serves to model external pitting corrosion in buried pipelines. The analytical solution of Kolmogorov's forward equations for this type of Markov process gives the transition probability function in a discrete space of pit depths. The transition probability function can be completely identified by making a correlation between the stochastic pit depth mean and the deterministic mean obtained experimentally. The model proposed in this study can be applied to pitting corrosion data from repeated in-line pipeline inspections. Case studies presented in this work show how pipeline inspection and maintenance planning can be improved by using the proposed Markovian model for pitting corrosion.
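
    For readers who want to experiment with this type of model, the sketch below integrates the Kolmogorov forward equations of a non-homogeneous pure-birth chain on a truncated set of pit-depth states; the rate law and parameter values are hypothetical placeholders, not the calibrated rates of the study.

        # Forward equations of a non-homogeneous pure-birth chain on states 0..N,
        # with a made-up rate law lambda_n(t) = (n + 1) * kappa * exp(-rho * t).
        import numpy as np
        from scipy.integrate import solve_ivp

        N = 30                      # truncation level of the depth states 0..N
        kappa, rho = 0.8, 0.15      # hypothetical rate parameters

        def lam(n, t):
            return (n + 1) * kappa * np.exp(-rho * t)

        def forward(t, p):
            dp = np.zeros_like(p)
            for n in range(N + 1):
                if n < N:
                    dp[n] -= lam(n, t) * p[n]          # outflow to the next depth state
                if n > 0:
                    dp[n] += lam(n - 1, t) * p[n - 1]  # inflow from the previous state
            return dp                                  # state N kept absorbing by truncation

        p0 = np.zeros(N + 1); p0[0] = 1.0              # start in the shallowest state
        sol = solve_ivp(forward, (0.0, 10.0), p0, t_eval=[2.0, 5.0, 10.0])
        mean_depth_state = np.arange(N + 1) @ sol.y    # stochastic mean vs. time
        print(mean_depth_state)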

  6. Nonlinearly perturbed semi-Markov processes

    CERN Document Server

    Silvestrov, Dmitrii

    2017-01-01

    The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...

  7. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    Science.gov (United States)

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.

  8. The continuous 1.5D terrain guarding problem: Discretization, optimal solutions, and PTAS

    Directory of Open Access Journals (Sweden)

    Stephan Friedrichs

    2016-05-01

    Full Text Available In the NP-hard continuous 1.5D Terrain Guarding Problem (TGP) we are given an $x$-monotone chain of line segments in $R^2$ (the terrain $T$), and ask for the minimum number of guards (located anywhere on $T$) required to guard all of $T$. We construct guard candidate and witness sets $G, W \subset T$ of polynomial size such that any feasible (optimal) guard cover $G^* \subseteq G$ for $W$ is also feasible (optimal) for the continuous TGP. This discretization allows us to: (1) settle NP-completeness for the continuous TGP; (2) provide a Polynomial Time Approximation Scheme (PTAS) for the continuous TGP using the PTAS for the discrete TGP by Gibson et al.; (3) formulate the continuous TGP as an Integer Linear Program (IP). Furthermore, we propose several filtering techniques reducing the size of our discretization, allowing us to devise an efficient IP-based algorithm that reliably provides optimal guard placements for terrains with up to $10^6$ vertices within minutes on a standard desktop computer.

  9. Observer-based adaptive control of chaos in nonlinear discrete-time systems using time-delayed state feedback

    International Nuclear Information System (INIS)

    Goharrizi, Amin Yazdanpanah; Khaki-Sedigh, Ali; Sepehri, Nariman

    2009-01-01

    A new approach to adaptive control of chaos in a class of nonlinear discrete-time-varying systems, using a delayed state feedback scheme, is presented. It is discussed that such systems can show chaotic behavior as their parameters change. A strategy is employed for on-line calculation of the Lyapunov exponents that will be used within an adaptive scheme that decides on the control effort to suppress the chaotic behavior once detected. The scheme is further augmented with a nonlinear observer for estimation of the states that are required by the controller but are hard to measure. Simulation results for chaotic control problem of Jin map are provided to show the effectiveness of the proposed scheme.

  10. A collection of integrable systems of the Toda type in continuous and discrete time, with 2x2 Lax representations

    OpenAIRE

    Suris, Yuri B.

    1997-01-01

    A fairly complete list of Toda-like integrable lattice systems, both in the continuous and discrete time, is given. For each system the Newtonian, Lagrangian and Hamiltonian formulations are presented, as well as the 2x2 Lax representation and r-matrix structure. The material is given in the "no comment" style, in particular, all proofs are omitted.

  11. Simulation from endpoint-conditioned, continuous-time Markov chains on a finite state space, with applications to molecular evolution

    DEFF Research Database (Denmark)

    Hobolth, Asger; Stone, Eric

    2009-01-01

    computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete....... In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler....
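
    Among the samplers compared in this line of work, rejection sampling is the simplest to state: simulate the chain forward from the initial state and keep only paths that land in the required end state at time T. The sketch below implements that idea for a made-up 3-state rate matrix; it illustrates the general technique, not code from the paper.

        # Minimal rejection sampler for a CTMC path conditioned on starting in
        # state a at time 0 and ending in state b at time T (toy rate matrix Q).
        import numpy as np

        rng = np.random.default_rng(1)
        Q = np.array([[-1.0,  0.7,  0.3],
                      [ 0.4, -0.9,  0.5],
                      [ 0.2,  0.8, -1.0]])

        def sample_path(a, T):
            """Unconditional Gillespie-style simulation on [0, T]."""
            t, state, path = 0.0, a, [(0.0, a)]
            while True:
                rate = -Q[state, state]
                t += rng.exponential(1.0 / rate)
                if t >= T:
                    return path
                probs = Q[state].clip(min=0.0) / rate     # jump distribution
                state = rng.choice(len(Q), p=probs)
                path.append((t, state))

        def endpoint_conditioned_path(a, b, T, max_tries=100000):
            """Keep simulating forward until the path happens to end in b."""
            for _ in range(max_tries):
                path = sample_path(a, T)
                if path[-1][1] == b:
                    return path
            raise RuntimeError("acceptance probability too low for rejection sampling")

        print(endpoint_conditioned_path(a=0, b=2, T=1.0))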

  12. On the application of Discrete Time Optimal Control Concepts to ...

    African Journals Online (AJOL)

    On the application of Discrete Time Optimal Control Concepts to Economic Problems. ... Journal of the Nigerian Association of Mathematical Physics ... Abstract. An extension of the use of the maximum principle to solve Discrete-time Optimal Control Problems (DTOCP), in which the state equations are in the form of general ...

  13. Interaction of discrete and continuous boundary layer modes to cause transition

    International Nuclear Information System (INIS)

    Durbin, Paul A.; Zaki, Tamer A.; Liu Yang

    2009-01-01

    The interaction of discrete and continuous Orr-Sommerfeld modes in a boundary layer is studied by computer simulation. The discrete mode is an unstable Tollmien-Schlichting wave. The continuous modes generate jet-like disturbances inside the boundary layer. Either mode alone does not cause transition to turbulence; however, the interaction between them does. The continuous mode jets distort the discrete modes, producing Λ shaped vortices. Breakdown to turbulence is subsequent. The lateral spacing of the Λ's is sometimes the same as the wavelength of the continuous mode, sometimes it differs, depending on the ratio of wavelength to boundary layer thickness.

  14. Belief Bisimulation for Hidden Markov Models Logical Characterisation and Decision Algorithm

    DEFF Research Database (Denmark)

    Jansen, David N.; Nielson, Flemming; Zhang, Lijun

    2012-01-01

    This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual...... partition refinement algorithm is enough. Belief bisimilarity, being a relation on the continuous space of belief states, cannot be described directly. Instead, we show how to generate a linear equation system in time cubic in the number of states....

  15. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  16. Continuous limit of discrete systems with long-range interaction

    International Nuclear Information System (INIS)

    Tarasov, Vasily E

    2006-01-01

    Discrete systems with long-range interactions are considered. Continuous medium models are defined as the continuous limit of discrete chain systems. Long-range interactions of chain elements that give the fractional equations for the medium model are discussed. The chain equations of motion with long-range interaction are mapped into the continuum equation with the Riesz fractional derivative. We formulate a consistent definition of the continuous limit for systems with long-range interactions. In this paper, we consider a wide class of long-range interactions that give fractional medium equations in the continuous limit. The power-law interaction is a special case of this class

  17. Quantum Markov Chain Mixing and Dissipative Engineering

    DEFF Research Database (Denmark)

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state...... of the system at the present point in time, but not on the history of events. Very many important processes in nature are of this type, therefore a good understanding of their behaviour has turned out to be very fruitful for science. Markov chains always have a non-empty set of limiting distributions...... (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical...

  18. CMOS continuous-time adaptive equalizers for high-speed serial links

    CERN Document Server

    Gimeno Gasca, Cecilia; Aldea Chagoyen, Concepción

    2015-01-01

    This book introduces readers to the design of adaptive equalization solutions integrated in standard CMOS technology for high-speed serial links. Since continuous-time equalizers offer various advantages as an alternative to discrete-time equalizers at multi-gigabit rates, this book provides a detailed description of continuous-time adaptive equalizers design - both at transistor and system levels-, their main characteristics and performances. The authors begin with a complete review and analysis of the state of the art of equalizers for wireline applications, describing why they are necessary, their types, and their main applications. Next, theoretical fundamentals of continuous-time adaptive equalizers are explored. Then, new structures are proposed to implement the different building blocks of the adaptive equalizer: line equalizer, loop-filters, power comparator, etc.  The authors demonstrate the design of a complete low-power, low-voltage, high-speed, continuous-time adaptive equalizer. Finally, a cost-...

  19. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    Energy Technology Data Exchange (ETDEWEB)

    Balan, I.

    2005-05-01

    This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for the Continuous Time, Discrete Space Markov chains (CTMC), as an alternative to the other computational expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled Fault-Tree - Markov chain technique, i.e. the abstraction of the physical system is performed using as the high level interface the Fault-Tree and afterwards this one is automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses using ASAP applied to CTMC equations are performed to study the influence of uncertainties in input data to the reliability measures and to get the confidence in the final reliability results. The methods to generate the Markov chain and the ASAP for the Markov chain equations have been implemented into the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out by using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using the Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using Markov chain have been concentrated around the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have been
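
    The reliability calculation that such a tool automates ultimately reduces to propagating the CTMC state probabilities, dp/dt = p Q, over the mission time. The sketch below does this with a matrix exponential for a toy three-state component; the rates, states and mission time are hypothetical and unrelated to the IFMIF data.

        # State probabilities of a small CTMC (working / degraded / failed)
        # propagated over a mission time via the matrix exponential of Q*T.
        import numpy as np
        from scipy.linalg import expm

        lam1, lam2, mu = 1e-3, 5e-3, 2e-2       # assumed failure/repair rates [1/h]
        Q = np.array([[-lam1,         lam1,  0.0 ],
                      [  mu,  -(mu + lam2),  lam2],
                      [ 0.0,           0.0,  0.0 ]])   # "failed" is absorbing

        p0 = np.array([1.0, 0.0, 0.0])          # start in the working state
        T = 1000.0                              # mission time in hours
        pT = p0 @ expm(Q * T)
        print("reliability (not failed) at T:", pT[0] + pT[1])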

  20. Principles of discrete time mechanics

    CERN Document Server

    Jaroszkiewicz, George

    2014-01-01

    Could time be discrete on some unimaginably small scale? Exploring the idea in depth, this unique introduction to discrete time mechanics systematically builds the theory up from scratch, beginning with the historical, physical and mathematical background to the chronon hypothesis. Covering classical and quantum discrete time mechanics, this book presents all the tools needed to formulate and develop applications of discrete time mechanics in a number of areas, including spreadsheet mechanics, classical and quantum register mechanics, and classical and quantum mechanics and field theories. A consistent emphasis on contextuality and the observer-system relationship is maintained throughout.

  1. Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Linshan; Zhang Zhe; Wang Yangfan

    2008-01-01

    Some criteria for the global stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters are presented. The jumping parameters considered here are generated by a continuous-time, discrete-state homogeneous Markov process with a discrete and finite state space. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish some easy-to-test criteria of global exponential stability in the mean square for the stochastic neural networks. The criteria are computationally efficient, since they are in the forms of some linear matrix inequalities

  2. Stochastic analysis in discrete and continuous settings with normal martingales

    CERN Document Server

    Privault, Nicolas

    2009-01-01

    This volume gives a unified presentation of stochastic analysis for continuous and discontinuous stochastic processes, in both discrete and continuous time. It is mostly self-contained and accessible to graduate students and researchers having already received a basic training in probability. The simultaneous treatment of continuous and jump processes is done in the framework of normal martingales; that includes the Brownian motion and compensated Poisson processes as specific cases. In particular, the basic tools of stochastic analysis (chaos representation, gradient, divergence, integration by parts) are presented in this general setting. Applications are given to functional and deviation inequalities and mathematical finance.

  3. Markov-modulated infinite-server queues driven by a common background process

    OpenAIRE

    Mandjes , Michel; De Turck , Koen

    2016-01-01

    International audience; This paper studies a system with multiple infinite-server queues which are modulated by a common background process. If this background process, being modeled as a finite-state continuous-time Markov chain, is in state j, then the arrival rate into the i-th queue is λ_{i,j}, whereas the service times of customers present in this queue are exponentially distributed with mean µ_{i,j}^{-1}; at each of the individual queues all customers present are served in parallel (thus refl...
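
    A quick way to get intuition for such a system is direct simulation. The sketch below runs one infinite-server queue whose arrival and service rates switch with a two-state background CTMC; all rates are invented for illustration and the code is not tied to the analysis in the paper.

        # Discrete-event simulation of one infinite-server queue modulated by a
        # 2-state background CTMC: lam[j] and mu[j] depend on the background state j.
        import numpy as np

        rng = np.random.default_rng(0)
        alpha = np.array([[-0.5,  0.5],          # generator of the background chain
                          [ 1.0, -1.0]])
        lam = np.array([3.0, 8.0])               # arrival rates per background state
        mu  = np.array([1.0, 2.0])               # service rates per background state

        t, T_end, j, n = 0.0, 1000.0, 0, 0       # time, horizon, background state, queue
        area = 0.0                               # integral of n(t) dt, for the time average
        while True:
            rates = np.array([-alpha[j, j], lam[j], n * mu[j]])  # switch, arrival, departure
            dt = rng.exponential(1.0 / rates.sum())
            if t + dt >= T_end:
                area += n * (T_end - t)
                break
            area += n * dt
            t += dt
            event = rng.choice(3, p=rates / rates.sum())
            if event == 0:
                j = 1 - j                        # background state switches
            elif event == 1:
                n += 1                           # arrival
            else:
                n -= 1                           # one of the n busy servers finishes
        print("time-average number in system:", area / T_end)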

  4. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods.

    Science.gov (United States)

    Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A

    2014-03-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.

  5. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods

    Science.gov (United States)

    Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.

    2014-01-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

  6. A Discrete-Time Geo/G/1 Retrial Queue with Two Different Types of Vacations

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    2015-01-01

    Full Text Available We analyze a discrete-time Geo/G/1 retrial queue with two different types of vacations and general retrial times. Two different types of vacation policies are investigated in this model, one of which is nonexhaustive urgent vacation during serving and the other is normal exhaustive vacation. For this model, we give the steady-state analysis for the considered queueing system. Firstly, we obtain the generating functions of the number of customers in our model. Then, we obtain the closed-form expressions of some performance measures and also give a stochastic decomposition result for the system size. Moreover, the relationship between this discrete-time model and the corresponding continuous-time model is also investigated. Finally, some numerical results are provided to illustrate the effect of nonexhaustive urgent vacation on some performance characteristics of the system.

  7. Reinforcement learning in continuous state and action spaces

    NARCIS (Netherlands)

    H. P. van Hasselt (Hado); M.A. Wiering; M. van Otterlo

    2012-01-01

    Many traditional reinforcement-learning algorithms have been designed for problems with small finite state and action spaces. Learning in such discrete problems can be difficult, due to noise and delayed reinforcements. However, many real-world problems have continuous state or action

  8. Planning "discrete" movements using a continuous system: insights from a dynamic field theory of movement preparation.

    Science.gov (United States)

    Schutte, Anne R; Spencer, John P

    2007-04-01

    The timed-initiation paradigm developed by Ghez and colleagues (1997) has revealed two modes of motor planning: continuous and discrete. Continuous responding occurs when targets are separated by less than 60 degrees of spatial angle, and discrete responding occurs when targets are separated by greater than 60 degrees . Although these two modes are thought to reflect the operation of separable strategic planning systems, a new theory of movement preparation, the Dynamic Field Theory, suggests that two modes emerge flexibly from the same system. Experiment 1 replicated continuous and discrete performance using a task modified to allow for a critical test of the single system view. In Experiment 2, participants were allowed to correct their movements following movement initiation (the standard task does not allow corrections). Results showed continuous planning performance at large and small target separations. These results are consistent with the proposal that the two modes reflect the time-dependent "preshaping" of a single planning system.

  9. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
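
    The decoding step described above can be reproduced with a few lines of code. The sketch below runs Viterbi decoding on a four-state left-right HMM with univariate Gaussian emissions; the transition matrix, emission parameters and the short observation sequence are invented placeholders rather than the trained values of the paper.

        # Hand-rolled Viterbi decoding for a 4-state left-right HMM with
        # univariate Gaussian emissions (toy gait-phase example).
        import numpy as np

        A = np.array([[0.8, 0.2, 0.0, 0.0],     # left-right: stay or advance
                      [0.0, 0.8, 0.2, 0.0],
                      [0.0, 0.0, 0.8, 0.2],
                      [0.2, 0.0, 0.0, 0.8]])    # wrap around to start the next stride
        means  = np.array([0.0, 2.5, 0.0, -2.5])   # assumed gyro output per gait phase
        sigmas = np.array([0.3, 0.8, 0.3, 0.8])
        pi = np.array([1.0, 0.0, 0.0, 0.0])

        def log_gauss(x, m, s):
            return -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((x - m) / s) ** 2

        def viterbi(obs):
            n_states, T = len(pi), len(obs)
            delta = np.full((T, n_states), -np.inf)     # best log-prob ending in each state
            psi = np.zeros((T, n_states), dtype=int)    # backpointers
            with np.errstate(divide="ignore"):
                logA, logpi = np.log(A), np.log(pi)
            delta[0] = logpi + log_gauss(obs[0], means, sigmas)
            for t in range(1, T):
                scores = delta[t - 1][:, None] + logA   # (from_state, to_state)
                psi[t] = scores.argmax(axis=0)
                delta[t] = scores.max(axis=0) + log_gauss(obs[t], means, sigmas)
            states = [int(delta[-1].argmax())]
            for t in range(T - 1, 0, -1):               # backtrack the best path
                states.append(int(psi[t, states[-1]]))
            return states[::-1]

        obs = np.array([0.1, 0.0, 2.4, 2.7, 0.2, -2.3, -2.6, 0.1])
        print(viterbi(obs))                             # decoded gait-phase sequence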

  10. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  11. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    Science.gov (United States)

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.

  12. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
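
    For the lattice-gas limit mentioned above, the plain TASEP is easy to simulate directly. The sketch below performs random sequential updates of a TASEP on a periodic ring; it illustrates the baseline process only (the lifted variant proposed in the paper is not implemented here) and all sizes and step counts are arbitrary.

        # Random sequential updates of a TASEP on a periodic ring: each attempted
        # move picks a site and hops its particle one step to the right if empty.
        import numpy as np

        rng = np.random.default_rng(0)
        L, N, steps = 20, 10, 5000
        occ = np.zeros(L, dtype=bool)
        occ[rng.choice(L, size=N, replace=False)] = True   # random initial configuration

        accepted = 0
        for _ in range(steps):
            i = rng.integers(L)                  # pick a site uniformly at random
            j = (i + 1) % L                      # periodic boundary conditions
            if occ[i] and not occ[j]:            # hop right only if the target is empty
                occ[i], occ[j] = False, True
                accepted += 1
        print("accepted hops per attempted move:", accepted / steps)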

  13. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  14. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum mixing time dynamics. This could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.

  15. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
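
    As context for the family of schemes discussed above, the sketch below implements a generic symmetric splitting of Langevin dynamics (a "BAOAB"-style scheme related to velocity Verlet) for a one-dimensional harmonic oscillator; it is not the paper's specific time-step-rescaled integrator, and all parameters are arbitrary.

        # Generic BAOAB-style splitting for Langevin dynamics of a 1D harmonic
        # oscillator; checks that <x^2> approaches the equipartition value kT/k.
        import numpy as np

        rng = np.random.default_rng(0)
        m, k, gamma, kT, dt = 1.0, 1.0, 1.0, 1.0, 0.05
        force = lambda x: -k * x

        def baoab_step(x, v):
            v += 0.5 * dt * force(x) / m                    # B: half kick
            x += 0.5 * dt * v                               # A: half drift
            c = np.exp(-gamma * dt)                         # O: exact Ornstein-Uhlenbeck step
            v = c * v + np.sqrt((1 - c**2) * kT / m) * rng.standard_normal()
            x += 0.5 * dt * v                               # A: half drift
            v += 0.5 * dt * force(x) / m                    # B: half kick
            return x, v

        x, v, samples = 1.0, 0.0, []
        for step in range(100000):
            x, v = baoab_step(x, v)
            samples.append(x)
        print("<x^2> ~", np.var(samples), " (target kT/k =", kT / k, ")")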

  16. Space-Time Discrete KPZ Equation

    Science.gov (United States)

    Cannizzaro, G.; Matetski, K.

    2018-03-01

    We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.

  17. The Green-Kubo formula for general Markov processes with a continuous time parameter

    International Nuclear Information System (INIS)

    Yang Fengxia; Liu Yong; Chen Yong

    2010-01-01

    For general Markov processes, the Green-Kubo formula is shown to be valid under a mild condition. A class of stochastic evolution equations on a separable Hilbert space and three typical infinite systems of locally interacting diffusions on Z^d (irreversible in most cases) are shown to satisfy the Green-Kubo formula, and the Einstein relations for these stochastic evolution equations are shown explicitly as a corollary.
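
    For orientation, the classical single-observable form of the relation (which the general Markov-process statement above extends) expresses a transport coefficient as the time integral of an equilibrium autocorrelation function, e.g. for the diffusion coefficient of a tagged particle:

        D \;=\; \int_0^{\infty} \langle v(0)\, v(t) \rangle \, \mathrm{d}t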

  18. An introduction to stochastic processes with applications to biology

    CERN Document Server

    Allen, Linda J S

    2010-01-01

    An Introduction to Stochastic Processes with Applications to Biology, Second Edition presents the basic theory of stochastic processes necessary in understanding and applying stochastic methods to biological problems in areas such as population growth and extinction, drug kinetics, two-species competition and predation, the spread of epidemics, and the genetics of inbreeding. Because of their rich structure, the text focuses on discrete and continuous time Markov chains and continuous time and state Markov processes. New to the Second Edition: a new chapter on stochastic differential equations th

  19. Adaptive Control and Function Projective Synchronization in 2D Discrete-Time Chaotic Systems

    International Nuclear Information System (INIS)

    Li Yin; Chen Yong; Li Biao

    2009-01-01

    This study addresses the adaptive control and function projective synchronization problems between 2D Rulkov discrete-time system and Network discrete-time system. Based on backstepping design with three controllers, a systematic, concrete and automatic scheme is developed to investigate the function projective synchronization of discrete-time chaotic systems. In addition, the adaptive control function is applied to achieve the state synchronization of two discrete-time systems. Numerical results demonstrate the effectiveness of the proposed control scheme.

  20. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    Science.gov (United States)

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can provide an appropriate basis for health policy on disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The percentage of correct predictions for the first and second clusters was (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the results of the three models indicated that, given the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.

  1. Classification of customer lifetime value models using Markov chain

    Science.gov (United States)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm's potential future reward from a customer can be quantified by customer lifetime value (CLV). There are several mathematical methods to calculate it; one of them uses a Markov chain stochastic model. Here, a customer is assumed to move through a set of states, with transitions between the states following the Markov property. Given the states of a customer and the relationships between those states, Markov models can be built to describe the customer's behaviour. In these Markov models, CLV is defined as a vector containing the CLV of a customer in each initial state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states, where each development is motivated by weaknesses of the previous model. The later models are expected to describe more realistically how the customers of a firm actually behave.
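
    The basic calculation behind such models is compact enough to sketch: with a transition matrix P over customer states, a per-period reward vector r and a discount factor d, the infinite-horizon CLV vector is (I - dP)^{-1} r. The two-state (active/lapsed) numbers below are invented for illustration only.

        # Markov-chain CLV: CLV = sum_t d^t P^t r = (I - d P)^{-1} r for a toy
        # two-state (active / lapsed) customer model with made-up numbers.
        import numpy as np

        P = np.array([[0.7, 0.3],    # active customer stays active w.p. 0.7
                      [0.2, 0.8]])   # lapsed customer reactivates w.p. 0.2
        r = np.array([100.0, 0.0])   # expected margin earned per period in each state
        d = 0.9                      # one-period discount factor

        clv = np.linalg.solve(np.eye(2) - d * P, r)
        print("CLV if currently active:", clv[0])
        print("CLV if currently lapsed:", clv[1])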

  2. A Markov Chain Estimator of Multivariate Volatility from High Frequency Data

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume; Lunde, Asger

    We introduce a multivariate estimator of financial volatility that is based on the theory of Markov chains. The Markov chain framework takes advantage of the discreteness of high-frequency returns. We study the finite sample properties of the estimation in a simulation study and apply...

  3. Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms

    Science.gov (United States)

    2017-09-01

    Naval Postgraduate School thesis (Monterey, California): Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms. ...simple search and rescue acts to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and

  4. Comparing the Discrete and Continuous Logistic Models

    Science.gov (United States)

    Gordon, Sheldon P.

    2008-01-01

    The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
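
    A minimal numerical comparison in the spirit of the article (using arbitrary parameter values, not the article's spreadsheet) can be written as follows; the discrete model is iterated directly, while the continuous model uses the closed-form logistic solution.

        # Compare the discrete logistic difference equation with the closed-form
        # solution of the continuous logistic differential equation.
        import numpy as np

        r, K, p0, T = 0.4, 100.0, 5.0, 20        # growth rate, capacity, initial size, horizon

        # Discrete model: p_{n+1} = p_n + r * p_n * (1 - p_n / K)
        p, discrete = p0, [p0]
        for n in range(T):
            p = p + r * p * (1 - p / K)
            discrete.append(p)

        # Continuous model: p(t) = K / (1 + (K/p0 - 1) * exp(-r t))
        t = np.arange(T + 1)
        continuous = K / (1 + (K / p0 - 1) * np.exp(-r * t))

        for n in (0, 5, 10, 20):
            print(f"t={n:2d}  discrete={discrete[n]:7.2f}  continuous={continuous[n]:7.2f}")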

  5. Discrete-time Calogero-Moser system and Lagrangian 1-form structure

    International Nuclear Information System (INIS)

    Yoo-Kong, Sikarin; Lobb, Sarah; Nijhoff, Frank

    2011-01-01

    We study the Lagrange formalism of the (rational) Calogero-Moser (CM) system, both in discrete time and continuous time, as a first example of a Lagrangian 1-form structure in the sense of the recent paper (Lobb and Nijhoff 2009 J. Phys. A: Math. Theor.42 454013). The discrete-time model of the CM system was established some time ago arising as a pole reduction of a semi-discrete version of the Kadomtsev-Petviashvili (KP) equation, and was shown to lead to an exactly integrable correspondence (multivalued map). In this paper, we present the full KP solution based on the commutativity of the discrete-time flows in the two discrete KP variables. The compatibility of the corresponding Lax matrices is shown to lead directly to the relevant closure relation on the level of the Lagrangians. Performing successive continuum limits on both the level of the KP equation and the level of the CM system, we establish the proper Lagrangian 1-form structure for the continuum case of the CM model. We use the example of the three-particle case to elucidate the implementation of the novel least-action principle, which was presented in Lobb and Nijhoff (2009), for the simpler case of Lagrangian 1-forms. (paper)

  6. Active Affordance Learning in Continuous State and Action Spaces

    NARCIS (Netherlands)

    Wang, C.; Hindriks, K.V.; Babuska, R.

    2014-01-01

    Learning object affordances and manipulation skills is essential for developing cognitive service robots. We propose an active affordance learning approach in continuous state and action spaces without manual discretization of states or exploratory motor primitives. During exploration in the action

  7. Minimum Energy Control of 2D Positive Continuous-Discrete Linear Systems

    Directory of Open Access Journals (Sweden)

    Kaczorek Tadeusz

    2014-09-01

    Full Text Available The minimum energy control problem for 2D positive continuous-discrete linear systems is formulated and solved. Necessary and sufficient conditions for the reachability at the point of the systems are given. Sufficient conditions for the existence of a solution to the problem are established. It is shown that if the system is reachable then there exists an optimal input that steers the state from zero boundary conditions to a given final state and minimizes the performance index for only one step (q = 1). A procedure for solving the problem is proposed and illustrated by a numerical example.

  8. Continuous time modeling of panel data by means of SEM

    NARCIS (Netherlands)

    Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.

    2010-01-01

    After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model

  9. Robust Moving Horizon H∞ Control of Discrete Time-Delayed Systems with Interval Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    F. Yıldız Tascikaraoglu

    2014-01-01

    Full Text Available In this study, the design of a delay-dependent type moving horizon state-feedback control (MHHC) is considered for a class of linear discrete-time systems subject to time-varying state delays, norm-bounded uncertainties, and disturbances with bounded energies. The closed-loop robust stability and robust performance problems are considered to overcome the instability and poor disturbance rejection performance due to the existence of parametric uncertainties and time-delay appearing in the system dynamics. Utilizing a discrete-time Lyapunov-Krasovskii functional, some delay-dependent linear matrix inequality (LMI) based conditions are provided. It is shown that if one can find a feasible solution set for these LMI conditions iteratively at each step of run-time, then we can construct a control law which guarantees the closed-loop asymptotic stability, maximum disturbance rejection performance, and closed-loop dissipativity in view of the actuator limitations. Two numerical examples with simulations on nominal and uncertain discrete-time, time-delayed systems are presented at the end, in order to demonstrate the efficiency of the proposed method.

  10. Model documentation for relations between continuous real-time and discrete water-quality constituents in Cheney Reservoir near Cheney, Kansas, 2001--2009

    Science.gov (United States)

    Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.

    2013-01-01

    Cheney Reservoir, located in south-central Kansas, is one of the primary water supplies for the city of Wichita, Kansas. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station in Cheney Reservoir since 2001; continuously measured physicochemical properties include specific conductance, pH, water temperature, dissolved oxygen, turbidity, fluorescence (wavelength range 650 to 700 nanometers; estimate of total chlorophyll), and reservoir elevation. Discrete water-quality samples were collected during 2001 through 2009 and analyzed for sediment, nutrients, taste-and-odor compounds, cyanotoxins, phytoplankton community composition, actinomycetes bacteria, and other water-quality measures. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physicochemical properties to compute concentrations of constituents that are not easily measured in real time. The water-quality information in this report is important to the city of Wichita because it allows quantification and characterization of potential constituents of concern in Cheney Reservoir. This report updates linear regression models published in 2006 that were based on data collected during 2001 through 2003. The update uses discrete and continuous data collected during May 2001 through December 2009. Updated models to compute dissolved solids, sodium, chloride, and suspended solids were similar to previously published models. However, several other updated models changed substantially from previously published models. In addition to updating relations that were previously developed, models also were developed for four new constituents, including magnesium, dissolved phosphorus, actinomycetes bacteria, and the cyanotoxin microcystin. In addition, a conversion factor of 0.74 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI

  11. Prediction of inspection intervals using the Markov analysis; Prediccion de intervalos de inspeccion utilizando analisis de Markov

    Energy Technology Data Exchange (ETDEWEB)

    Rea, R.; Arellano, J. [IIE, Calle Reforma 113, Col. Palmira, Cuernavaca, Morelos (Mexico)]. e-mail: rrea@iie.org.mx

    2005-07-01

    To cope with the unmanageable number of Markov states in systems that have a great number of components, a modification of the Markov method, termed truncated Markov analysis, is proposed, in which dependence among component faults is assumed to be negligible. The number of states then grows linearly (not exponentially) with the number of components of the system, which greatly simplifies the analysis. As an example, the proposed method was applied to the HPCS system of the CLV considering its 18 main components. Each component is assumed to take three states: operational, with hidden fault, and with revealed fault. Additionally, the configuration of the HPCS system is taken into account by means of a dependability block diagram to estimate its unavailability at the system level. The results of the model proposed here are compared with other methods and approximations used to simplify the Markov analysis. A modification of the inspection intervals of three components of the HPCS system is also proposed, based on the developed Markov model and on the maximum time allowed by the ASME code (NUREG-1482) for inspecting components of standby systems in nuclear power plants. (Author)
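
    A hedged sketch of the kind of per-component calculation involved: a single component with operational, hidden-fault and revealed-fault states modelled as a continuous-time Markov chain, with unavailability read off from the transient state probabilities. The rates and the mission time below are illustrative assumptions, not values from the HPCS study.

        # Hypothetical 3-state component model: 0 = operational, 1 = hidden fault,
        # 2 = revealed fault.  Unavailability = probability of being in state 1 or 2.
        import numpy as np
        from scipy.linalg import expm

        lam_hidden = 1e-4    # per hour, operational -> hidden fault (illustrative)
        lam_reveal = 5e-5    # per hour, operational -> revealed fault (illustrative)
        mu_repair  = 1e-2    # per hour, revealed fault -> operational (illustrative)

        Q = np.array([
            [-(lam_hidden + lam_reveal), lam_hidden, lam_reveal],
            [0.0,                        0.0,        0.0       ],   # hidden fault persists until inspection
            [mu_repair,                  0.0,        -mu_repair],
        ])

        p0 = np.array([1.0, 0.0, 0.0])          # component starts operational
        t = 720.0                                # evaluate after one month of hours
        pt = p0 @ expm(Q * t)                    # transient state probabilities
        print("unavailability at t =", t, "h:", pt[1] + pt[2])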

  12. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    Science.gov (United States)

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
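
    A toy sketch of the core idea only (conditions whose levels persist over time, updated discretely at event times), written in plain Python; the event names, times, and valuations are hypothetical and this is not Caro's published specification.

        # Toy illustration of discretely integrating a condition with events:
        # levels persist between events, and each event updates conditions,
        # valuations, or the timing of future events.
        import heapq

        condition = {"severity": 0.0}                 # condition level, persists over time
        valuation = {"cost": 0.0, "qaly": 0.0}        # accumulated valuations

        events = []                                   # (time, name) priority queue
        heapq.heappush(events, (1.0, "progression"))
        heapq.heappush(events, (3.5, "treatment"))
        heapq.heappush(events, (10.0, "end"))

        now = 0.0
        while events:
            t, name = heapq.heappop(events)
            # accrue quality-adjusted time for the elapsed interval at the current level
            valuation["qaly"] += (t - now) * (1.0 - 0.1 * condition["severity"])
            now = t
            if name == "progression":
                condition["severity"] += 1.0                        # event modifies the condition
                heapq.heappush(events, (now + 4.0, "progression"))  # and reschedules itself
            elif name == "treatment":
                valuation["cost"] += 500.0                          # event carries a cost
                condition["severity"] = max(0.0, condition["severity"] - 1.0)
            elif name == "end":
                break
        print(valuation, condition)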

  13. Clustering Multivariate Time Series Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Shima Ghassempour

    2014-03-01

    Full Text Available In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated but realistic data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and are therefore accessible to a wide range of researchers.
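
    A minimal sketch of the HMM-based clustering idea under simplifying assumptions (continuous-valued series only, Gaussian emissions, a symmetrised log-likelihood distance), using the hmmlearn and scipy packages; the data are synthetic and the distance and cluster count are illustrative choices, not those of the paper.

        # Hypothetical sketch: map each series to an HMM, define a symmetrised
        # log-likelihood distance between the fitted HMMs, then cluster the
        # resulting distance matrix hierarchically.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(0)
        series = [rng.normal(0, 1, size=(100, 2)) for _ in range(3)] + \
                 [rng.normal(3, 1, size=(100, 2)) for _ in range(3)]   # two synthetic groups

        models = [GaussianHMM(n_components=2, n_iter=50).fit(s) for s in series]

        n = len(series)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                # distance: how much worse each model explains the other's data
                d = (models[i].score(series[i]) - models[j].score(series[i])
                     + models[j].score(series[j]) - models[i].score(series[j])) / len(series[i])
                D[i, j] = D[j, i] = max(d, 0.0)

        labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
        print("cluster labels:", labels)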

  14. Markov Chain Analysis of Musical Dice Games

    Science.gov (United States)

    Volchenkov, D.; Dawin, J. R.

    2012-07-01

    A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on these classical compositions. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.
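
    As a sketch of the kind of quantity involved, the entropy rate of a first-order Markov chain can be computed from its transition matrix and stationary distribution; the matrix below is illustrative, not one of the 804 encoded pieces.

        # Hypothetical sketch: entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij of a
        # note-transition Markov chain, with pi the stationary distribution of P.
        import numpy as np

        P = np.array([[0.1, 0.6, 0.3],        # illustrative note-to-note transition matrix
                      [0.4, 0.2, 0.4],
                      [0.3, 0.5, 0.2]])

        # stationary distribution: left eigenvector of P for eigenvalue 1
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi = pi / pi.sum()

        with np.errstate(divide="ignore", invalid="ignore"):
            logs = np.where(P > 0, np.log2(P), 0.0)
        entropy_rate = -np.sum(pi[:, None] * P * logs)
        print("entropy rate (bits per note):", entropy_rate)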

  15. Two Monthly Continuous Dynamic Model Based on Nash Bargaining Theory for Conflict Resolution in Reservoir System.

    Science.gov (United States)

    Homayounfar, Mehran; Zomorodian, Mehdi; Martinez, Christopher J; Lai, Sai Hin

    2015-01-01

    So far many optimization models based on Nash Bargaining Theory associated with reservoir operation have been developed. Most of them have aimed to provide practical and efficient solutions for water allocation in order to alleviate conflicts among water users. These models can be discussed from two viewpoints: (i) having a discrete nature; and (ii) working on an annual basis. Although discrete dynamic game models provide appropriate reservoir operator policies, their discretization of variables increases the run time and causes dimensionality problems. In this study, two monthly based non-discrete optimization models based on the Nash Bargaining Solution are developed for a reservoir system. In the first model, based on constrained state formulation, the first and second moments (mean and variance) of the state variable (water level in the reservoir) are calculated. Using moment equations as the constraint, the long-term utility of the reservoir manager and water users are optimized. The second model is a dynamic approach structured based on continuous state Markov decision models. The corresponding solution based on the collocation method is structured for a reservoir system. In this model, the reward function is defined based on the Nash Bargaining Solution. Indeed, it is used to yield equilibrium in every proper sub-game, thereby satisfying the Markov perfect equilibrium. Both approaches are applicable for water allocation in arid and semi-arid regions. A case study was carried out at the Zayandeh-Rud river basin located in central Iran to identify the effectiveness of the presented methods. The results are compared with the results of an annual form of dynamic game, a classical stochastic dynamic programming model (e.g. Bayesian Stochastic Dynamic Programming model, BSDP), and a discrete stochastic dynamic game model (PSDNG). By comparing the results of alternative methods, it is shown that both models are capable of tackling conflict issues in water allocation
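
    A minimal sketch of the Nash Bargaining Solution that both models build on, under simplifying assumptions (two players, a single allocation variable, hypothetical utilities and disagreement points), using scipy; none of the numbers relate to the Zayandeh-Rud case study.

        # Hypothetical two-player Nash bargaining sketch: allocate a volume of water
        # x in [0, W] by maximising the product of utility gains over the
        # disagreement point.
        import numpy as np
        from scipy.optimize import minimize_scalar

        W = 100.0                       # total allocatable volume (illustrative)
        d1, d2 = 1.0, 1.0               # disagreement utilities (illustrative)

        u1 = lambda x: np.sqrt(x)               # manager's utility of allocation x
        u2 = lambda x: np.log(1.0 + (W - x))    # user's utility of the remainder

        def neg_nash_product(x):
            return -max(u1(x) - d1, 0.0) * max(u2(x) - d2, 0.0)

        res = minimize_scalar(neg_nash_product, bounds=(0.0, W), method="bounded")
        print("Nash-bargaining allocation to player 1:", res.x)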

  16. Two Monthly Continuous Dynamic Model Based on Nash Bargaining Theory for Conflict Resolution in Reservoir System.

    Directory of Open Access Journals (Sweden)

    Mehran Homayounfar

    Full Text Available So far many optimization models based on Nash Bargaining Theory associated with reservoir operation have been developed. Most of them have aimed to provide practical and efficient solutions for water allocation in order to alleviate conflicts among water users. These models can be discussed from two viewpoints: (i) having a discrete nature; and (ii) working on an annual basis. Although discrete dynamic game models provide appropriate reservoir operator policies, their discretization of variables increases the run time and causes dimensionality problems. In this study, two monthly based non-discrete optimization models based on the Nash Bargaining Solution are developed for a reservoir system. In the first model, based on constrained state formulation, the first and second moments (mean and variance) of the state variable (water level in the reservoir) are calculated. Using moment equations as the constraint, the long-term utility of the reservoir manager and water users are optimized. The second model is a dynamic approach structured based on continuous state Markov decision models. The corresponding solution based on the collocation method is structured for a reservoir system. In this model, the reward function is defined based on the Nash Bargaining Solution. Indeed, it is used to yield equilibrium in every proper sub-game, thereby satisfying the Markov perfect equilibrium. Both approaches are applicable for water allocation in arid and semi-arid regions. A case study was carried out at the Zayandeh-Rud river basin located in central Iran to identify the effectiveness of the presented methods. The results are compared with the results of an annual form of dynamic game, a classical stochastic dynamic programming model (e.g. Bayesian Stochastic Dynamic Programming model, BSDP), and a discrete stochastic dynamic game model (PSDNG). By comparing the results of alternative methods, it is shown that both models are capable of tackling conflict issues in water allocation

  17. Discrete-space versus continuous-space lesion boundary and area definitions

    International Nuclear Information System (INIS)

    Sensakovic, William F.; Starkey, Adam; Roberts, Rachael Y.; Armato, Samuel G. III

    2008-01-01

    Measurement of the size of anatomic regions of interest in medical images is used to diagnose disease, track growth, and evaluate response to therapy. The discrete nature of medical images allows for both continuous and discrete definitions of region boundary. These definitions may, in turn, support several methods of area calculation that give substantially different quantitative values. This study investigated several boundary definitions (e.g., continuous polygon, internal discrete, and external discrete) and area calculation methods (pixel counting and Green's theorem). These methods were applied to three separate databases: a synthetic image database, the Lung Image Database Consortium database of lung nodules, and a database of adrenal gland outlines. Average percent differences in area on the order of 20% were found among the different methods applied to the clinical databases. These results support the idea that inconsistent application of region boundary definition and area calculation may substantially impact measurement accuracy.
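
    The two kinds of definition can differ noticeably even for simple regions. A hedged sketch comparing pixel counting on a binary mask with a Green's-theorem (shoelace) area over the polygon through the boundary pixel centres, using a made-up square region rather than any of the study's databases:

        # Hypothetical sketch: area of a region by (i) counting pixels in a binary
        # mask and (ii) applying Green's theorem (shoelace formula) to the polygon
        # through the centres of the boundary pixels.
        import numpy as np

        mask = np.zeros((10, 10), dtype=bool)
        mask[2:7, 3:8] = True                      # a 5x5 block of "lesion" pixels

        pixel_count_area = mask.sum()              # discrete definition: 25 pixels

        # continuous definition: polygon through the corner pixel centres of the block
        poly = np.array([[2, 3], [2, 7], [6, 7], [6, 3]], dtype=float)  # (row, col)

        def shoelace(p):
            x, y = p[:, 1], p[:, 0]
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        print("pixel counting area:", pixel_count_area)     # 25
        print("Green's theorem area:", shoelace(poly))       # 16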

  18. Computing Fault-Containment Times of Self-Stabilizing Algorithms Using Lumped Markov Chains

    Directory of Open Access Journals (Sweden)

    Volker Turau

    2018-05-01

    Full Text Available The analysis of self-stabilizing algorithms is often limited to the worst case stabilization time starting from an arbitrary state, i.e., a state resulting from a sequence of faults. Considering the fact that these algorithms are intended to provide fault tolerance in the long run, this is not the most relevant metric. A common situation is that a running system is in a legitimate state when hit by a single fault. This event has a much higher probability than multiple concurrent faults. Therefore, the worst case time to recover from a single fault is more relevant than the recovery time from a large number of faults. This paper presents techniques to derive upper bounds for the mean time to recover from a single fault for self-stabilizing algorithms based on Markov chains in combination with lumping. To illustrate the applicability of the techniques they are applied to a new self-stabilizing coloring algorithm.
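
    A minimal sketch of the underlying computation: after lumping, the mean time to return to the legitimate (absorbing) state from each faulty state of an absorbing Markov chain follows from the fundamental matrix N = (I - Q)^-1; the transition probabilities below are illustrative, not derived from any particular algorithm.

        # Hypothetical sketch: mean number of steps to reach the absorbing legitimate
        # state from each transient (faulty) state of a lumped Markov chain.
        import numpy as np

        # Transition probabilities among the transient (faulty) states only.
        Q = np.array([[0.5, 0.3],
                      [0.2, 0.4]])          # illustrative lumped chain

        N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
        mean_steps = N @ np.ones(2)         # expected steps to absorption from each state
        print("mean recovery time (steps):", mean_steps)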

  19. A Semi-Continuous State-Transition Probability HMM-Based Voice Activity Detector

    Directory of Open Access Journals (Sweden)

    H. Othman

    2007-02-01

    Full Text Available We introduce an efficient hidden Markov model-based voice activity detection (VAD algorithm with time-variant state-transition probabilities in the underlying Markov chain. The transition probabilities vary in an exponential charge/discharge scheme and are softly merged with state conditional likelihood into a final VAD decision. Working in the domain of ITU-T G.729 parameters, with no additional cost for feature extraction, the proposed algorithm significantly outperforms G.729 Annex B VAD while providing a balanced tradeoff between clipping and false detection errors. The performance compares very favorably with the adaptive multirate VAD, option 2 (AMR2.

  20. A Semi-Continuous State-Transition Probability HMM-Based Voice Activity Detector

    Directory of Open Access Journals (Sweden)

    Othman H

    2007-01-01

    Full Text Available We introduce an efficient hidden Markov model-based voice activity detection (VAD algorithm with time-variant state-transition probabilities in the underlying Markov chain. The transition probabilities vary in an exponential charge/discharge scheme and are softly merged with state conditional likelihood into a final VAD decision. Working in the domain of ITU-T G.729 parameters, with no additional cost for feature extraction, the proposed algorithm significantly outperforms G.729 Annex B VAD while providing a balanced tradeoff between clipping and false detection errors. The performance compares very favorably with the adaptive multirate VAD, option 2 (AMR2.

  1. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of the drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug are a broader area since drugs may affect...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...

  2. The application of Markov decision process in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the passage and customers coming and going, so traditional path planning algorithms are not ideal. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, built on the traditional Markov decision process. First, MDR is used to plan a global path, and the robot navigates along this path. When the sensor detects no obstruction in the state ahead, the immediate reward of that state is increased; when the sensor detects an obstacle ahead, a new global path avoiding the obstacle is planned with the current position as the new starting point, and the immediate reward of that state is reduced. This continues until the target is reached. After the robot has learned for a period of time, it avoids places where obstacles are often present when planning the path. Analysis of the simulation experiments shows that the algorithm achieves good results for global path planning in a dynamic environment.
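
    The MDR update rule is only described qualitatively above; as a hedged sketch of the standard value-iteration machinery it builds on, the toy grid below reduces the immediate reward of an obstacle cell so that greedy paths learn to avoid it. Grid size, rewards, and discount factor are assumptions for illustration.

        # Hypothetical sketch: value iteration on a small grid where an obstacle cell
        # has had its immediate reward reduced, so planned paths avoid it.
        import numpy as np

        rows, cols, gamma = 4, 4, 0.9
        reward = np.full((rows, cols), -1.0)     # ordinary step cost
        reward[1, 2] = -10.0                     # obstacle: reduced immediate reward
        reward[3, 3] = 10.0                      # goal reward
        goal = (3, 3)

        V = np.zeros((rows, cols))
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        for _ in range(100):
            for r in range(rows):
                for c in range(cols):
                    if (r, c) == goal:
                        V[r, c] = reward[goal]
                        continue
                    best = -np.inf
                    for dr, dc in moves:
                        nr = min(max(r + dr, 0), rows - 1)
                        nc = min(max(c + dc, 0), cols - 1)
                        best = max(best, reward[r, c] + gamma * V[nr, nc])
                    V[r, c] = best
        print(np.round(V, 1))   # greedy moves w.r.t. V route around the low-reward cell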

  3. Optimal use of data in parallel tempering simulations for the construction of discrete-state Markov models of biomolecular dynamics.

    Science.gov (United States)

    Prinz, Jan-Hendrik; Chodera, John D; Pande, Vijay S; Swope, William C; Smith, Jeremy C; Noé, Frank

    2011-06-28

    Parallel tempering (PT) molecular dynamics simulations have been extensively investigated as a means of efficient sampling of the configurations of biomolecular systems. Recent work has demonstrated how the short physical trajectories generated in PT simulations of biomolecules can be used to construct the Markov models describing biomolecular dynamics at each simulated temperature. While this approach describes the temperature-dependent kinetics, it does not make optimal use of all available PT data, instead estimating the rates at a given temperature using only data from that temperature. This can be problematic, as some relevant transitions or states may not be sufficiently sampled at the temperature of interest, but might be readily sampled at nearby temperatures. Further, the comparison of temperature-dependent properties can suffer from the false assumption that data collected from different temperatures are uncorrelated. We propose here a strategy in which, by a simple modification of the PT protocol, the harvested trajectories can be reweighted, permitting data from all temperatures to contribute to the estimated kinetic model. The method reduces the statistical uncertainty in the kinetic model relative to the single temperature approach and provides estimates of transition probabilities even for transitions not observed at the temperature of interest. Further, the method allows the kinetics to be estimated at temperatures other than those at which simulations were run. We illustrate this method by applying it to the generation of a Markov model of the conformational dynamics of the solvated terminally blocked alanine peptide.
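
    A minimal sketch of the single-temperature step that the paper improves on: estimating a Markov (state) model transition matrix from discretized trajectories at one temperature by counting transitions at a lag time. The data are synthetic, and the reweighting of data across temperatures is not shown.

        # Hypothetical sketch: transition matrix at lag tau estimated by counting
        # transitions in discretized trajectories from a single temperature.
        import numpy as np

        rng = np.random.default_rng(1)
        trajs = [rng.integers(0, 3, size=500) for _ in range(4)]   # synthetic state sequences
        n_states, tau = 3, 5

        C = np.zeros((n_states, n_states))
        for traj in trajs:
            for a, b in zip(traj[:-tau], traj[tau:]):
                C[a, b] += 1.0                       # count transitions at lag tau

        T = C / C.sum(axis=1, keepdims=True)         # row-normalise counts
        print("estimated transition matrix at lag", tau, ":\n", np.round(T, 3))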

  4. Memorized discrete systems and time-delay

    CERN Document Server

    Luo, Albert C J

    2017-01-01

    This book examines discrete dynamical systems with memory—nonlinear systems that exist extensively in biological organisms and financial and economic organizations, and time-delay systems that can be discretized into the memorized, discrete dynamical systems. The book further discusses stability and bifurcations of time-delay dynamical systems that can be investigated through memorized dynamical systems as well as bifurcations of memorized nonlinear dynamical systems, discretization methods of time-delay systems, and periodic motions to chaos in nonlinear time-delay systems. The book helps readers find analytical solutions of MDS, change traditional perturbation analysis in time-delay systems, detect motion complexity and singularity in MDS, and determine stability, bifurcation, and chaos in any time-delay system.

  5. Discrete time process algebra and the semantics of SDL

    NARCIS (Netherlands)

    J.A. Bergstra; C.A. Middelburg; Y.S. Usenko (Yaroslav)

    1998-01-01

    We present an extension of discrete time process algebra with relative timing where recursion, propositional signals and conditions, a counting process creation operator, and the state operator are combined. Except the counting process creation operator, which subsumes the original

  6. Online soft sensor for hybrid systems with mixed continuous and discrete measurements

    Czech Academy of Sciences Publication Activity Database

    Suzdaleva, Evgenia; Nagy, Ivan

    2012-01-01

    Roč. 36, č. 10 (2012), s. 294-300 ISSN 0098-1354 R&D Projects: GA MŠk 1M0572; GA TA ČR TA01030123 Grant - others:Skoda Auto, a.s.(CZ) ENS/2009/UTIA Institutional research plan: CEZ:AV0Z10750506 Keywords : online state prediction * hybrid filter * state-space model * mixed data Subject RIV: BC - Control Systems Theory Impact factor: 2.091, year: 2012 http://library.utia.cas.cz/separaty/2011/AS/suzdaleva-online soft sensor for hybrid systems with mixed continuous and discrete measurements.pdf

  7. In-plane material continuity for the discrete material optimization method

    DEFF Research Database (Denmark)

    Sørensen, Rene; Lund, Erik

    2015-01-01

    When performing discrete material optimization of laminated composite structures, the variation of the in-plane material continuity is typically governed by the size of the finite element discretization. For a fine mesh, this can lead to designs that cannot be manufactured due to the complexity...

  8. Time Discretization Techniques

    KAUST Repository

    Gottlieb, S.

    2016-10-12

    The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include multistep, multistage, or multiderivative methods, as well as a combination of these approaches. The time step constraint is mainly a result of the absolute stability requirement, as well as additional conditions that mimic physical properties of the solution, such as positivity or total variation stability. These conditions may be required for stability when the solution develops shocks or sharp gradients. This chapter contains a review of some of the methods historically used for the evolution of hyperbolic PDEs, as well as cutting edge methods that are now commonly used.

  9. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
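
    A minimal sketch of the linear-algebra formulation of the forward algorithm (without the parallelization itself): the forward vector is propagated by one matrix-vector product per observation. All parameter values below are illustrative assumptions.

        # Hypothetical sketch: forward algorithm as repeated matrix-vector products,
        # alpha_{t+1} = B[:, o_{t+1}] * (A.T @ alpha_t), with rescaling for stability.
        import numpy as np

        A = np.array([[0.7, 0.3],            # state transition matrix (illustrative)
                      [0.4, 0.6]])
        B = np.array([[0.9, 0.1],            # emission probabilities, rows = states
                      [0.2, 0.8]])
        pi = np.array([0.5, 0.5])            # initial state distribution
        obs = [0, 1, 1, 0]                   # observed symbol indices

        alpha = pi * B[:, obs[0]]
        c = alpha.sum()
        alpha /= c
        log_like = np.log(c)
        for o in obs[1:]:
            alpha = B[:, o] * (A.T @ alpha)  # one linear-algebra step per observation
            c = alpha.sum()                  # rescale to avoid underflow
            alpha /= c
            log_like += np.log(c)
        print("log-likelihood of the observation sequence:", log_like)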

  10. Computational Procedures for a Class of GI/D/k Systems in Discrete Time

    Directory of Open Access Journals (Sweden)

    Md. Mostafizur Rahman

    2009-01-01

    Full Text Available A class of discrete time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. Then the queue length is set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result we were able to keep the computation time very low. Moreover, use of the resulting structural properties makes the computation of the distribution of queue length of the transformed system efficient. The computation of the distribution of waiting time is also shown to be simple by exploiting the special structures.
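
    A hedged sketch of the classical successive-substitution scheme for the QBD rate matrix R, the minimal nonnegative solution of R = A0 + R A1 + R^2 A2; the blocks below are illustrative and do not exploit the special structure described in the paper.

        # Hypothetical sketch: fixed-point iteration for the QBD rate matrix R.
        import numpy as np

        A0 = np.array([[0.10, 0.10], [0.05, 0.05]])   # up one level (illustrative)
        A1 = np.array([[0.30, 0.20], [0.20, 0.30]])   # same level
        A2 = np.array([[0.20, 0.10], [0.30, 0.10]])   # down one level

        R = np.zeros_like(A0)
        for _ in range(1000):
            R_new = A0 + R @ A1 + R @ R @ A2
            if np.max(np.abs(R_new - R)) < 1e-12:
                R = R_new
                break
            R = R_new
        print("R =\n", np.round(R, 6))
        print("spectral radius of R:", max(abs(np.linalg.eigvals(R))))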

  11. A hidden Markov model approach to analyze longitudinal ternary outcomes when some observed states are possibly misclassified.

    Science.gov (United States)

    Benoit, Julia S; Chan, Wenyaw; Luo, Sheng; Yeh, Hung-Wen; Doody, Rachelle

    2016-04-30

    Understanding the dynamic disease process is vital in early detection, diagnosis, and measuring progression. Continuous-time Markov chain (CTMC) methods have been used to estimate state-change intensities but challenges arise when stages are potentially misclassified. We present an analytical likelihood approach where the hidden state is modeled as a three-state CTMC model allowing for some observed states to be possibly misclassified. Covariate effects of the hidden process and misclassification probabilities of the hidden state are estimated without information from a 'gold standard' as comparison. Parameter estimates are obtained using a modified expectation-maximization (EM) algorithm, and identifiability of CTMC estimation is addressed. Simulation studies and an application studying Alzheimer's disease caregiver stress-levels are presented. The method was highly sensitive to detecting true misclassification and did not falsely identify error in the absence of misclassification. In conclusion, we have developed a robust longitudinal method for analyzing categorical outcome data when classification of disease severity stage is uncertain and the purpose is to study the process' transition behavior without a gold standard. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM model was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct prediction against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting

  13. Model Checking Multivariate State Rewards

    DEFF Research Database (Denmark)

    Nielsen, Bo Friis; Nielson, Flemming; Nielson, Hanne Riis

    2010-01-01

    We consider continuous stochastic logics with state rewards that are interpreted over continuous time Markov chains. We show how results from multivariate phase type distributions can be used to obtain higher-order moments for multivariate state rewards (including covariance). We also generalise...

  14. A discretized algorithm for the solution of a constrained, continuous ...

    African Journals Online (AJOL)

    A discretized algorithm for the solution of a constrained, continuous quadratic control problem. ... The results obtained show that the Discretized constrained algorithm (DCA) is much more accurate and more efficient than some of these techniques, particularly the FSA. Journal of the Nigerian Association of Mathematical ...

  15. First and second order Markov chain models for synthetic generation of wind speed time series

    International Nuclear Information System (INIS)

    Shamshad, A.; Bawadi, M.A.; Wan Hussin, W.M.A.; Majid, T.A.; Sanusi, S.A.M.

    2005-01-01

    Hourly wind speed time series data of two meteorological stations in Malaysia have been used for stochastic generation of wind speed data using the transition matrix approach of the Markov chain process. The transition probability matrices have been formed using two different approaches: the first approach involves the use of the first order transition probability matrix of a Markov chain, and the second involves the use of a second order transition probability matrix that uses the current and preceding values to describe the next wind speed value. The algorithm to generate the wind speed time series from the transition probability matrices is described. Uniform random number generators have been used for transition between successive time states and within-state wind speed values. The ability of each approach to retain the statistical properties of the observed wind speed in the generated series is assessed. The main statistical properties used for this purpose are mean, standard deviation, median, percentiles, Weibull distribution parameters, autocorrelations and spectral density of wind speed values. The comparison of the observed wind speed and the synthetically generated series shows that the statistical characteristics are satisfactorily preserved
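
    A minimal sketch of the first-order approach only: estimate a transition matrix from a discretized wind speed series, then generate a synthetic series by sampling successive states, with within-state values drawn uniformly. The stand-in data, bin edges, and gamma distribution are assumptions for illustration, not the study's observations.

        # Hypothetical sketch: first-order Markov synthesis of a wind speed series.
        import numpy as np

        rng = np.random.default_rng(42)
        observed = rng.gamma(shape=2.0, scale=3.0, size=2000)      # stand-in for hourly data
        edges = np.array([0, 2, 4, 6, 8, 10, 15, 30])              # wind-speed bins (m/s)
        states = np.clip(np.digitize(observed, edges) - 1, 0, len(edges) - 2)

        n = len(edges) - 1
        P = np.zeros((n, n))
        for a, b in zip(states[:-1], states[1:]):
            P[a, b] += 1.0                                          # transition counts
        P = P / np.maximum(P.sum(axis=1, keepdims=True), 1.0)       # row-normalise

        # generate a synthetic series of the same length
        s = states[0]
        synthetic = []
        for _ in range(len(observed)):
            s = rng.choice(n, p=P[s] / P[s].sum())                  # next state
            synthetic.append(rng.uniform(edges[s], edges[s + 1]))   # uniform within the bin
        print("observed mean %.2f, synthetic mean %.2f" % (observed.mean(), np.mean(synthetic)))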

  16. A nonstationary Markov transition model for computing the relative risk of dementia before death

    Science.gov (United States)

    Yu, Lei; Griffith, William S.; Tyas, Suzanne L.; Snowdon, David A.; Kryscio, Richard J.

    2010-01-01

    This paper investigates the long-term behavior of the k-step transition probability matrix for a nonstationary discrete time Markov chain in the context of modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. The authors derive formulas for the following absorption statistics: (1) the relative risk of absorption between competing absorbing states, and (2) the mean and variance of the number of visits among the transient states before absorption. Since absorption is not guaranteed, sufficient conditions are discussed to ensure that the substochastic matrix associated with transitions among transient states converges to zero in the limit. Results are illustrated with an application to the Nun Study, a cohort of 678 participants, 75 to 107 years of age, followed longitudinally with up to ten cognitive assessments over a fifteen-year period. PMID:20087848
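
    A minimal sketch of the stationary-chain version of these absorption statistics (the paper's chain is nonstationary): with transient states such as intact, MCI, and GI and two competing absorbing states, the absorption probabilities follow from B = (I - Q)^-1 R. The transition probabilities below are illustrative, not estimates from the Nun Study.

        # Hypothetical sketch: absorption probabilities and relative risk of absorption
        # into "dementia" versus "death" for transient states (intact, MCI, GI).
        import numpy as np

        # one-step probabilities among transient states ...
        Q = np.array([[0.85, 0.08, 0.02],
                      [0.05, 0.75, 0.10],
                      [0.00, 0.05, 0.75]])
        # ... and from transient states into the absorbing states (dementia, death)
        R = np.array([[0.01, 0.04],
                      [0.05, 0.05],
                      [0.10, 0.10]])

        N = np.linalg.inv(np.eye(3) - Q)      # expected visits among transient states
        B = N @ R                             # absorption probabilities
        relative_risk = B[:, 0] / B[:, 1]     # dementia vs. death, per starting state
        print("absorption probabilities:\n", np.round(B, 3))
        print("relative risk (dementia/death):", np.round(relative_risk, 3))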

  17. A Multi-State Physics Modeling approach for the reliability assessment of Nuclear Power Plants piping systems

    International Nuclear Information System (INIS)

    Di Maio, Francesco; Colli, Davide; Zio, Enrico; Tao, Liu; Tong, Jiejuan

    2015-01-01

    Highlights: • We model piping system degradation of Nuclear Power Plants under uncertainty. • We use Multi-State Physics Modeling (MSPM) to describe a continuous degradation process. • We propose a Monte Carlo (MC) method for calculating time-dependent transition rates. • We apply MSPM to a piping system undergoing thermal fatigue. - Abstract: A Multi-State Physics Modeling (MSPM) approach is here proposed for degradation modeling and failure probability quantification of Nuclear Power Plants (NPPs) piping systems. This approach integrates multi-state modeling to describe the degradation process by transitions among discrete states (e.g., no damage, micro-crack, flaw, rupture, etc.), with physics modeling by (physical) equations to describe the continuous degradation process within the states. We propose a Monte Carlo (MC) simulation method for the evaluation of the time-dependent transition rates between the states of the MSPM. Uncertainty in the parameters and external factors influencing the degradation process is accounted for. The proposed modeling approach is applied to a benchmark problem of a piping system of a Pressurized Water Reactor (PWR) undergoing thermal fatigue. The results are compared with those obtained by a continuous-time homogeneous Markov Chain Model.

  18. Robust uniform persistence in discrete and continuous dynamical systems using Lyapunov exponents.

    Science.gov (United States)

    Salceanu, Paul L

    2011-07-01

    This paper extends the work of Salceanu and Smith [12, 13] where Lyapunov exponents were used to obtain conditions for uniform persistence in a class of dissipative discrete-time dynamical systems on the positive orthant of R(m), generated by maps. Here a unified approach is taken, for both discrete and continuous time, and the dissipativity assumption is relaxed. Sufficient conditions are given for compact subsets of an invariant part of the boundary of R(m+) to be robust uniform weak repellers. These conditions require that Lyapunov exponents be positive on such sets. It is shown how this leads to robust uniform persistence. The results apply to the investigation of robust uniform persistence of the disease in host populations, as shown in an application.

  19. The number of bound states for a discrete Schroedinger operator on ZN, N≥1, lattices

    International Nuclear Information System (INIS)

    Karachalios, N I

    2008-01-01

    We consider the discrete Schroedinger operator −Δ_d + U on Z^N, N ≥ 1, in the case of a potential whose negative part lies in an appropriate ℓ^σ-space (i.e., decays at an appropriate rate). We present a discrete analog of the method of Li and Yau (1983 Commun. Math. Phys. 88 309-18), proving an explicit upper estimate on the number of bound states N_d(0) = #{j : μ_j ≤ 0}, which is independent of the dimension of the lattice. This is a major difference from the continuous counterpart estimate, which is not valid when N = 1, 2. As a consequence, a dimension-independent smallness criterion for the existence of bound states is derived, in contrast to the continuous case as well as to the discrete case of vanishing potential. A short comment is made on possible applications of the results to the study of the dynamics of some particular spatially discrete nonlinear systems.

  20. Continuous Time Structural Equation Modeling with R Package ctsem

    Directory of Open Access Journals (Sweden)

    Charles C. Driver

    2017-04-01

    Full Text Available We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals, therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.

  1. Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains

    Science.gov (United States)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2018-03-01

    We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.

  2. A Multi-Armed Bandit Approach to Following a Markov Chain

    Science.gov (United States)

    2017-06-01

    Master of Science in Operations Research thesis, Naval Postgraduate School, June 2017. Keywords: stochastic optimization, machine learning, discrete-time Markov chains, stochastic Multi-Armed Bandit, combinatorial Multi-Armed Bandit, online learning.

  3. Fermion systems in discrete space-time

    International Nuclear Information System (INIS)

    Finster, Felix

    2007-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure

  4. Fermion systems in discrete space-time

    Energy Technology Data Exchange (ETDEWEB)

    Finster, Felix [NWF I - Mathematik, Universitaet Regensburg, 93040 Regensburg (Germany)

    2007-05-15

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  5. Fermion Systems in Discrete Space-Time

    OpenAIRE

    Finster, Felix

    2006-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  6. Fermion systems in discrete space-time

    Science.gov (United States)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  7. Objective classification of latent behavioral states in bio-logging data using multivariate-normal hidden Markov models.

    Science.gov (United States)

    Phillips, Joe Scutt; Patterson, Toby A; Leroy, Bruno; Pilling, Graham M; Nicol, Simon J

    2015-07-01

    Analysis of complex time-series data from ecological system studies requires quantitative tools for objective description and classification. These tools must take into account largely ignored problems of bias in manual classification, autocorrelation, and noise. Here we describe a method using existing estimation techniques for multivariate-normal hidden Markov models (HMMs) to develop such a classification. We use high-resolution behavioral data from bio-loggers attached to free-roaming pelagic tuna as an example. Observed patterns are assumed to be generated by an unseen Markov process that switches between several multivariate-normal distributions. Our approach is assessed in two parts. The first uses simulation experiments, from which the ability of the HMM to estimate known parameter values is examined using artificial time series of data consistent with hypotheses about pelagic predator foraging ecology. The second is the application to time series of continuous vertical movement data from yellowfin and bigeye tuna taken from tuna tagging experiments. These data were compressed into summary metrics capturing the variation of patterns in diving behavior and formed into a multivariate time series used to estimate a HMM. Each observation was associated with covariate information incorporating the effect of day and night on behavioral switching. Known parameter values were well recovered by the HMMs in our simulation experiments, resulting in mean correct classification rates of 90-97%, although some variance-covariance parameters were estimated less accurately. HMMs with two distinct behavioral states were selected for every time series of real tuna data, predicting a shallow warm state, which was similar across all individuals, and a deep colder state, which was more variable. Marked diurnal behavioral switching was predicted, consistent with many previous empirical studies on tuna. HMMs provide easily interpretable models for the objective classification of

  8. Flux through a Markov chain

    International Nuclear Information System (INIS)

    Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel

    2016-01-01

    Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, that we will call mass, which moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modelization of open systems whose dynamics has a Markov property.

  9. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    Science.gov (United States)

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6 months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  10. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  11. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    Science.gov (United States)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. By the assumption that the system response is incrementally bounded, two sufficient conditions are subsequently derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring the Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed that has been fabricated for performance evaluation of estimation techniques over wireless networks under realistic radio channel conditions.

  12. Time-Discrete Higher-Order ALE Formulations: Stability

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    on the stability of the PDE but may influence that of a discrete scheme. We examine this critical issue for higher-order time stepping without space discretization. We propose time-discrete discontinuous Galerkin (dG) numerical schemes of any order for a time

  13. Similarities of discrete and continuous Sturm-Liouville problems

    Directory of Open Access Journals (Sweden)

    Kazem Ghanbari

    2007-12-01

    Full Text Available In this paper we present a study on the analogous properties of discrete and continuous Sturm-Liouville problems arising in matrix analysis and differential equations, respectively. Green's functions in both cases have analogous expressions in terms of the spectral data. Most of the results associated with inverse problems in both cases are identical. In particular, in both cases Weyl's m-function determines the Sturm-Liouville operators uniquely. Moreover, the well known Rayleigh-Ritz Theorem in linear algebra can be proved by using the concept of Green's function in the discrete case.

  14. Data-based inference of generators for Markov jump processes using convex optimization

    NARCIS (Netherlands)

    D.T. Crommelin (Daan); E. Vanden-Eijnden (Eric)

    2009-01-01

    A variational approach to the estimation of generators for Markov jump processes from discretely sampled data is discussed and generalized. In this approach, one first calculates the spectrum of the discrete maximum likelihood estimator for the transition matrix consistent with
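
    A crude baseline for the problem the paper addresses (not the paper's convex-optimization estimator): take the empirical transition matrix at the sampling interval and apply a matrix logarithm. As the record suggests, the result need not be a valid generator, which is what motivates a constrained, variational estimate. The generator, lag, and sample size below are assumptions for illustration.

        # Hypothetical sketch: naive generator estimate L ≈ logm(P_hat) / tau from
        # discretely sampled data; off-diagonal entries can come out negative, which
        # is why a constrained (convex) estimator is preferable.
        import numpy as np
        from scipy.linalg import logm, expm

        rng = np.random.default_rng(0)
        L_true = np.array([[-0.6,  0.4,  0.2],
                           [ 0.3, -0.5,  0.2],
                           [ 0.1,  0.3, -0.4]])      # "true" generator (illustrative)
        tau = 0.5
        P = expm(L_true * tau)                        # exact transition matrix at lag tau

        # simulate a discretely sampled chain and count transitions
        x, counts = 0, np.zeros((3, 3))
        for _ in range(20000):
            y = rng.choice(3, p=P[x])
            counts[x, y] += 1.0
            x = y
        P_hat = counts / counts.sum(axis=1, keepdims=True)

        L_hat = np.real(logm(P_hat)) / tau            # naive estimate
        print("estimated generator:\n", np.round(L_hat, 3))
        print("min off-diagonal entry:", np.min(L_hat[~np.eye(3, dtype=bool)]))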

  15. A novel seizure detection algorithm informed by hidden Markov model event states

    Science.gov (United States)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the onset of unequivocal epileptic activity (unequivocal epileptic onset (UEO)). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce false positive rate relative to current industry standards.

  16. Non-cooperative stochastic differential game theory of generalized Markov jump linear systems

    CERN Document Server

    Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning

    2017-01-01

    This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its application in the fields of finance and insurance. The book is an in-depth study of continuous-time and discrete-time linear quadratic stochastic differential games, aiming to establish a relatively complete framework of dynamic non-cooperative differential game theory. Using the dynamic programming principle and the Riccati equation, it derives existence conditions and calculation methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on the game-theoretic method, this book studies the corresponding robust control problem, especially the existence condition and design method of the optimal robust control strategy. The book discusses the theoretical results and their applications to risk control, option pricing, and the optimal investment problem in the field of finance and insurance, enriching the...

  17. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    Science.gov (United States)

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.

  18. Non-equilibrium steady states: maximization of the Shannon entropy associated with the distribution of dynamical trajectories in the presence of constraints

    International Nuclear Information System (INIS)

    Monthus, Cécile

    2011-01-01

    Filyokov and Karpov (1967 Inzh.-Fiz. Zh. 13 624) have proposed a theory of non-equilibrium steady states in direct analogy with the theory of equilibrium states: the principle is to maximize the Shannon entropy associated with the probability distribution of dynamical trajectories in the presence of constraints, including the macroscopic current of interest, via the method of Lagrange multipliers. This maximization leads directly to the generalized Gibbs distribution for the probability distribution of dynamical trajectories, and to some fluctuation relation of the integrated current. The simplest stochastic dynamics where these ideas can be applied are discrete-time Markov chains, defined by transition probabilities W i→j between configurations i and j: instead of choosing the dynamical rules W i→j a priori, one determines the transition probabilities and the associated stationary state that maximize the entropy of dynamical trajectories with the other physical constraints that one wishes to impose. We give a self-contained and unified presentation of this type of approach, both for discrete-time Markov chains and for continuous-time master equations. The obtained results are in full agreement with the Bayesian approach introduced by Evans (2004 Phys. Rev. Lett. 92 150601) under the name 'Non-equilibrium Counterpart to detailed balance', and with the 'invariant quantities' derived by Baule and Evans (2008 Phys. Rev. Lett. 101 240601), but provide a slightly different perspective via the formulation in terms of an eigenvalue problem

  19. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.

  20. An Evaluation of Different Training Sample Allocation Schemes for Discrete and Continuous Land Cover Classification Using Decision Tree-Based Algorithms

    Directory of Open Access Journals (Sweden)

    René Roland Colditz

    2015-07-01

    Full Text Available Land cover mapping for large regions often employs satellite images of medium to coarse spatial resolution, which complicates mapping of discrete classes. Class memberships, which estimate the proportion of each class for every pixel, have been suggested as an alternative. This paper compares different strategies of training data allocation for discrete and continuous land cover mapping using classification and regression tree algorithms. In addition to measures of discrete and continuous map accuracy, correct estimation of the total area is another important criterion. A subset of the 30 m national land cover dataset of 2006 (NLCD2006) of the United States was used as reference set to classify NADIR BRDF-adjusted surface reflectance time series of MODIS at 900 m spatial resolution. Results show that sampling of heterogeneous pixels and sample allocation according to the expected area of each class is best for classification trees. Regression trees for continuous land cover mapping should be trained with random allocation, and predictions should be normalized with a linear scaling function to correctly estimate the total area. Of the tested algorithms, random forest classification yields lower errors than boosted trees of C5.0, and Cubist shows higher accuracies than random forest regression.

  1. Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach

    Science.gov (United States)

    Demirer, Nazli

    systems with a single agent and systems with a large number of agents due to the probabilistic nature, where the probability distribution of each agent's state evolves according to a finite-state and discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem has the form of a Linear Matrix Inequality Problem (LMI), with LMI formulation of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on the real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way, utilizing existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as a part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions, that is, there exists a set of actions for the ON mode and another set for the OFF mode. We give a formulation for a new Markov chain synthesis problem, with additional measurements for the state transitions, where a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
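
    Independently of the LMI/QP synthesis machinery, the underlying density model is easy to state: the agent density vector evolves as x[k+1] = M x[k] for a column-stochastic Markov matrix M, and safety constraints bound the density allowed in each bin. The sketch below only illustrates this propagation-and-check step with a hypothetical 4-bin matrix and cap; it does not perform the synthesis described in the abstract.

```python
# Minimal sketch (not the paper's LMI/QP synthesis): propagate an agent-density
# vector x[k+1] = M @ x[k] under a column-stochastic Markov matrix M and check a
# per-bin safety bound.  The matrix and the bound below are hypothetical examples.
import numpy as np

M = np.array([[0.8, 0.1, 0.0, 0.0],
              [0.2, 0.7, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.3],
              [0.0, 0.0, 0.2, 0.7]])      # columns sum to one
assert np.allclose(M.sum(axis=0), 1.0)

x = np.array([1.0, 0.0, 0.0, 0.0])        # all agents start in bin 0
density_cap = 0.9                          # hypothetical per-bin safety constraint

for k in range(50):
    x = M @ x                              # one step of the density evolution
    if np.any(x > density_cap):
        print(f"safety bound violated at step {k}")
        break
print("density after propagation:", np.round(x, 3))
```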

  2. The algebra of the general Markov model on phylogenetic trees and networks.

    Science.gov (United States)

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  3. Discrete event simulation tool for analysis of qualitative models of continuous processing systems

    Science.gov (United States)

    Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)

    1990-01-01

    An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.

  4. On Stochastic Finite-Time Control of Discrete-Time Fuzzy Systems with Packet Dropout

    Directory of Open Access Journals (Sweden)

    Yingqi Zhang

    2012-01-01

    Full Text Available This paper is concerned with the stochastic finite-time stability and stochastic finite-time boundedness problems for one family of fuzzy discrete-time systems over networks with packet dropout, parametric uncertainties, and time-varying norm-bounded disturbance. Firstly, we present the dynamic model description studied, in which the discrete-time fuzzy T-S systems with packet loss can be described by one class of fuzzy Markovian jump systems. Then, the concepts of stochastic finite-time stability and stochastic finite-time boundedness and problem formulation are given. Based on Lyapunov function approach, sufficient conditions on stochastic finite-time stability and stochastic finite-time boundedness are established for the resulting closed-loop fuzzy discrete-time system with Markovian jumps, and state-feedback controllers are designed to ensure stochastic finite-time stability and stochastic finite-time boundedness of the class of fuzzy systems. The stochastic finite-time stability and stochastic finite-time boundedness criteria can be tackled in the form of linear matrix inequalities with a fixed parameter. As an auxiliary result, we also give sufficient conditions on the stochastic stability of the class of fuzzy T-S systems with packet loss. Finally, two illustrative examples are presented to show the validity of the developed methodology.

  5. Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?

    Science.gov (United States)

    Penney, Mark D.; Enshan Koh, Dax; Spekkens, Robert W.

    2017-07-01

    It is straightforward to compute the transition amplitudes of a quantum circuit using the sum-over-paths methodology when the gates in the circuit are balanced, where a balanced gate is one for which all non-zero transition amplitudes are of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits.

  6. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization.

    Science.gov (United States)

    Stifter, Cynthia A; Rovine, Michael

    2015-01-01

    The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age using hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad becomes more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.

  7. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  8. Discrete-Time LPV Current Control of an Induction Motor

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, Klaus

    2003-01-01

    In this paper we apply a new method for gain-scheduled output feedback control of nonlinear systems to current control of an induction motor. The method relies on recently developed controller synthesis results for linear parameter-varying (LPV) systems, where the controller synthesis is formulated...... as a set of linear matrix inequalities with full-block multipliers. A standard nonlinear model of the motor is constructed and written on LPV form. We then show that, although originally developed in continuous time, the controller synthesis results can be applied to a discrete-time model as well without...... further complications. The synthesis method is applied to the model, yielding an LPV discrete-time controller. Finally, the efficiency of the control scheme is validated via simulations as well as on the actual induction motor, both in open-loop current control and when an outer speed control loop...

  9. An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real--Time Systems

    OpenAIRE

    Palopoli, Luigi; Fontanelli, Daniele; Abeni, Luca; Villalba Frias, Bernardo

    2015-01-01

    We show a methodology for the computation of the probability of deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique for the system that reduces the computation of such a probability to that of the steady state probability of an infinite state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution where different accuracy/computation time trade-offs can be...
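
    The central quantity in this reduction is the steady-state distribution of a discrete-time Markov chain. The paper works with an infinite-state chain with periodic structure; the sketch below only shows the generic finite-state building block (a hypothetical 3-state matrix), solving pi P = pi with the normalization sum(pi) = 1, which is how one would evaluate a finite truncation of such a chain.

```python
# Generic building block (not the paper's specialized periodic solver): steady-state
# distribution of a finite discrete-time Markov chain, obtained by solving pi P = pi
# together with the normalization sum(pi) = 1.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],     # hypothetical 3-state transition matrix
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # pi (P - I) = 0  and  sum(pi) = 1
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state distribution:", pi)        # e.g. the probability of a 'deadline miss' state
```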

  10. Exact Markov chains versus diffusion theory for haploid random mating.

    Science.gov (United States)

    Tyvand, Peder A; Thorvaldsen, Steinar

    2010-05-01

    Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
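
    For the neutral Wright-Fisher model, the exact chain referred to above is easy to write down: with N gene copies and i copies of the focal allele, the next generation holds j copies with Binomial(N, i/N) probability. The sketch below (hypothetical N, not taken from the paper) builds that transition matrix and checks the classical decay of expected heterozygosity by a factor 1 - 1/N per generation.

```python
# Minimal sketch of the exact (neutral) Wright-Fisher Markov chain with N gene copies:
# given i copies of an allele now, the next generation has j copies with
# Binomial(N, i/N) probability.  N is a small hypothetical population size.
import numpy as np
from scipy.stats import binom

N = 20
i = np.arange(N + 1)
P = binom.pmf(i[np.newaxis, :], N, i[:, np.newaxis] / N)   # P[i, j] = P(j copies | i copies)
assert np.allclose(P.sum(axis=1), 1.0)

# Expected heterozygosity decays by a factor (1 - 1/N) per generation;
# check this against one step of the exact chain starting from i = N/2.
p = np.zeros(N + 1)
p[N // 2] = 1.0
p = p @ P
het = np.sum(p * 2 * (i / N) * (1 - i / N))
print("heterozygosity after one generation:", het, " theory:", 0.5 * (1 - 1 / N))
```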

  11. A discrete classical space-time could require 6 extra-dimensions

    Science.gov (United States)

    Guillemant, Philippe; Medale, Marc; Abid, Cherifa

    2018-01-01

    We consider a discrete space-time in which conservation laws are computed in such a way that the density of information is kept bounded. We use a 2D billiard as a toy model to compute the uncertainty propagation in ball positions after every shock and the corresponding loss of phase information. Our main result is the computation of a critical time step above which billiard calculations are no longer deterministic, meaning that a multiverse of distinct billiard histories begins to appear, caused by the lack of information. Then, we highlight unexpected properties of this critical time step and the subsequent exponential evolution of the number of histories with time, to observe that after certain duration all billiard states could become possible final states, independent of initial conditions. We conclude that if our space-time is really a discrete one, one would need to introduce extra-dimensions in order to provide supplementary constraints that specify which history should be played.

  12. Pairwise Choice Markov Chains

    OpenAIRE

    Ragain, Stephen; Ugander, Johan

    2016-01-01

    As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...

  13. Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?

    International Nuclear Information System (INIS)

    Penney, Mark D; Koh, Dax Enshan; Spekkens, Robert W

    2017-01-01

    It is straightforward to compute the transition amplitudes of a quantum circuit using the sum-over-paths methodology when the gates in the circuit are balanced, where a balanced gate is one for which all non-zero transition amplitudes are of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits. (paper)

  14. Markov processes an introduction for physical scientists

    CERN Document Server

    Gillespie, Daniel T

    1991-01-01

    Markov process theory is basically an extension of ordinary calculus to accommodate functions whos time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level.Key Features* A self-contained, prgamatic exposition of the needed elements of random variable theory* Logically integrated derviations of the Chapman-Kolmogorov e

  15. Prediction of inspection intervals using the Markov analysis

    International Nuclear Information System (INIS)

    Rea, R.; Arellano, J.

    2005-01-01

    To solve the unmanageable number of states of Markov of systems that have a great number of components, it is intends a modification to the method of Markov, denominated Markov truncated analysis, in which is assumed that it is worthless the dependence among faults of components. With it the number of states is increased in a lineal way (not exponential) with the number of components of the system, simplifying the analysis vastly. As example, the proposed method was applied to the system HPCS of the CLV considering its 18 main components. It thinks about that each component can take three states: operational, with hidden fault and with revealed fault. Additionally, it takes into account the configuration of the system HPCS by means of a block diagram of dependability to estimate their unavailability at level system. The results of the model here proposed are compared with other methods and approaches used to simplify the Markov analysis. It also intends the modification of the intervals of inspection of three components of the system HPCS. This finishes with base in the developed Markov model and in the maximum time allowed by the code ASME (NUREG-1482) to inspect components of systems that are in reservation in nuclear power plants. (Author)

  16. Discrete-time moment closure models for epidemic spreading in populations of interacting individuals.

    Science.gov (United States)

    Frasca, Mattia; Sharkey, Kieran J

    2016-06-21

    Understanding the dynamics of spread of infectious diseases between individuals is essential for forecasting the evolution of an epidemic outbreak or for defining intervention policies. The problem is addressed by many approaches including stochastic and deterministic models formulated at diverse scales (individuals, populations) and different levels of detail. Here we consider discrete-time SIR (susceptible-infectious-removed) dynamics propagated on contact networks. We derive a novel set of 'discrete-time moment equations' for the probability of the system states at the level of individual nodes and pairs of nodes. These equations form a set which we close by introducing appropriate approximations of the joint probabilities appearing in them. For the example case of SIR processes, we formulate two types of model, one assuming statistical independence at the level of individuals and one at the level of pairs. From the pair-based model we then derive a model at the level of the population which captures the behavior of epidemics on homogeneous random networks. With respect to their continuous-time counterparts, the models include a larger number of possible transitions from one state to another and joint probabilities with a larger number of individuals. The approach is validated through numerical simulation over different network topologies. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
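
    For readers who want a concrete reference point, the sketch below is a minimal individual-based, discrete-time SIR simulation on an Erdős-Rényi contact network of the kind such moment-closure models are validated against. All parameters (network size, edge probability, per-contact infection probability tau, recovery probability gamma) are hypothetical, not taken from the paper.

```python
# Minimal individual-based discrete-time SIR simulation on an Erdos-Renyi contact
# network (hypothetical parameters), of the kind used to validate moment-closure models.
import numpy as np

rng = np.random.default_rng(0)
N, p_edge, tau, gamma, T = 500, 0.02, 0.1, 0.2, 60   # nodes, edge prob., infection/recovery prob., steps

adj = rng.random((N, N)) < p_edge
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(float)                    # symmetric 0/1 adjacency, no self-loops

state = np.zeros(N, dtype=int)                       # 0 = susceptible, 1 = infectious, 2 = removed
state[rng.choice(N, 5, replace=False)] = 1           # five initial infectious nodes

for t in range(T):
    infectious = state == 1
    n_inf_neigh = adj @ infectious                   # number of infectious neighbours per node
    p_infect = 1.0 - (1.0 - tau) ** n_inf_neigh      # per-step infection probability
    new_inf = (state == 0) & (rng.random(N) < p_infect)
    new_rec = infectious & (rng.random(N) < gamma)
    state[new_inf] = 1
    state[new_rec] = 2

print("final S, I, R counts:", [(state == s).sum() for s in (0, 1, 2)])
```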

  17. Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper

    2007-01-01

    Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice......, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared...... with discrete time processes in the setting of the present paper as well as other spatial-temporal situations....

  18. Reliability evaluation of non-reparable three-state systems using Markov model and its comparison with the UGF and the recursive methods

    International Nuclear Information System (INIS)

    Pourkarim Guilani, Pedram; Sharifi, Mani; Niaki, S.T.A.; Zaretalab, Arash

    2014-01-01

    In multi-state systems (MSS) reliability problems, it is assumed that the components of each subsystem have different performance rates with certain probabilities. This leads into extensive computational efforts involved in using the commonly employed universal generation function (UGF) and the recursive algorithm to obtain reliability of systems consisting of a large number of components. This research deals with evaluating non-repairable three-state systems reliability and proposes a novel method based on a Markov process for which an appropriate state definition is provided. It is shown that solving the derived differential equations significantly reduces the computational time compared to the UGF and the recursive algorithm. - Highlights: • Reliability evaluation of a non-repairable three-state systems is aimed. • A novel method based on a Markov process is proposed. • An appropriate state definition is provided. • Computational time is significantly less compared to the ones in the UGF and the recursive methods
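
    For a single non-repairable three-state component, the Markov approach boils down to solving the Kolmogorov forward equations dp/dt = p·Q for a small generator matrix Q, so p(t) = p(0)·exp(Qt). A minimal sketch with hypothetical transition rates (states: perfect, degraded, failed, with the failed state absorbing) is given below; it is not the paper's full multi-component model.

```python
# Minimal sketch of the Markov approach for a single non-repairable three-state
# component (states: perfect, degraded, failed), with hypothetical transition rates.
# The state probabilities solve dp/dt = p Q, so p(t) = p(0) expm(Q t).
import numpy as np
from scipy.linalg import expm

lam_12, lam_13, lam_23 = 0.02, 0.005, 0.04   # hypothetical degradation/failure rates (1/h)
Q = np.array([[-(lam_12 + lam_13), lam_12,  lam_13],
              [0.0,               -lam_23,  lam_23],
              [0.0,                0.0,     0.0   ]])   # failed state is absorbing

p0 = np.array([1.0, 0.0, 0.0])
for t in (100.0, 500.0, 1000.0):
    p = p0 @ expm(Q * t)
    print(f"t = {t:6.0f} h  P(perfect) = {p[0]:.3f}  P(degraded) = {p[1]:.3f}  P(failed) = {p[2]:.3f}")
```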

  19. Implementation of continuous-variable quantum key distribution with discrete modulation

    Science.gov (United States)

    Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2017-06-01

    We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.

  20. A methodology for stochastic analysis of share prices as Markov chains with finite states.

    Science.gov (United States)

    Mettle, Felix Okoe; Quaye, Enoch Nii Boi; Laryea, Ravenhill Adjetey

    2014-01-01

    Price volatility makes stock investments risky, leaving investors in a critical position when decisions must be made under uncertainty. To improve investor confidence in evaluating exchange markets without resorting to time series methodology, we specify equity price change as a stochastic process assumed to possess Markov dependency, with state transition probability matrices defined over the identified state space (i.e., decrease, stable or increase). We establish that the identified states communicate and that the chains are aperiodic and ergodic, thus possessing limiting distributions. We develop a methodology for determining the expected mean return time for stock price increases and also establish criteria for improving investment decisions based on the highest transition probabilities, lowest mean return times and highest limiting distributions. We further develop an R algorithm for running the methodology introduced. The established methodology is applied to selected equities from Ghana Stock Exchange weekly trading data.
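
    A minimal version of this methodology is sketched below on synthetic prices (the paper provides an R algorithm; Python is used here for consistency with the other sketches in this listing): classify weekly returns as decrease/stable/increase, estimate the transition matrix by counting, and compute the limiting distribution and the mean return times 1/pi_i of an ergodic chain. The thresholds and data are hypothetical.

```python
# Minimal sketch of the methodology on synthetic data: classify weekly price changes
# as decrease (0), stable (1) or increase (2), estimate the transition matrix by
# counting, and compute the limiting distribution and mean return times (1 / pi_i).
import numpy as np

rng = np.random.default_rng(1)
prices = 10 * np.exp(np.cumsum(rng.normal(0, 0.02, 300)))      # synthetic weekly prices

changes = np.diff(prices) / prices[:-1]
states = np.digitize(changes, [-0.005, 0.005])                 # 0: decrease, 1: stable, 2: increase

counts = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)                 # row-normalized transition matrix

# Limiting (stationary) distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

print("transition matrix:\n", np.round(P, 3))
print("limiting distribution:", np.round(pi, 3))
print("mean return times:", np.round(1 / pi, 2))
```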

  1. Multifractality, imperfect scaling and hydrological properties of rainfall time series simulated by continuous universal multifractal and discrete random cascade models

    Directory of Open Access Journals (Sweden)

    F. Serinaldi

    2010-12-01

    Full Text Available Discrete multiplicative random cascade (MRC models were extensively studied and applied to disaggregate rainfall data, thanks to their formal simplicity and the small number of involved parameters. Focusing on temporal disaggregation, the rationale of these models is based on multiplying the value assumed by a physical attribute (e.g., rainfall intensity at a given time scale L, by a suitable number b of random weights, to obtain b attribute values corresponding to statistically plausible observations at a smaller L/b time resolution. In the original formulation of the MRC models, the random weights were assumed to be independent and identically distributed. However, for several studies this hypothesis did not appear to be realistic for the observed rainfall series as the distribution of the weights was shown to depend on the space-time scale and rainfall intensity. Since these findings contrast with the scale invariance assumption behind the MRC models and impact on the applicability of these models, it is worth studying their nature. This study explores the possible presence of dependence of the parameters of two discrete MRC models on rainfall intensity and time scale, by analyzing point rainfall series with 5-min time resolution. Taking into account a discrete microcanonical (MC model based on beta distribution and a discrete canonical beta-logstable (BLS, the analysis points out that the relations between the parameters and rainfall intensity across the time scales are detectable and can be modeled by a set of simple functions accounting for the parameter-rainfall intensity relationship, and another set describing the link between the parameters and the time scale. Therefore, MC and BLS models were modified to explicitly account for these relationships and compared with the continuous in scale universal multifractal (CUM model, which is used as a physically based benchmark model. Monte Carlo simulations point out

  2. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments, Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables, The Law of Large Numbers; Conditional Expectation. Generating Functions, Branching Processes, Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples, Probability Distributions of Markov Chains; The First Step Analysis, Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case, Simulation; Distribution F...

  3. Integrating Ecosystem Carbon Dynamics into State-and-Transition Simulation Models of Land Use/Land Cover Change

    Science.gov (United States)

    Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.

    2016-12-01

    State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.

  4. Some remarks about the thermodynamics of discrete finite Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Siboni, S. [Trento Univ. (Italy). Facoltà di Ingegneria, Dip. di Ingegneria dei Materiali]

    1998-08-01

    The author proposes a simple way to define a Hamiltonian for aperiodic Markov chains and to apply these chains in a thermodynamical context. The basic thermodynamic functions are calculated accordingly. A quite intriguing and nontrivial application to stochastic automata is also pointed out.

  5. Invariant set computation for constrained uncertain discrete-time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Bitsoris, G.

    2010-01-01

    In this article a novel approach to the determination of polytopic invariant sets for constrained discrete-time linear uncertain systems is presented. First, the problem of stabilizing a prespecified initial condition set in the presence of input and state constraints is addressed. Second, the

  6. Fracture Mechanical Markov Chain Crack Growth Model

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    1991-01-01

    propagation process can be described by a discrete space Markov theory. The model is applicable to deterministic as well as to random loading. Once the model parameters for a given material have been determined, the results can be used for any structure as soon as the geometrical function is known....

  7. Anticontrol of chaos in continuous-time systems via time-delay feedback.

    Science.gov (United States)

    Wang, Xiao Fan; Chen, Guanrong; Yu, Xinghuo

    2000-12-01

    In this paper, a systematic design approach based on time-delay feedback is developed for anticontrol of chaos in a continuous-time system. This anticontrol method can drive a finite-dimensional, continuous-time, autonomous system from nonchaotic to chaotic, and can also enhance the existing chaos of an originally chaotic system. Asymptotic analysis is used to establish an approximate relationship between a time-delay differential equation and a discrete map. Anticontrol of chaos is then accomplished based on this relationship and the differential-geometry control theory. Several examples are given to verify the effectiveness of the methodology and to illustrate the systematic design procedure. (c) 2000 American Institute of Physics.

  8. Integrals of Motion for Discrete-Time Optimal Control Problems

    OpenAIRE

    Torres, Delfim F. M.

    2003-01-01

    We obtain a discrete time analog of E. Noether's theorem in Optimal Control, asserting that integrals of motion associated to the discrete time Pontryagin Maximum Principle can be computed from the quasi-invariance properties of the discrete time Lagrangian and discrete time control system. As corollaries, results for first-order and higher-order discrete problems of the calculus of variations are obtained.

  9. Saddlepoint expansions for sums of Markov dependent variables on a continuous state space

    DEFF Research Database (Denmark)

    Jensen, J.L.

    1991-01-01

    Based on the conjugate kernel studied in Iscoe et al. (1985) we derive saddlepoint expansions for either the density or distribution function of a sum f(X1)+...+f(Xn), where the Xi's constitute a Markov chain. The chain is assumed to satisfy a strong recurrence condition which makes the results...... here very similar to the classical results for i.i.d. variables. In particular we establish also conditions under which the expansions hold uniformly over the range of the saddlepoint. Expansions are also derived for sums of the form f(X1, X0)+f(X2, X1)+...+f(Xn, Xn-1) although the uniformity result...

  10. Exact goodness-of-fit tests for Markov chains.

    Science.gov (United States)

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.

  11. On the path independence conditions for discrete-continuous demand

    DEFF Research Database (Denmark)

    Batley, Richard; Ibáñez Rivas, Juan Nicolás

    2013-01-01

    We consider the manner in which the well-established path independence conditions apply to Small and Rosen's (1981) problem of discrete-continuous demand, focussing especially upon the restricted case of discrete choice (probabilistic) demand. We note that the consumer surplus measure promoted...... by Small and Rosen, which is specific to the probabilistic demand, imposes path independence to price changes a priori. We find that path independence to income changes can further be imposed provided a numeraire good is available in the consumption set. We show that, for practical purposes, Mc...

  12. Logistic and linear regression model documentation for statistical relations between continuous real-time and discrete water-quality constituents in the Kansas River, Kansas, July 2012 through June 2015

    Science.gov (United States)

    Foster, Guy M.; Graham, Jennifer L.

    2016-04-06

    The Kansas River is a primary source of drinking water for about 800,000 people in northeastern Kansas. Source-water supplies are treated by a combination of chemical and physical processes to remove contaminants before distribution. Advanced notification of changing water-quality conditions and cyanobacteria and associated toxin and taste-and-odor compounds provides drinking-water treatment facilities time to develop and implement adequate treatment strategies. The U.S. Geological Survey (USGS), in cooperation with the Kansas Water Office (funded in part through the Kansas State Water Plan Fund), and the City of Lawrence, the City of Topeka, the City of Olathe, and Johnson County Water One, began a study in July 2012 to develop statistical models at two Kansas River sites located upstream from drinking-water intakes. Continuous water-quality monitors have been operated and discrete-water quality samples have been collected on the Kansas River at Wamego (USGS site number 06887500) and De Soto (USGS site number 06892350) since July 2012. Continuous and discrete water-quality data collected during July 2012 through June 2015 were used to develop statistical models for constituents of interest at the Wamego and De Soto sites. Logistic models to continuously estimate the probability of occurrence above selected thresholds were developed for cyanobacteria, microcystin, and geosmin. Linear regression models to continuously estimate constituent concentrations were developed for major ions, dissolved solids, alkalinity, nutrients (nitrogen and phosphorus species), suspended sediment, indicator bacteria (Escherichia coli, fecal coliform, and enterococci), and actinomycetes bacteria. These models will be used to provide real-time estimates of the probability that cyanobacteria and associated compounds exceed thresholds and of the concentrations of other water-quality constituents in the Kansas River. The models documented in this report are useful for characterizing changes
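
    As an illustration of the logistic-model idea only (synthetic data and hypothetical predictor names, not the published Kansas River models), the sketch below estimates the probability that a constituent exceeds a threshold from continuously monitored surrogate variables.

```python
# Minimal sketch of the logistic-model idea (synthetic data, hypothetical predictors;
# not the published Kansas River models): estimate the probability that a constituent
# exceeds a threshold from continuously monitored surrogates such as turbidity and
# chlorophyll fluorescence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 400
turbidity = rng.lognormal(2.0, 0.6, n)            # continuous surrogate 1
chlorophyll = rng.lognormal(1.0, 0.5, n)          # continuous surrogate 2
X = np.column_stack([np.log10(turbidity), np.log10(chlorophyll)])

# Synthetic 'exceeds threshold' indicator loosely tied to the surrogates.
logit = -3.0 + 1.5 * X[:, 0] + 2.0 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
new_obs = np.log10([[15.0, 4.0]])                 # one new real-time reading
print("estimated exceedance probability:", model.predict_proba(new_obs)[0, 1])
```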

  13. A systematic method for constructing time discretizations of integrable lattice systems: local equations of motion

    International Nuclear Information System (INIS)

    Tsuchida, Takayuki

    2010-01-01

    We propose a new method for discretizing the time variable in integrable lattice systems while maintaining the locality of the equations of motion. The method is based on the zero-curvature (Lax pair) representation and the lowest-order 'conservation laws'. In contrast to the pioneering work of Ablowitz and Ladik, our method allows the auxiliary dependent variables appearing in the stage of time discretization to be expressed locally in terms of the original dependent variables. The time-discretized lattice systems have the same set of conserved quantities and the same structures of the solutions as the continuous-time lattice systems; only the time evolution of the parameters in the solutions that correspond to the angle variables is discretized. The effectiveness of our method is illustrated using examples such as the Toda lattice, the Volterra lattice, the modified Volterra lattice, the Ablowitz-Ladik lattice (an integrable semi-discrete nonlinear Schroedinger system) and the lattice Heisenberg ferromagnet model. For the modified Volterra lattice, we also present its ultradiscrete analogue.

  14. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Vol. 7, No. 3 (2013), p. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others: AVČR and CONACyT (CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  15. Security proof of continuous-variable quantum key distribution using three coherent states

    Science.gov (United States)

    Brádler, Kamil; Weedbrook, Christian

    2018-02-01

    We introduce a ternary quantum key distribution (QKD) protocol and asymptotic security proof based on three coherent states and homodyne detection. Previous work had considered the binary case of two coherent states, and here we nontrivially extend this to three. Our motivation is to leverage the practical benefits of both discrete and continuous (Gaussian) encoding schemes, creating a best-of-both-worlds approach; namely, the postprocessing of discrete encodings and the hardware benefits of continuous ones. We present a thorough and detailed security proof in the limit of infinite signal states, which allows us to lower-bound the secret key rate. We calculate this in the context of collective eavesdropping attacks and reverse reconciliation postprocessing. Finally, we compare the ternary coherent state protocol to other well-known QKD schemes (and fundamental repeaterless limits) in terms of secret key rates and loss.

  16. Guaranteed Cost Finite-Time Control of Discrete-Time Positive Impulsive Switched Systems

    Directory of Open Access Journals (Sweden)

    Leipo Liu

    2018-01-01

    Full Text Available This paper considers the guaranteed cost finite-time boundedness of discrete-time positive impulsive switched systems. Firstly, the definition of guaranteed cost finite-time boundedness is introduced. By using the multiple linear copositive Lyapunov function (MLCLF) and average dwell time (ADT) approach, a state feedback controller is designed and sufficient conditions are obtained to guarantee that the corresponding closed-loop system is guaranteed cost finite-time bounded (GCFTB). Such conditions can be solved by linear programming. Finally, a numerical example is provided to show the effectiveness of the proposed method.

  17. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of involved parameters. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on the corresponding time-inhomogeneous Gauss-Markov and diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the stochastic modeling process taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two non singular second-type Volterra integral equations via a numerical quadrature. We provide numerical estimations of the mean of FET as approximations of the dwell-time of the proteins dynamics. The percentage of backward steps is given in agreement to experimental data. Numerical and simulation results are compared and discussed.
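
    A minimal Euler-Maruyama version of the simulation step is sketched below: trajectories of an Ornstein-Uhlenbeck-type Gauss-Markov process are propagated until they leave the strip (lower, upper), giving Monte Carlo estimates of the mean first-exit time and the fraction of forward (upper-boundary) exits. All parameters are hypothetical and time-homogeneous, unlike the time-inhomogeneous, force-dependent model of the paper.

```python
# Minimal Euler-Maruyama sketch (hypothetical, time-homogeneous parameters; not the
# paper's fitted model): simulate two-boundary first-exit times of an
# Ornstein-Uhlenbeck-type Gauss-Markov process from the strip (lower, upper).
import numpy as np

rng = np.random.default_rng(2)
theta, mu, sigma = 1.0, 0.0, 0.8      # mean-reversion rate, level and noise intensity
lower, upper = -1.0, 1.0              # absorbing boundaries of the strip
dt, n_paths, n_max = 1e-3, 5000, 200000

x = np.zeros(n_paths)                 # all trajectories start in the middle of the strip
t_exit = np.full(n_paths, np.nan)
alive = np.ones(n_paths, dtype=bool)

for step in range(1, n_max + 1):
    n_alive = alive.sum()
    if n_alive == 0:
        break
    x[alive] += theta * (mu - x[alive]) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_alive)
    exited = alive & ((x <= lower) | (x >= upper))
    t_exit[exited] = step * dt
    alive &= ~exited

hit = ~np.isnan(t_exit)
print("mean first-exit time:", np.nanmean(t_exit))
print("fraction of forward (upper-boundary) exits:", np.mean(x[hit] >= upper))
```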

  18. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    Science.gov (United States)

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  19. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    Science.gov (United States)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions, but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.

  20. Probabilistic sensitivity analysis on Markov models with uncertain transition probabilities: an application in evaluating treatment decisions for type 2 diabetes.

    Science.gov (United States)

    Zhang, Yuanhui; Wu, Haipeng; Denton, Brian T; Wilson, James R; Lobo, Jennifer M

    2017-10-27

    Markov models are commonly used for decision-making studies in many application domains; however, there are no widely adopted methods for performing sensitivity analysis on such models with uncertain transition probability matrices (TPMs). This article describes two simulation-based approaches for conducting probabilistic sensitivity analysis on a given discrete-time, finite-horizon, finite-state Markov model using TPMs that are sampled over a specified uncertainty set according to a relevant probability distribution. The first approach assumes no prior knowledge of the probability distribution, and each row of a TPM is independently sampled from the uniform distribution on the row's uncertainty set. The second approach involves random sampling from the (truncated) multivariate normal distribution of the TPM's maximum likelihood estimators for its rows subject to the condition that each row has nonnegative elements and sums to one. The two sampling methods are easily implemented and have reasonable computation times. A case study illustrates the application of these methods to a medical decision-making problem involving the evaluation of treatment guidelines for glycemic control of patients with type 2 diabetes, where natural variation in a patient's glycated hemoglobin (HbA1c) is modeled as a Markov chain, and the associated TPMs are subject to uncertainty.
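
    The first sampling approach has a particularly simple form when the uncertainty set for each row is the whole probability simplex: sampling a row uniformly is then a flat Dirichlet draw. The sketch below applies this to a hypothetical 3-state, finite-horizon cohort model (healthy, sick, dead) and records the induced distribution of expected cycles spent healthy; it is an illustration of the idea, not the diabetes model of the paper.

```python
# Minimal sketch of the first sampling approach: each uncertain row of the transition
# probability matrix is drawn uniformly from its simplex (a flat Dirichlet), the
# finite-horizon Markov model is run, and the distribution of an outcome (expected
# cycles spent in the 'healthy' state) is recorded.  The 3-state model is hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_states, horizon, n_samples = 3, 40, 2000        # states: healthy, sick, dead (absorbing)
outcomes = np.empty(n_samples)

for s in range(n_samples):
    P = np.vstack([rng.dirichlet(np.ones(n_states)),   # healthy row, uniform on the simplex
                   rng.dirichlet(np.ones(n_states)),   # sick row, uniform on the simplex
                   [0.0, 0.0, 1.0]])                   # dead is absorbing
    p = np.array([1.0, 0.0, 0.0])                      # cohort starts healthy
    time_healthy = 0.0
    for _ in range(horizon):
        time_healthy += p[0]
        p = p @ P
    outcomes[s] = time_healthy

lo, hi = np.percentile(outcomes, [2.5, 97.5])
print(f"expected cycles healthy: mean {outcomes.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```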

  1. A descriptive model of resting-state networks using Markov chains.

    Science.gov (United States)

    Xie, H; Pal, R; Mitra, S

    2016-08-01

    Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interests while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of the resting-state networks (RSNs) based on the steady-state distributions and provides a novel angle to investigate the RSFC of multiple functional nodes. This paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that generated steady-state distributions of default mode network have higher consistency across subjects than random nodes from various RSNs.
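
    The consistency measure described above reduces to computing the stationary distribution of each inferred chain and taking the Hellinger distance H(p, q) = ||sqrt(p) - sqrt(q)||_2 / sqrt(2) between them. The sketch below does exactly that for two hypothetical 3-state transition matrices standing in for chains fitted to two subjects.

```python
# Minimal sketch of the consistency measure: steady-state distributions of two
# inferred Markov chains compared with the Hellinger distance.  The two transition
# matrices are hypothetical stand-ins for chains fitted to two subjects' data.
import numpy as np

def steady_state(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def hellinger(p, q):
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

P1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
P2 = np.array([[0.6, 0.3, 0.1], [0.3, 0.4, 0.3], [0.1, 0.3, 0.6]])

d = hellinger(steady_state(P1), steady_state(P2))
print("Hellinger distance between steady states:", round(d, 4))
```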

  2. Discrete-continuous bispectral operators and rational Darboux transformations

    International Nuclear Information System (INIS)

    Boyallian, Carina; Portillo, Sofia

    2010-01-01

    In this Letter we construct examples of discrete-continuous bispectral operators obtained by rational Darboux transformations applied to a regular pseudo-difference operator with constant coefficients. Moreover, we give an explicit procedure to write down the differential operators involved in the bispectral situation corresponding to the pseudo-difference operator obtained by the Darboux process.

  3. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    Science.gov (United States)

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  4. A generalized endogenous grid method for discrete-continuous choice

    OpenAIRE

    John Rust; Bertel Schjerning; Fedor Iskhakov

    2012-01-01

    This paper extends Carroll's endogenous grid method (2006 "The method of endogenous gridpoints for solving dynamic stochastic optimization problems", Economic Letters) for models with sequential discrete and continuous choice. Unlike existing generalizations, we propose a solution algorithm that inherits both advantages of the original method, namely, it avoids all root-finding operations, and also efficiently deals with restrictions on the continuous decision variable. To further speed up the s...

  5. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on Hidden Markov Model (HMM) and Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model, estimating the parameters and the best-state sequence, respectively, with the Baum-Welch and Viterbi algorithms, was applied. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Being aware that the last issue is complex, and it influences all the analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted. It tests the model with K+1 states (where K is the state number of the best model) if its likelihood is close to K-state model. Finally, an evaluation of the GAMM performances, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
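
    The segmentation step ultimately relies on Viterbi decoding of a left-to-right HMM. The sketch below shows only that Viterbi step (not the genetic-algorithm initialization or cross-validation of GAMM) for a 2-state Gaussian-emission HMM applied to a synthetic series with a single mean shift; all parameters are hypothetical.

```python
# Minimal sketch of the Viterbi step only (not the full GAMM procedure): best state
# sequence of a 2-state left-to-right HMM with Gaussian emissions, applied to a
# synthetic series with one mean shift.  All parameters below are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])   # one break at t = 120

A = np.array([[0.99, 0.01],        # left-to-right transition matrix
              [0.00, 1.00]])
start = np.array([1.0, 0.0])
means, sds = np.array([0.0, 1.5]), np.array([1.0, 1.0])

logB = norm.logpdf(y[:, None], means[None, :], sds[None, :])   # log emission probabilities
with np.errstate(divide="ignore"):
    logA, logstart = np.log(A), np.log(start)

T, K = len(y), 2
delta = np.full((T, K), -np.inf)   # best log-probability of any path ending in each state
psi = np.zeros((T, K), dtype=int)  # back-pointers
delta[0] = logstart + logB[0]
for t in range(1, T):
    cand = delta[t - 1][:, None] + logA          # cand[i, j]: best path ending in i, then i -> j
    psi[t] = np.argmax(cand, axis=0)
    delta[t] = cand[psi[t], np.arange(K)] + logB[t]

# Backtrack the most probable state sequence and report the detected change point.
states = np.empty(T, dtype=int)
states[-1] = np.argmax(delta[-1])
for t in range(T - 2, -1, -1):
    states[t] = psi[t + 1, states[t + 1]]
print("estimated change point:", int(np.argmax(states)), "(true: 120)")
```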

  6. Real-time frequency-to-time mapping based on spectrally-discrete chromatic dispersion.

    Science.gov (United States)

    Dai, Yitang; Li, Jilong; Zhang, Ziping; Yin, Feifei; Li, Wangzhe; Xu, Kun

    2017-07-10

    Traditional photonics-assisted real-time Fourier transform (RTFT) usually suffers from limited chromatic dispersion, huge volume, or large time delay and attendant loss. In this paper we propose frequency-to-time mapping (FTM) by spectrally-discrete dispersion to greatly increase frequency sensitivity. The novel medium has a periodic ON/OFF intensity frequency response and a quadratic phase distribution along disconnected channels, which de-chirps a matched optical input to repeated Fourier-transform-limited output. Real-time FTM is then obtained within each period. Since only discrete phase retardation rather than continuously-changed true time delay is required, huge equivalent dispersion is available from a compact device. Such FTM is theoretically analyzed, and an implementation by cascaded optical ring resonators is proposed. After a numerical example, our theory is demonstrated by a proof-of-concept experiment, where a single loop containing a 0.5-m-long fiber is used. FTM with 400-MHz unambiguous bandwidth and 25-MHz resolution is reported. Highly sensitive and linear mapping is achieved with 6.25 ps/MHz, equivalent to ~4.6 × 10^4 km of standard single-mode fiber. Extended instantaneous bandwidth is expected by ring cascading. Our proposal may provide a promising method for real-time, low-latency Fourier transforms.

  7. Interaction-aided continuous time quantum search

    International Nuclear Information System (INIS)

    Bae, Joonwoo; Kwon, Younghun; Baek, Inchan; Yoon, Dalsun

    2005-01-01

    The continuous quantum search algorithm (based on the Farhi-Gutmann Hamiltonian evolution) is known to be analogous to the Grover (or discrete-time quantum) algorithm. Any errors introduced in the Grover algorithm are fatal to its success. In the same way, the Farhi-Gutmann Hamiltonian algorithm has severe difficulty when the Hamiltonian is perturbed. In this letter we show that the interaction term in the quantum search Hamiltonian (which appears in the generalized quantum search Hamiltonian) can rescue the perturbed Farhi-Gutmann Hamiltonian that would otherwise fail. We note that this fact is quite remarkable, since it implies that the introduction of an interaction can be a way to correct some errors in continuous-time quantum search.

  8. Symmetric discrete coherent states for n-qubits

    International Nuclear Information System (INIS)

    Muñoz, C; Klimov, A B; Sánchez-Soto, L L

    2012-01-01

    We put forward a method of constructing discrete coherent states for n qubits. After establishing appropriate displacement operators, the coherent states appear as displaced versions of a fiducial vector that is fixed by imposing a number of natural symmetry requirements on its Q-function. Using these coherent states, we establish a partial order in the discrete phase space, which allows us to picture some n-qubit states as apparent distributions. We also analyze correlations in terms of sums of squared Q-functions. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper)

  9. [Succession caused by beaver (Castor fiber L.) life activity: II. A refined Markov model].

    Science.gov (United States)

    Logofet; Evstigneev, O I; Aleinikov, A A; Morozova, A O

    2015-01-01

    The refined Markov model of cyclic zoogenic successions caused by beaver (Castor fiber L.) life activity represents a discrete chain of the following six states: flooded forest, swamped forest, pond, grassy swamp, shrubby swamp, and wet forest, which correspond to certain stages of succession. Those stages are defined, and a conceptual scheme of probable transitions between them for one time step is constructed from the knowledge of beaver behaviour in small river floodplains of the "Bryanskii Les" Reserve. We calibrated the corresponding matrix of transition probabilities according to the optimization principle of minimizing differences between the model outcome and reality: the model generates a distribution of relative areas corresponding to the stages of succession, which is compared to the distributions obtained from case studies in the Reserve during 2002-2006. The time step is chosen to equal 2 years, and the first-step data in the sum of differences are given various weights, w (between 0 and 1). The value of w = 0.2 is selected due to its optimality and for some additional reasons. By the formulae of finite homogeneous Markov chain theory, we obtained the main results of the calibrated model, namely, a steady-state distribution of stage areas, indexes of cyclicity, and the mean durations (M(j)) of succession stages. The results of calibration lend an objective quantitative character to the expert knowledge of the course of succession and receive a proper interpretation. The 2010 data, which were not involved in the calibration procedure, enabled an assessment of the short-term predictive quality of the homogeneous model (starting from the 2006 situation): the error of the model area distribution relative to the distribution observed in 2010 falls in the range of 9-17%, the best prognosis being given by the least optimal matrices (rejected values of w). This indicates a formally heterogeneous nature of succession processes in time. Thus, the refined version of the homogeneous Markov chain
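
    The two quantities reported for the calibrated chain, the steady-state distribution of stage areas and the mean stage durations M(j), can be reproduced for any transition matrix with a few lines of linear algebra. The matrix below is hypothetical and only shares the six-state structure and the 2-year time step with the model described above.

```python
# Sketch of the reported quantities: steady-state distribution of stage areas
# and mean duration M(j) of each stage. The transition matrix is hypothetical,
# not the calibrated one from the paper.
import numpy as np

stages = ["flooded forest", "swamped forest", "pond",
          "grassy swamp", "shrubby swamp", "wet forest"]
# hypothetical row-stochastic matrix of 2-year transition probabilities
P = np.array([[0.6, 0.2, 0.1, 0.0, 0.0, 0.1],
              [0.1, 0.6, 0.2, 0.1, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.4, 0.1, 0.0],
              [0.0, 0.0, 0.0, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.0, 0.0, 0.7, 0.3],
              [0.3, 0.0, 0.0, 0.0, 0.0, 0.7]])

# steady-state distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# mean time spent in stage j per visit: geometric sojourn, 2 years per step
M = 2.0 / (1.0 - np.diag(P))

for name, p, m in zip(stages, pi, M):
    print(f"{name:15s}  steady-state share {p:.3f}  mean duration {m:4.1f} yr")
```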

  10. Markov Chain Ontology Analysis (MCOA).

    Science.gov (United States)

    Frost, H Robert; McCray, Alexa T

    2012-02-03

    Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance among comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
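
    The core linear-algebra step described above, computing the stationary (leading left eigenvector) distribution of a finite ergodic Markov chain and reading it as an importance score, can be sketched on a toy class/instance graph; the graph, damping value and power iteration below are illustrative assumptions, not the MCOA implementation.

```python
# Toy sketch of the core step: the stationary distribution of a finite ergodic
# Markov chain built over a small class/instance graph serves as an importance
# score. The graph and the damping value are made up purely for illustration.
import numpy as np

# adjacency of a tiny directed graph: classes 0-2, instances 3-4
A = np.array([[0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0]], dtype=float)

# row-normalize, then mix with a uniform jump term to guarantee ergodicity
P = A / A.sum(axis=1, keepdims=True)
alpha = 0.9
P = alpha * P + (1 - alpha) / len(P)

# stationary distribution via power iteration
pi = np.full(len(P), 1.0 / len(P))
for _ in range(200):
    pi = pi @ P
print("importance scores:", np.round(pi, 3))
```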

  11. Book Review: "Hidden Markov Models for Time Series: An ...

    African Journals Online (AJOL)

    Hidden Markov Models for Time Series: An Introduction using R, by Walter Zucchini and Iain L. MacDonald. Chapman & Hall (CRC Press), 2009. http://dx.doi.org/10.4314/saaj.v10i1.61717 · AJOL African Journals Online.

  12. Discrete-Time Biomedical Signal Encryption

    Directory of Open Access Journals (Sweden)

    Victor Grigoraş

    2017-12-01

    Full Text Available Chaotic modulation is a strong method of improving communication security. Analog and discrete chaotic systems are presented in the current literature. Due to the expansion of digital communication, discrete-time systems are becoming more efficient and closer to actual technology. The present contribution offers an in-depth analysis of the effects that chaos encryption produces on 1D and 2D biomedical signals. The performed simulations show that the modulating signals are precisely recovered by the synchronizing receiver if the discrete systems are digitally implemented and their coefficients correspond precisely. Channel noise is also applied and its effects on biomedical signal demodulation are highlighted.

  13. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
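
    A toy version of the general idea, folding a second-order HMM into a first-order HMM over state pairs and then applying the standard Viterbi algorithm, is sketched below; whether this coincides in detail with Hadar's transformation used in the paper is not claimed, and all model parameters are made up.

```python
# Toy sketch of the idea: a second-order HMM over states {0,1} is folded into
# a first-order HMM over state *pairs*, decoded with the ordinary Viterbi
# algorithm, and the pair path is projected back to the original states.
import numpy as np

n = 2                                             # original states
A2 = np.array([[[0.9, 0.1], [0.4, 0.6]],          # A2[i, j, k] = P(k | prev=j, prevprev=i)
               [[0.5, 0.5], [0.2, 0.8]]])
B = np.array([[0.8, 0.2],                         # B[j, o] = P(obs o | state j)
              [0.3, 0.7]])
obs = np.array([0, 0, 1, 1, 1, 0, 1, 1])

# --- fold into a first-order HMM over composite states (i, j) ---
A1 = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            A1[i * n + j, j * n + k] = A2[i, j, k]
B1 = np.vstack([B[j] for i in range(n) for j in range(n)])   # emission of (i, j) is that of j
pi1 = np.full(n * n, 1.0 / (n * n))               # flat start over pairs (assumption)

# --- standard first-order Viterbi in log space ---
T, S = len(obs), n * n
logA, logB, logpi = np.log(A1 + 1e-300), np.log(B1), np.log(pi1)
delta = np.zeros((T, S)); psi = np.zeros((T, S), dtype=int)
delta[0] = logpi + logB[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] + logA         # scores[prev, next]
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + logB[:, obs[t]]
pair_path = np.zeros(T, dtype=int)
pair_path[-1] = delta[-1].argmax()
for t in range(T - 1, 0, -1):
    pair_path[t - 1] = psi[t, pair_path[t]]

print("decoded original states:", [p % n for p in pair_path])  # second member of each pair
```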

  14. Semiclassical expanding discrete space-times

    International Nuclear Information System (INIS)

    Cobb, W.K.; Smalley, L.L.

    1981-01-01

    Given the close ties between general relativity and geometry one might reasonably expect that quantum effects associated with gravitation might also be tied to the geometry of space-time, namely, to some sort of discreteness in space-time itself. In particular it is supposed that space-time consists of a discrete lattice of points rather than the usual continuum. Since astronomical evidence seems to suggest that the universe is expanding, the lattice must also expand. Some of the implications of such a model are that the proton should presently be stable, and the universe should be closed although the mechanism for closure is quantum mechanical. (author)

  15. Control of the formation of projective synchronisation in lower-dimensional discrete-time systems

    International Nuclear Information System (INIS)

    Chee, C.Y.; Xu Daolin

    2003-01-01

    Projective synchronisation was recently observed in partially linear discrete-time systems. The scaling factor that characterises the behaviour of projective synchronisation is, however, unpredictable. In order to manipulate the ultimate state of the synchronisation, a control algorithm based on the Schur-Cohn stability criterion is proposed to direct the scaling factor onto any predestined value. In the numerical experiment, we illustrate the application on two chaotic discrete-time systems

  16. A Systematic Controller Design for a Grid-Connected Inverter with LCL Filter Using a Discrete-Time Integral State Feedback Control and State Observer

    Directory of Open Access Journals (Sweden)

    Seung-Jin Yoon

    2018-02-01

    Full Text Available Inductive-capacitive-inductive (LCL)-type filters are currently preferred as a replacement for L-type filters in distributed generation (DG) power systems, due to their superior harmonic attenuation capability. However, the third-order dynamics introduced by LCL filters pose a challenge to designing a satisfactory controller for such a system. Conventionally, an LCL-filtered grid-connected inverter can be effectively controlled by using a full-state feedback control. However, this control approach requires the measurement of all system state variables, which brings about more complexity for the inverter system. To address this issue, this paper presents a systematic procedure to design an observer-based integral state feedback control for an LCL-filtered grid-connected inverter in the discrete-time domain. The proposed control scheme consists of an integral state feedback controller and a full-state observer which uses the control input, grid-side currents, and grid voltages to predict all the system state variables. Therefore, only the grid-side current sensors and grid voltage sensors are required to implement the proposed control scheme. Due to the discrete-time integrator incorporated in the state feedback controller, the proposed control scheme ensures both the reference tracking and disturbance rejection performance of the inverter system in a practical and simple way. As a result, superior control performance can be achieved by using a reduced number of sensors, which significantly reduces the cost and complexity of the LCL-filtered grid-connected inverter system in DG applications. To verify the practical usefulness of the proposed control scheme, a 2 kW three-phase prototype grid-connected inverter has been constructed, and the proposed control system has been implemented based on a 32-bit floating-point digital signal processor (DSP) TMS320F28335. The effectiveness of the proposed scheme is demonstrated through the comprehensive simulation
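
    Purely as a structural sketch, the snippet below shows the two ingredients named in the abstract, a full-state observer driven by the measured output and an integral state feedback acting on the estimated states, applied to a generic second-order discrete-time plant; the plant matrices and pole locations are arbitrary stand-ins for the LCL inverter model and the paper's systematic design procedure.

```python
# Structural sketch only: a generic second-order discrete-time plant stands in
# for the LCL-filtered inverter model, and scipy pole placement replaces the
# paper's systematic design. It combines a full-state observer with an
# integral state feedback acting on the estimated states.
import numpy as np
from scipy.signal import place_poles

# hypothetical discrete-time plant x[k+1] = A x[k] + B u[k], y[k] = C x[k]
A = np.array([[1.0, 0.10],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.10]])
C = np.array([[1.0, 0.0]])

# augment with the integral of the tracking error: z[k+1] = z[k] + (r - y[k])
Aa = np.block([[A, np.zeros((2, 1))],
               [-C, np.ones((1, 1))]])
Ba = np.vstack([B, [[0.0]]])

K = place_poles(Aa, Ba, [0.5, 0.6, 0.7]).gain_matrix        # state + integral gain
L = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T          # observer gain

x = np.zeros((2, 1)); x_hat = np.zeros((2, 1)); z = np.zeros((1, 1))
r = 1.0                                                      # reference value
for k in range(60):
    y = C @ x
    u = -K @ np.vstack([x_hat, z])                           # feedback on estimates
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)          # full-state observer
    x = A @ x + B @ u                                        # true plant update
    z = z + (r - y)                                          # discrete-time integrator
print("output after 60 steps:", (C @ x).item())              # approaches r = 1.0
```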

  17. Use of Markov chains for forecasting labor requirements in black coal mines

    Energy Technology Data Exchange (ETDEWEB)

    Penar, L.; Przybyla, H.

    1987-01-01

    Increasing mining depth, deterioration of mining conditions and technology development are causes of changes in labor requirements. In mines with stable coal output these changes are in most cases of a qualitative character; in mines with increasing or decreasing coal output they are of a quantitative character. Methods for forecasting personnel needs, in particular professional requirements, are discussed. Quantitative and qualitative changes are accurately described by heterogeneous Markov chains. A structure consisting of interdependent variables is the subject of the forecast, and the changes that occur within this structure over successive time units are the subject of investigation. For a homogeneous Markov chain the probabilities of a transition from state i to state j are time independent; for a heterogeneous Markov chain they depend on time. The method was developed for the ODRA 1325 computers. 8 refs.
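
    For the homogeneous case described above, a personnel-structure forecast reduces to repeated multiplication of the current headcount vector by the matrix of transition probabilities; the job categories and numbers below are hypothetical.

```python
# Tiny numerical sketch of the homogeneous case: a constant matrix of
# transition probabilities between (hypothetical) job categories is applied
# repeatedly to forecast the personnel structure several years ahead.
import numpy as np

categories = ["face workers", "maintenance", "surface staff", "left the mine"]
P = np.array([[0.80, 0.10, 0.05, 0.05],   # P[i, j] = P(move from i to j in one year)
              [0.05, 0.85, 0.05, 0.05],
              [0.02, 0.08, 0.80, 0.10],
              [0.00, 0.00, 0.00, 1.00]])  # "left the mine" is absorbing

staff = np.array([1200.0, 400.0, 300.0, 0.0])   # current headcount by category
for year in range(1, 6):
    staff = staff @ P
    print(f"year {year}: " + ", ".join(f"{c} {s:5.0f}" for c, s in zip(categories, staff)))
```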

  18. Continuous versus discrete structures II -- Discrete Hamiltonian systems and Helmholtz conditions

    OpenAIRE

    Cresson, Jacky; Pierret, Frédéric

    2015-01-01

    We define discrete Hamiltonian systems in the framework of discrete embeddings. An explicit comparison with previous attempts is given. We then solve the discrete Helmholtz's inverse problem for the discrete calculus of variation in the Hamiltonian setting. Several applications are discussed.

  19. Constructing Markov State Models to elucidate the functional conformational changes of complex biomolecules

    KAUST Repository

    Wang, Wei; Cao, Siqin; Zhu, Lizhe; Huang, Xuhui

    2017-01-01

    bioengineering applications and rational drug design. Constructing Markov State Models (MSMs) based on large-scale molecular dynamics simulations has emerged as a powerful approach to model functional conformational changes of the biomolecular system

  20. From a Discrete to Continuous Description of Two-Dimensional Curved and Homogeneous Clusters: Some Kinetic Approach

    International Nuclear Information System (INIS)

    Gadomski, A.; Trame, Ch.

    1999-01-01

    Starting with a discrete picture of the self-avoiding polygon embeddable in the square lattice, and utilizing scaling arguments as well as a Steinhaus rule for evaluating the polygon's area, we are able, by imposing a discrete time-dynamics and making use of the concept of quasi-static approximation, to arrive at some evolution rules for the surface fractal. The process is highly curvature-driven, which is very characteristic of many phenomena of biological interest, like crystallization, wetting, formation of biomembranes and interfaces. In the discrete regime, the number of subunits constituting the cluster is a nonlinear function of the number of the perimeter sites active for the growth. A change of the number of subunits in time is essentially determined by a change in the curvature in the course of time, given explicitly by a difference operator. In the continuous limit, the process is assumed to proceed in time in a self-similar manner, and its description is generally offered in terms of a nonlinear dynamical system, even for the homogeneous clusters. For a sufficiently mature stage of the growing process, and when linearization of the dynamical system is realized, one may get some generalization of the Mullins-Sekerka instability concept, where the function perturbing the circle is assumed to be everywhere continuous but not necessarily differentiable, such as the Weierstrass function. Moreover, a time-dependent prefactor appears in the simplified dynamical system. (author)

  1. The Effect of Continuous and Discretized Presentations of Concurrent Augmented Visual Biofeedback on Postural Control in Quiet Stance.

    Directory of Open Access Journals (Sweden)

    Carmen D'Anna

    Full Text Available The purpose of this study was to evaluate the effect of a continuous and a discretized Visual Biofeedback (VBF) on balance performance in upright stance. The coordinates of the Centre of Pressure (CoP), extracted from a force plate, were processed in real-time to implement the two VBFs, administered to two groups of 12 healthy participants. In the first group, a representation of the CoP was continuously shown, while in the second group, the discretized VBF was provided at an irregular frequency (that depended on the subject's performance) by displaying one out of a set of five different emoticons, each corresponding to a specific area covered by the current position of the CoP. In the first case, participants were asked to maintain a white spot within a given square area, whereas in the second case they were asked to keep the smiling emoticon on. Trials with no VBF were administered as control. The effect of the two VBFs on balance was studied through classical postural parameters and a subset of stabilogram diffusion coefficients. To quantify the amount of time spent in stable conditions, the percentage of time during which the CoP was inside the stability area was calculated. Both VBFs improved balance maintenance as compared to the absence of any VBF. As compared to the continuous VBF, in the discretized VBF a significant decrease of sway path, diffusion and Hurst coefficients was found. These results seem to indicate that a discretized VBF favours a more natural postural behaviour by promoting a natural intermittent postural control strategy.

  2. Partition-based discrete-time quantum walks

    Science.gov (United States)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.

  3. Discrete Wigner function and quantum-state tomography

    Science.gov (United States)

    Leonhardt, Ulf

    1996-05-01

    The theory of discrete Wigner functions and of discrete quantum-state tomography [U. Leonhardt, Phys. Rev. Lett. 74, 4101 (1995)] is studied in more detail guided by the picture of precession tomography. Odd- and even-dimensional systems (angular momenta and spins, bosons, and fermions) are considered separately. Relations between simple number theory and the quantum mechanics of finite-dimensional systems are pointed out. In particular, the multicomplementarity of the precession states distinguishes prime dimensions from composite ones.

  4. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    Science.gov (United States)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time polynomial fuzzy systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, and hence global stability cannot be assured. The control design proposed in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  5. Markov process of muscle motors

    International Nuclear Information System (INIS)

    Kondratiev, Yu; Pechersky, E; Pirogov, S

    2008-01-01

    We study a Markov random process describing the behaviour of muscle molecular motors. Every motor is either bound to a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponentially distributed time whose mean depends on the state. The thin filament moves at a velocity proportional to the average of the displacements of all motors. We assume that the time a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors
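
    A direct stochastic simulation of the finite-motor version of this process is easy to sketch; parameter values are made up, and one ingredient (a small attachment pre-strain) is added beyond the abstract so that the average displacement, and hence the filament velocity, does not remain identically zero in the toy version.

```python
# Simulation sketch of the described process. Parameters are made up, and one
# assumption is added: motors attach with a small pre-strain d0 ("power
# stroke"), otherwise the average displacement would stay zero in this toy.
import numpy as np

rng = np.random.default_rng(1)
N, k_on, k_off = 500, 5.0, 2.0          # binding/unbinding rates [1/s]
gamma, d0 = 20.0, 1.0                   # velocity coefficient, attachment pre-strain
dt, steps = 1e-3, 20000

bound = np.zeros(N, dtype=bool)         # state of each motor
x = np.zeros(N)                         # displacement from neutral (0 when unbound)
velocity = 0.0
for _ in range(steps):
    velocity = -gamma * x.mean()        # filament velocity ~ average displacement
    x[bound] += velocity * dt           # bound motors are dragged along by the filament
    # exponential holding times -> switching probability rate*dt per time step
    unbind = bound & (rng.random(N) < k_off * dt)
    bind = (~bound) & (rng.random(N) < k_on * dt)
    bound[unbind] = False; x[unbind] = 0.0       # detached motors relax to neutral
    bound[bind] = True;    x[bind] = -d0         # newly bound motors attach pre-strained
print(f"fraction bound ~ {bound.mean():.2f} (k_on/(k_on+k_off) = {k_on/(k_on+k_off):.2f}),",
      f"steady filament velocity ~ {velocity:.2f}")
```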

  6. The endogenous grid method for discrete-continuous dynamic choice models with (or without) taste shocks

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jørgensen, Thomas H.; Rust, John

    2017-01-01

    We present a fast and accurate computational method for solving and estimating a class of dynamic programming models with discrete and continuous choice variables. The solution method we develop for structural estimation extends the endogenous grid-point method (EGM) to discrete-continuous (DC) p...

  7. Nonlinear wave propagation in discrete and continuous systems

    Science.gov (United States)

    Rothos, V. M.

    2016-09-01

    In this review we try to capture some of the recent excitement induced by a large volume of theoretical and computational studies addressing nonlinear Schrödinger models (discrete and continuous) and the localized structures that they support. We focus on some prototypical structures, namely the breather solutions and solitary waves. In particular, we investigate the bifurcation of travelling wave solution in Discrete NLS system applying dynamical systems methods. Next, we examine the combined effects of cubic and quintic terms of the long range type in the dynamics of a double well potential. The relevant bifurcations, the stability of the branches and their dynamical implications are examined both in the reduced (ODE) and in the full (PDE) setting. We also offer an outlook on interesting possibilities for future work on this theme.

  8. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

    In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme proposed by Monaco and Normand-Cyrot, based on the Lie expansion of the original differential equations, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics
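
    A minimal numerical illustration of the point, not an implementation of the Monaco and Normand-Cyrot scheme, is the forward-Euler discretization of the logistic equation dx/dt = r x (1 - x): for small time steps it follows the continuous flow, while for large steps it behaves like a logistic map with a displaced parameter and its solution changes qualitatively.

```python
# Forward-Euler discretization of dx/dt = r x (1 - x): small steps track the
# continuous flow, large steps produce a qualitatively different (oscillating)
# discrete solution.
import numpy as np

def euler_orbit(r, h, x0=0.1, n=60):
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + h * r * x[k] * (1.0 - x[k])   # forward Euler step
    return x

r = 1.0
t_end = 60 * 0.1
exact = 1.0 / (1.0 + (1 / 0.1 - 1) * np.exp(-r * t_end))   # logistic ODE solution at t_end
print("exact x(6.0)              :", round(float(exact), 4))
print("Euler, h = 0.1 (60 steps) :", round(float(euler_orbit(r, 0.1)[-1]), 4))
print("Euler, h = 2.5 (last vals):", np.round(euler_orbit(r, 2.5)[-4:], 3))  # oscillates
```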

  9. Robust stability and ℋ∞-estimation for uncertain discrete systems with state-delay

    Directory of Open Access Journals (Sweden)

    Mahmoud Magdi S.

    2001-01-01

    Full Text Available In this paper, we investigate the problems of robust stability and ℋ∞-estimation for a class of linear discrete-time systems with time-varying norm-bounded parameter uncertainty and unknown state delay. We provide complete results for robust stability with a prescribed performance measure and establish a version of the discrete Bounded Real Lemma. Then, we design a linear estimator such that the estimation error dynamics is robustly stable with a guaranteed ℋ∞ performance irrespective of the parametric uncertainties and unknown state delays. A numerical example is worked out to illustrate the developed theory.

  10. On an elastic dissipation model as continuous approximation for discrete media

    Directory of Open Access Journals (Sweden)

    I. V. Andrianov

    2006-01-01

    Full Text Available Construction of an accurate continuous model for discrete media is an important topic in various fields of science. We deal with a 1D differential-difference equation governing the behavior of an n-mass oscillator with linear relaxation. It is known that a string-type approximation is justified for the low part of the frequency spectrum of a continuous model, but for free and forced vibrations the solutions of the discrete and continuous models can be quite different. The difference operator makes analysis difficult due to its nonlocal form. Approximate equations can be obtained by replacing the difference operator with a local derivative operator. Although applying a model with derivatives of order higher than two improves the continuous model, a higher order of the approximating differential equation seriously complicates the solution of the continuous problem. It is known that the accuracy of the approximation can be dramatically increased using Padé approximations. In this paper, one- and two-point Padé approximations suitable for justifying the choice of structural damping models are used.

  11. Renewal characterization of Markov modulated Poisson processes

    Directory of Open Access Journals (Sweden)

    Marcel F. Neuts

    1989-01-01

    Full Text Available A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λi whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P = {i : λi > 0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
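
    A short simulation sketch of the object defined above is given below (the paper's renewal characterization itself is analytical); the two-state generator and the intensities are made up.

```python
# Simulation sketch of an MMPP: a two-state Markov chain J(t) modulates the
# intensity of the counting process M(t). Rates and intensities are made up.
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[-0.5, 0.5],          # generator of J(t)
              [1.0, -1.0]])
lam = np.array([2.0, 10.0])         # Poisson intensity lambda_i in state i

t, T, j = 0.0, 50.0, 0
events = []
while t < T:
    hold = rng.exponential(1.0 / -Q[j, j])           # sojourn time in state j
    # generate Poisson events at rate lam[j] during the sojourn in state j
    s = t + rng.exponential(1.0 / lam[j])
    while s < min(t + hold, T):
        events.append(s)
        s += rng.exponential(1.0 / lam[j])
    t += hold
    j = 1 - j                                        # two states: just flip
print(f"{len(events)} MMPP events in [0, {T}];",
      f"expected ~{T * (lam[0]*2/3 + lam[1]*1/3):.0f} from the stationary mix")
```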

  12. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jaco; Stoelinga, Mariëlle Ida Antoinette

    2016-01-01

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. As expected, the state space explosion threatens the analysability of these models. We therefore introduce confluence reduction for Markov automata, a powerful reduction

  13. Markov chain aggregation and its applications to combinatorial reaction networks.

    Science.gov (United States)

    Ganguly, Arnab; Petrov, Tatjana; Koeppl, Heinz

    2014-09-01

    We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also characterizes that the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section is devoted to illustrate how the developed theory aids in reducing CTMC models of biochemical systems particularly in connection to protein-protein interactions. We assume that the model is written by a biologist in form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing sufficient reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.
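
    A small numerical sketch related to the aggregation step is given below; note that the paper's condition is a variant of weak lumpability and is more general than the ordinary lumpability checked here, and the generator and partition are made up.

```python
# For a made-up 4-state generator Q and the partition {0,1} | {2,3}, check
# ordinary lumpability (rows within an aggregate have equal total rates into
# every other aggregate) and build the aggregated generator.
import numpy as np

Q = np.array([[-3.0,  1.0,  1.5,  0.5],
              [ 2.0, -4.0,  0.5,  1.5],
              [ 1.0,  0.0, -2.0,  1.0],
              [ 0.5,  0.5,  2.0, -3.0]])
partition = [[0, 1], [2, 3]]

def aggregate(Q, partition):
    m = len(partition)
    Q_agg = np.zeros((m, m))
    for a, block_a in enumerate(partition):
        for b, block_b in enumerate(partition):
            rates = Q[np.ix_(block_a, block_b)].sum(axis=1)   # per-state rate into block b
            if a != b and not np.allclose(rates, rates[0]):
                raise ValueError("partition is not ordinarily lumpable")
            Q_agg[a, b] = rates.mean()
    return Q_agg

print(aggregate(Q, partition))    # aggregated 2x2 generator
```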

  14. Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

    International Nuclear Information System (INIS)

    Wang Zidong; Liu Yurong; Yu Li; Liu Xiaohui

    2006-01-01

    In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters considered here are generated by a continuous-time, discrete-state homogeneous Markov process with a finite state space. The purpose of the problem addressed is to derive some easy-to-test conditions such that the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. By employing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions; therefore, the global exponential stability in the mean square for the delayed RNNs can be easily checked by utilizing the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions

  15. Identification of continuous-time systems from samples of input ...

    Indian Academy of Sciences (India)

    Abstract. This paper presents an introductory survey of the methods that have been developed for identification of continuous-time systems from samples of input-output data. The two basic approaches may be described as (i) the indirect method, where first a discrete-time model is estimated from the sampled data and then ...

  16. A Parallel Solver for Large-Scale Markov Chains

    Czech Academy of Sciences Publication Activity Database

    Benzi, M.; Tůma, Miroslav

    2002-01-01

    Roč. 41, - (2002), s. 135-153 ISSN 0168-9274 R&D Projects: GA AV ČR IAA2030801; GA ČR GA101/00/1035 Keywords : parallel preconditioning * iterative methods * discrete Markov chains * generalized inverses * singular matrices * graph partitioning * AINV * Bi-CGSTAB Subject RIV: BA - General Mathematics Impact factor: 0.504, year: 2002

  17. Multi-state reliability for pump group in system based on UGF and semi-Markov process

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2012-01-01

    In this paper, the multi-state reliability of a pump group in a nuclear power system is obtained by a combination of the universal generating function (UGF) and a semi-Markov process. A UGF arithmetic model of multi-state system reliability is studied, and the performance-state probability expression of a multi-state component is derived using semi-Markov theory. A quantitative model is defined to express the performance rate of the system and its components. Availability results obtained by the multi-state and binary-state analysis methods are compared under the condition of whether the performance rate satisfies the demanded value, and the mean value of the instantaneous system output performance is also obtained. It is shown that this combination method is effective and feasible, that it can quantify the effect of partial failures on system reliability, and that the multi-state result reveals the modesty (conservatism) of the reliability value obtained by the binary reliability analysis method. (authors)
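
    The UGF composition step described above can be sketched for a group of two identical pumps; the per-pump state probabilities, which in the paper come from the semi-Markov model, and the performance rates and demand used here are made up.

```python
# Toy UGF sketch: each pump's UGF is a list of (probability, performance)
# pairs, the group UGF is the composition over parallel pumps (flows add),
# and the availability is the probability that the group output meets demand.
from itertools import product

pump = [(0.80, 100.0),   # full flow [m^3/h] with probability 0.80
        (0.15, 50.0),    # degraded state
        (0.05, 0.0)]     # failed

def compose_parallel(u1, u2):
    """UGF of two components in parallel: probabilities multiply, flows add."""
    return [(p1 * p2, g1 + g2) for (p1, g1), (p2, g2) in product(u1, u2)]

group = compose_parallel(pump, pump)           # two identical pumps
demand = 120.0
availability = sum(p for p, g in group if g >= demand)
mean_output = sum(p * g for p, g in group)
print(f"P(group output >= {demand}) = {availability:.4f}, mean output = {mean_output:.1f}")
```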

  18. Limitations of discrete-time quantum walk on a one-dimensional infinite chain

    Science.gov (United States)

    Lin, Jia-Yi; Zhu, Xuanmin; Wu, Shengjun

    2018-04-01

    How well can we manipulate the state of a particle via a discrete-time quantum walk (DTQW)? We show that a discrete-time quantum walk on a one-dimensional infinite chain with coin operators that are independent of the position can only realize product operators of the form e^{iξ} A ⊗ 1_p, which cannot change the position state of the walker. We present a scheme to construct all possible realizations of all product operators of the form e^{iξ} A ⊗ 1_p. When the coin operators depend on the position, we show that the translation operators on the position cannot be realized via a DTQW with coin operators that are either the identity operator 1 or the Pauli operator σx.
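
    For readers who want to experiment with such walks, the following is a standard simulation of a coined discrete-time quantum walk on a line (a finite segment stands in for the infinite chain); the paper's result about which operators the walk can realize is analytical and is not reproduced by the simulation.

```python
# Standard coined discrete-time quantum walk on a line with a Hadamard coin.
import numpy as np

P = 201                                   # positions -100..100
steps = 80
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard coin operator

# state[c, x]: amplitude of coin state c at position x
state = np.zeros((2, P), dtype=complex)
state[:, P // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin state

for _ in range(steps):
    state = H @ state                     # coin operator acts on the coin space
    shifted = np.zeros_like(state)
    shifted[0, 1:] = state[0, :-1]        # coin 0 moves one site to the right
    shifted[1, :-1] = state[1, 1:]        # coin 1 moves one site to the left
    state = shifted

x = np.arange(P) - P // 2
prob = (np.abs(state) ** 2).sum(axis=0)
print("total probability:", round(float(prob.sum()), 6))   # stays 1 (unitary evolution)
print("spread after", steps, "steps:", float(np.sqrt(prob @ x**2)))  # ballistic ~0.54*steps
```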

  19. The discretized Schroedinger equation and simple models for semiconductor quantum wells

    International Nuclear Information System (INIS)

    Boykin, Timothy B; Klimeck, Gerhard

    2004-01-01

    The discretized Schroedinger equation is one of the most commonly employed methods for solving one-dimensional quantum mechanics problems on the computer, yet many of its characteristics remain poorly understood. The differences with the continuous Schroedinger equation are generally viewed as shortcomings of the discrete model and are typically described in purely mathematical terms. This is unfortunate since the discretized equation is more productively viewed from the perspective of solid-state physics, which naturally links the discrete model to realistic semiconductor quantum wells and nanoelectronic devices. While the relationship between the discrete model and a one-dimensional tight-binding model has been known for some time, the fact that the discrete Schroedinger equation admits analytic solutions for quantum wells has gone unnoted. Here we present a solution to this new analytically solvable problem. We show that the differences between the discrete and continuous models are due to their fundamentally different bandstructures, and present evidence for our belief that the discrete model is the more physically reasonable one
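
    As a generic illustration of the discretized Schroedinger equation (a standard finite-difference eigenvalue computation, not the analytic solution presented in the paper), the snippet below computes the lowest levels of a finite square quantum well in units where hbar^2/2m = 1; the well depth, width and grid are arbitrary.

```python
# Finite-difference (tight-binding-like) discretization of the 1D Schroedinger
# equation for a finite square well, in units where hbar^2 / (2 m) = 1.
import numpy as np

L, N = 20.0, 400                      # domain [-L/2, L/2], number of grid points
a = L / (N - 1)                       # grid spacing
x = np.linspace(-L / 2, L / 2, N)
V = np.where(np.abs(x) < 2.0, -5.0, 0.0)    # square well: depth 5, half-width 2

# (H psi)_n = -(psi_{n+1} - 2 psi_n + psi_{n-1}) / a^2 + V_n psi_n
main = 2.0 / a**2 + V
off = -1.0 / a**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print("lowest levels:", np.round(E[:3], 3))   # negative values are bound states
```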

  20. Adaptive Mobile Positioning in WCDMA Networks

    Directory of Open Access Journals (Sweden)

    Dong B.

    2005-01-01

    Full Text Available We propose a new technique for mobile tracking in wideband code-division multiple-access (WCDMA) systems employing multiple receive antennas. To achieve high estimation accuracy, the algorithm utilizes the time-difference-of-arrival (TDOA) measurements in the forward-link pilot channel, the angle-of-arrival (AOA) measurements in the reverse-link pilot channel, as well as the received signal strength. The mobility dynamic is modelled by a first-order autoregressive (AR) vector process with an additional discrete state variable as the motion offset, which evolves according to a discrete-time Markov chain. It is assumed that the parameters in this model are unknown and must be jointly estimated by the tracking algorithm. By viewing the resulting system as a nonlinear dynamic system with a jump-Markov structure, we develop an efficient auxiliary particle filtering algorithm to track both the discrete and continuous state variables of this system as well as the associated system parameters. Simulation results are provided to demonstrate the excellent performance of the proposed adaptive mobile positioning algorithm in WCDMA networks.
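
    A minimal sketch of the filtering idea, a bootstrap (rather than auxiliary) particle filter on a scalar jump-Markov AR(1) model with known parameters, is given below; the paper's algorithm additionally fuses TDOA, AOA and signal-strength measurements and estimates the unknown parameters jointly, none of which is attempted here.

```python
# Bootstrap particle filter on a scalar jump-Markov AR(1) model with known
# parameters, tracking both the continuous state and the discrete mode.
import numpy as np

rng = np.random.default_rng(2)
a, offsets = 0.95, np.array([-1.0, 1.0])    # AR coefficient and per-mode offsets
P = np.array([[0.95, 0.05],                 # discrete-mode transition matrix
              [0.05, 0.95]])
q, r = 0.3, 0.5                             # process / measurement noise std

# simulate a true trajectory and noisy observations
T = 200
mode = np.zeros(T, dtype=int); x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    mode[t] = rng.choice(2, p=P[mode[t - 1]])
    x[t] = a * x[t - 1] + offsets[mode[t]] + q * rng.normal()
    y[t] = x[t] + r * rng.normal()

# bootstrap particle filter over (mode, x)
Np = 1000
pm = rng.integers(0, 2, Np); px = np.zeros(Np)
est = np.zeros(T)
for t in range(1, T):
    pm = (rng.random(Np) < P[pm, 1]).astype(int)             # propagate discrete mode
    px = a * px + offsets[pm] + q * rng.normal(size=Np)      # propagate continuous state
    w = np.exp(-0.5 * ((y[t] - px) / r) ** 2) + 1e-300       # likelihood weights
    w /= w.sum()
    idx = rng.choice(Np, Np, p=w)                            # resample
    pm, px = pm[idx], px[idx]
    est[t] = px.mean()
print("RMS tracking error:", round(float(np.sqrt(np.mean((est[50:] - x[50:]) ** 2))), 3))
```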