WorldWideScience

Sample records for periodically driven markov

  1. On almost-periodic points of a topological Markov chain

    International Nuclear Information System (INIS)

    Bogatyi, Semeon A; Redkozubov, Vadim V

    2012-01-01

    We prove that a transitive topological Markov chain has almost-periodic points of all D-periods. Moreover, every D-period is realized by a continuum of distinct minimal sets. We give a simple constructive proof of the result which asserts that any transitive topological Markov chain has periodic points of almost all periods, and study the structure of the finite set of positive integers that are not periods.

  2. Respondent-driven sampling as Markov chain Monte Carlo.

    Science.gov (United States)

    Goel, Sharad; Salganik, Matthew J

    2009-07-30

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
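
The mechanism described above can be caricatured in a few lines: an idealized RDS recruitment chain is a random walk on the contact network, which samples nodes proportionally to degree, so prevalence is estimated with importance weights of 1/degree (a Volz-Heckathorn-style estimator). A minimal sketch with a made-up toy network, not the authors' code or data:

```python
import random

# Toy contact network as adjacency lists. Nodes 0-3 are "infected",
# nodes 4-9 are "healthy" (illustrative data, not from the paper).
graph = {
    0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4],
    4: [0, 3, 5, 6], 5: [4, 6], 6: [4, 5, 7],
    7: [6, 8, 9], 8: [7, 9], 9: [7, 8],
}
infected = {0, 1, 2, 3}

def rds_estimate(graph, infected, steps=20000, seed=1):
    """Random-walk (idealized RDS) prevalence estimate.

    The walk samples nodes proportionally to degree, so each visited
    node is reweighted by 1/degree -- importance sampling against the
    walk's stationary distribution.
    """
    rng = random.Random(seed)
    node = 0
    num = den = 0.0
    for _ in range(steps):
        node = rng.choice(graph[node])   # one referral step
        w = 1.0 / len(graph[node])       # importance weight
        num += w * (node in infected)
        den += w
    return num / den

print(round(rds_estimate(graph, infected), 2))  # true prevalence is 0.4
```

Bottlenecks in the network, as the paper argues, slow the walk's mixing and inflate the variance of exactly this kind of estimate.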

  3. Resonances in a periodically driven bosonic system

    NARCIS (Netherlands)

    Quelle, Anton; de Morais Smith, Cristiane

    2017-01-01

    Periodically driven systems are a common topic in modern physics. In optical lattices specifically, driving is at the origin of many interesting phenomena. However, energy is not conserved in driven systems, and under periodic driving, heating of a system is a real concern. In an effort to better

  4. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    Science.gov (United States)

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.
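
As a minimal illustration of the driving process considered here (the Ornstein-Uhlenbeck process itself, not the hierarchy derivation), a sample path can be generated with an Euler-Maruyama step; all parameter values are illustrative:

```python
import math
import random

def ou_path(n_steps=20000, dt=0.01, tau=1.0, sigma=0.5, seed=3):
    """Euler-Maruyama sample path of an Ornstein-Uhlenbeck process
    dX = -(X/tau) dt + sigma dW, the prototypical diffusive Markov
    driving noise (parameters are illustrative)."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n_steps):
        x += -(x / tau) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = ou_path()
# The stationary variance of this OU process is sigma**2 * tau / 2 = 0.125;
# estimate it from the second half of the path (after the transient).
var = sum(x * x for x in path[len(path) // 2:]) / (len(path) // 2)
print(round(var, 2))
```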

  5. Vlasov dynamics of periodically driven systems

    Science.gov (United States)

    Banerjee, Soumyadip; Shah, Kushal

    2018-04-01

    Analytical solutions of the Vlasov equation for periodically driven systems are of importance in several areas of plasma physics and dynamical systems and are usually approximated using ponderomotive theory. In this paper, we derive the plasma distribution function predicted by ponderomotive theory using Hamiltonian averaging theory and compare it with solutions obtained by the method of characteristics. Our results show that though ponderomotive theory is much easier to use, its predictions are very restrictive and are likely to be very different from the actual distribution function of the system. We also analyse all possible initial conditions which lead to periodic solutions of the Vlasov equation for periodically driven systems and conjecture that the irreducible polynomial corresponding to the initial condition must only have squares of the spatial and momentum coordinates. The resulting distribution function for other initial conditions is aperiodic and can lead to complex relaxation processes within the plasma.

  6. A Multi-Period Markov Model for Monthly Rainfall in Lagos, Nigeria

    African Journals Online (AJOL)

    A twelve-period Markov model has been developed for the monthly rainfall data for Lagos, along the coast of ... autoregressive process to model river flow; Deo et al. (2015) utilized an ... quences for the analysis of river basins by simulation.
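
A hedged sketch of the idea behind such a multi-period model: fit one transition matrix per calendar month to a binary wet/dry series (hypothetical data, not the Lagos record):

```python
from collections import defaultdict

def fit_periodic_markov(states, period=12):
    """Estimate one 2x2 transition matrix per calendar month from a
    binary (0 = dry, 1 = wet) monthly series. counts[m][i][j] counts
    transitions from state i in month m to state j in the next month."""
    counts = defaultdict(lambda: [[0, 0], [0, 0]])
    for t in range(len(states) - 1):
        m = t % period
        counts[m][states[t]][states[t + 1]] += 1
    matrices = {}
    for m, c in counts.items():
        # Normalize each row to probabilities (rows with no data stay zero).
        matrices[m] = [[n / max(sum(row), 1) for n in row] for row in c]
    return matrices

# Hypothetical 4-year monthly wet/dry record (1 = wet month).
series = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0] * 4
P = fit_periodic_markov(series)
print(P[1])  # transition probabilities out of the second month
```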

  7. Resonances in a periodically driven bosonic system

    Science.gov (United States)

    Quelle, Anton; Smith, Cristiane Morais

    2017-11-01

    Periodically driven systems are a common topic in modern physics. In optical lattices specifically, driving is at the origin of many interesting phenomena. However, energy is not conserved in driven systems, and under periodic driving, heating of a system is a real concern. In an effort to better understand this phenomenon, the heating of single-band systems has been studied, with a focus on disorder- and interaction-induced effects, such as many-body localization. Nevertheless, driven systems occur in a much wider context than this, leaving room for further research. Here, we fill this gap by studying a noninteracting model, characterized by discrete, periodically spaced energy levels that are unbounded from above. We couple these energy levels resonantly through a periodic drive, and discuss the heating dynamics of this system as a function of the driving protocol. In this way, we show that a combination of stimulated emission and absorption causes the presence of resonant stable states. This will serve to elucidate the conditions under which resonant driving causes heating in quantum systems.

  9. Event-Driven Control for Networked Control Systems With Quantization and Markov Packet Losses.

    Science.gov (United States)

    Yang, Hongjiu; Xu, Yang; Zhang, Jinhui

    2016-05-23

    In this paper, an event-driven control scheme is applied to a networked control system (NCS) subject to quantization and packet losses. A discrete event-detector is used to monitor specific events in the NCS. An arbitrary-region quantizer and Markov jump packet losses are also considered for the NCS. Based on a zoom strategy and Lyapunov theory, a complete proof is given to guarantee mean-square stability of the closed-loop system. Stabilization of the NCS is ensured by designing a feedback controller. Lastly, an inverted pendulum model is given to show the advantages and effectiveness of the proposed results.
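
A rough, illustrative simulation of this kind of setup (not the authors' controller or proof): a scalar plant under event-triggered feedback, with packets dropped according to a two-state Markov (Gilbert-Elliott-style) loss channel; all gains and probabilities are invented for the sketch:

```python
import random

def simulate_ncs(steps=200, a=1.1, b=1.0, k_gain=0.8, threshold=0.05,
                 p_gb=0.1, p_bg=0.7, seed=0):
    """Event-triggered control of x[k+1] = a*x[k] + b*u[k] over a lossy
    link. The link follows a two-state Markov loss model: in the 'bad'
    state every packet is dropped. The event detector transmits a new
    control packet only when |x - x_last_sent| exceeds `threshold`."""
    rng = random.Random(seed)
    x, x_sent, u = 1.0, 0.0, 0.0
    bad = False                 # channel starts in the 'good' state
    sent = delivered = 0
    for _ in range(steps):
        if abs(x - x_sent) > threshold:   # event condition
            sent += 1
            if not bad:                   # packet survives the link
                u = -k_gain * x / b       # simple stabilizing feedback
                x_sent = x
                delivered += 1
        x = a * x + b * u                 # plant update (u held between events)
        # Markov packet-loss channel transition
        bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
    return x, sent, delivered

x_final, sent, delivered = simulate_ncs()
print(abs(x_final), sent, delivered)
```

The point of the event condition is visible in the counters: far fewer packets are sent than there are time steps, yet the state stays near the origin despite the Markov losses.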

  10. Periodic and quasiperiodic revivals in periodically driven interacting quantum systems

    Science.gov (United States)

    Luitz, David J.; Lazarides, Achilleas; Bar Lev, Yevgeny

    2018-01-01

    Recently it has been shown that interparticle interactions generically destroy dynamical localization in periodically driven systems, resulting in diffusive transport and heating. In this Rapid Communication we rigorously construct a family of interacting driven systems which are dynamically localized and effectively decoupled from the external driving potential. We show that these systems exhibit tunable periodic or quasiperiodic revivals of the many-body wave function and thus of all physical observables. By numerically examining spinless fermions on a one-dimensional lattice we show that the analytically obtained revivals of such systems remain stable for finite systems with open boundary conditions while having a finite lifetime in the presence of static spatial disorder. We find this lifetime to be inversely proportional to the disorder strength.

  11. Periodically Driven Array of Single Rydberg Atoms

    Science.gov (United States)

    Basak, Sagarika; Chougale, Yashwant; Nath, Rejish

    2018-03-01

    An array of single Rydberg atoms driven by a temporally modulated atom-field detuning is studied. The periodic modulation effectively modifies the Rabi coupling, leading to unprecedented dynamics in the presence of Rydberg-Rydberg interactions, in particular, blockade enhancement, antiblockades, and state-dependent population trapping. Interestingly, the Schrieffer-Wolff transformation reveals a fundamental process in Rydberg gases, correlated Rabi coupling, which stems from the extended nature of the Rydberg-Rydberg interactions. Also, the correlated coupling provides an alternative depiction for the Rydberg blockade, exhibiting a nontrivial behavior in the presence of periodic modulation. The dynamical localization of a many-body configuration in a driven Rydberg lattice is discussed.

  12. Stochastic demand patterns for Markov service facilities with neutral and active periods

    International Nuclear Information System (INIS)

    Csenki, Attila

    2009-01-01

    In an earlier paper, a closed form expression was obtained for the joint interval reliability of a Markov system with a partitioned state space S = U ∪ D, i.e. for the probability that the system will reside in the set of up states U throughout the union of some specific disjoint time intervals I_l = [θ_l, θ_l + ζ_l], l = 1, ..., k. The deterministic time intervals I_l formed a demand pattern specifying the desired active periods. In the present paper, we admit stochastic demand patterns by assuming that the lengths of the active periods, ζ_l, as well as the lengths of the neutral periods, θ_l − (θ_{l−1} + ζ_{l−1}), are random. We explore two mechanisms for modelling random demand: (1) by alternating renewal processes; (2) by sojourn times of some continuous time Markov chain with a partitioned state space. The first construction results in an expression in terms of a revised version of the moment generating functions of the sojourns of the alternating renewal process. The second construction involves the probability that a Markov chain follows certain patterns of visits to some groups of states and yields an expression using Kronecker matrix operations. The model of a small computer system is analysed to exemplify the ideas

  13. Thermodynamics of a periodically driven qubit

    Science.gov (United States)

    Donvil, Brecht

    2018-04-01

    We present a new approach to the open system dynamics of a periodically driven qubit in contact with a thermal bath. We are specifically interested in the thermodynamics of the qubit. It is well known that by combining the Markovian approximation with Floquet theory it is possible to derive a stochastic Schrödinger equation for the state of the qubit. We follow here a different approach. We use Floquet theory to embed the time-nonautonomous qubit dynamics into time-autonomous yet infinite-dimensional dynamics. We refer to the resulting infinite-dimensional system as the dressed qubit. Using the Markovian approximation we derive the stochastic Schrödinger equation for the dressed qubit. The advantage of our approach is that the jump operators are ladder operators of the Hamiltonian. This simplifies the formulation of the thermodynamics. We use the thermodynamics of the infinite-dimensional system to recover the thermodynamical description for the driven qubit. We compare our results with the existing literature and recover the known results.

  14. Solar Dynamo Driven by Periodic Flow Oscillation

    Science.gov (United States)

    Mayr, Hans G.; Hartle, Richard E.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    We have proposed that the periodicity of the solar magnetic cycle is determined by wave mean flow interactions analogous to those driving the Quasi-Biennial Oscillation in the Earth's atmosphere. Upward propagating gravity waves would produce oscillating flows near the top of the radiation zone that in turn would drive a kinematic dynamo to generate the 22-year solar magnetic cycle. The dynamo we propose is built on a given time-independent magnetic field B, which allows us to estimate the time-dependent, oscillating components of the magnetic field, ΔB. The toroidal magnetic field ΔB_φ is directly driven by zonal flow and is relatively large in the source region, ΔB_φ/B_Θ ≫ 1. Consistent with observations, this field peaks at low latitudes and has opposite polarities in both hemispheres. The oscillating poloidal magnetic field component, ΔB_Θ, is driven by the meridional circulation, which is difficult to assess without a numerical model that properly accounts for the solar atmosphere dynamics. Scale analysis suggests that ΔB_Θ is small compared to B_Θ in the dynamo region. Relative to B_Θ, however, the oscillating magnetic field perturbations are expected to be transported more rapidly upwards in the convection zone to the solar surface. As a result, ΔB_Θ (and ΔB_φ) should grow relative to B_Θ, so that the magnetic fields reverse at the surface as observed. Since the meridional and zonal flow oscillations are out of phase, the poloidal magnetic field peaks during times when the toroidal field reverses direction, which is observed. With the proposed wave-driven flow oscillation, the magnitude of the oscillating poloidal magnetic field increases with the mean rotation rate of the fluid. This is consistent with the Bode-Blackett empirical scaling law, which reveals that in massive astrophysical bodies the magnetic moment tends

  15. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    CERN Document Server

    Abler, Daniel; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy ...

  16. Replicate periodic windows in the parameter space of driven oscillators

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, E.S., E-mail: esm@if.usp.br [Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo (Brazil); Souza, S.L.T. de [Universidade Federal de Sao Joao del-Rei, Campus Alto Paraopeba, Minas Gerais (Brazil); Medrano-T, R.O. [Departamento de Ciencias Exatas e da Terra, Universidade Federal de Sao Paulo, Diadema, Sao Paulo (Brazil); Caldas, I.L. [Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo (Brazil)

    2011-11-15

    Highlights: > We apply a weak harmonic perturbation to control chaos in two driven oscillators. > We find replicate periodic windows in the driven oscillator parameter space. > We find that the periodic window replication is associated with the chaos control. - Abstract: In the bi-dimensional parameter space of driven oscillators, shrimp-shaped periodic windows are immersed in chaotic regions. For two of these oscillators, namely, Duffing and Josephson junction, we show that a weak harmonic perturbation replicates these periodic windows, giving rise to parameter regions corresponding to periodic orbits. The new windows are composed of parameters whose periodic orbits have the same periodicity and pattern of stable and unstable periodic orbits already existent for the unperturbed oscillator. Moreover, these unstable periodic orbits are embedded in chaotic attractors in phase space regions where the new stable orbits are identified. Thus, the observed periodic window replication is an effective oscillator control process, since chaotic orbits are replaced by regular ones.

  17. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    Science.gov (United States)

    Abler, Daniel; Kanellopoulos, Vassiliki; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. PMID:23824126

  18. Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy

    International Nuclear Information System (INIS)

    Abler, Daniel; Kanellopoulos, Vassiliki; Dosanjh, Manjit; Davies, Jim; Peach, Ken; Jena, Raj; Kirkby, Norman

    2013-01-01

    Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of 'general Markov models', providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. (author)

  19. Markov processes

    CERN Document Server

    Kirkwood, James R

    2015-01-01

    Review of Probability; Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises; Discrete-Time, Finite-State Markov Chains; Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler's Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises; Discrete-Time, Infinite-State Markov Chains; Renewal Processes; Delayed Renewal Processes; Equilibrium State f...
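
As a small worked example of the book's "central problem" of finding the equilibrium state, power iteration on a regular two-state chain (toy numbers, not an example from the book):

```python
def equilibrium(P, iters=500):
    """Find the equilibrium (stationary) distribution pi of a regular
    Markov chain by power iteration: pi satisfies pi = pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        # One step of pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A regular two-state chain; its unique equilibrium solves pi = pi P.
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = equilibrium(P)
print([round(p, 3) for p in pi])  # -> [0.8, 0.2]
```

The Perron-Frobenius theorem listed in the contents is what guarantees this iteration converges for a regular chain: the transition matrix has a unique dominant eigenvalue 1.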

  20. On periodically driven AdS/CFT

    CERN Document Server

    Auzzi, Roberto; Gudnason, Sven Bjarke; Rabinovici, Eliezer

    2013-01-01

    We use the AdS/CFT correspondence to study a thermally isolated conformal field theory in four dimensions which undergoes a repeated deformation by an external periodic time-dependent source coupled to an operator of dimension Δ. The initial state of the theory is taken to be at a finite temperature. We compute the energy dissipated in the system as a function of the frequency and of the dimension Δ of the perturbing operator. This is done in the linear response regime. In order to study the details of thermalization in the dual field theory, the leading-order backreaction on the AdS black brane metric is computed. The evolution of the event and the apparent horizons is monitored; the increase of area in each cycle coincides with the increase in the equilibrium entropy corresponding to the amount of energy dissipated. The time evolution of the entanglement entropy of a spherical region and that of the two-point function of a probe operator with a large dimension are also inspected; we find a delay in...

  1. Markov-modulated infinite-server queues driven by a common background process

    OpenAIRE

    Mandjes , Michel; De Turck , Koen

    2016-01-01

    This paper studies a system with multiple infinite-server queues which are modulated by a common background process. If this background process, being modeled as a finite-state continuous-time Markov chain, is in state j, then the arrival rate into the i-th queue is λ_{i,j}, whereas the service times of customers present in this queue are exponentially distributed with mean µ_{i,j}^{−1}; at each of the individual queues all customers present are served in parallel (thus refl...

  2. Data-Driven Markov Decision Process Approximations for Personalized Hypertension Treatment Planning

    Directory of Open Access Journals (Sweden)

    Greggory J. Schell PhD

    2016-10-01

    Background: Markov decision process (MDP) models are powerful tools. They enable the derivation of optimal treatment policies but may incur long computational times and generate decision rules that are challenging to interpret by physicians. Methods: In an effort to improve usability and interpretability, we examined whether Poisson regression can approximate optimal hypertension treatment policies derived by an MDP for maximizing a patient's expected discounted quality-adjusted life years. Results: We found that our Poisson approximation to the optimal treatment policy matched the optimal policy in 99% of cases. This high accuracy translates to nearly identical health outcomes for patients. Furthermore, the Poisson approximation results in 104 additional quality-adjusted life years per 1000 patients compared to the Seventh Joint National Committee's treatment guidelines for hypertension. The comparative health performance of the Poisson approximation was robust to the cardiovascular disease risk calculator used and calculator calibration error. Limitations: Our results are based on Markov chain modeling. Conclusions: Poisson model approximation for blood pressure treatment planning has high fidelity to optimal MDP treatment policies, which can improve usability and enhance transparency of more personalized treatment policies.
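
The MDP side of the comparison can be sketched generically. The following toy value iteration (an invented two-state blood-pressure model with made-up numbers, not the paper's clinical model or its Poisson approximation) shows how an optimal treatment policy is derived:

```python
def value_iteration(P, R, actions, gamma=0.97, tol=1e-9):
    """Solve a small finite MDP by value iteration and return the
    optimal values and policy. P[a][s][t] is the probability of moving
    from state s to state t under action a; R[a][s] is the immediate
    reward (e.g. QALYs accrued this period). Toy numbers only."""
    n = len(R[actions[0]])
    V = [0.0] * n
    while True:
        Q = {a: [R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                 for s in range(n)] for a in actions}
        V_new = [max(Q[a][s] for a in actions) for s in range(n)]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            break
        V = V_new
    policy = [max(actions, key=lambda a: Q[a][s]) for s in range(n)]
    return V_new, policy

# States: 0 = controlled BP, 1 = uncontrolled BP (hypothetical).
actions = ["no_drug", "drug"]
P = {"no_drug": [[0.90, 0.10], [0.20, 0.80]],
     "drug":    [[0.95, 0.05], [0.60, 0.40]]}
R = {"no_drug": [1.00, 0.70],       # QALY weights per period
     "drug":    [0.95, 0.68]}       # small disutility of treatment
V, policy = value_iteration(P, R, actions)
print(policy)  # -> ['no_drug', 'drug']
```

With these numbers the optimal policy treats only the uncontrolled state, because the drug's transition benefit outweighs its disutility there but not in the controlled state; the paper's question is whether a regression model can reproduce such state-dependent rules.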

  3. Generalized Boolean logic Driven Markov Processes: A powerful modeling framework for Model-Based Safety Analysis of dynamic repairable and reconfigurable systems

    International Nuclear Information System (INIS)

    Piriou, Pierre-Yves; Faure, Jean-Marc; Lesage, Jean-Jacques

    2017-01-01

    This paper presents a modeling framework that makes it possible to describe in an integrated manner the structure of the critical system to analyze, by using an enriched fault tree; the dysfunctional behavior of its components, by means of Markov processes; and the reconfiguration strategies that have been planned to ensure safety and availability, with Moore machines. This framework has been developed from BDMP (Boolean logic Driven Markov Processes), a previous framework for dynamic repairable systems. First, the contribution is motivated by pinpointing the limitations of BDMP in modeling complex reconfiguration strategies and the failures of the control of these strategies. The syntax and semantics of GBDMP (Generalized Boolean logic Driven Markov Processes) are then formally defined; in particular, an algorithm to analyze the dynamic behavior of a GBDMP model is developed. The modeling capabilities of this framework are illustrated on three representative examples. Lastly, qualitative and quantitative analyses of GBDMP models highlight the benefits of the approach.

  4. Quantum tunneling in the periodically driven SU(2) model

    International Nuclear Information System (INIS)

    Arvieu, R.

    1991-01-01

    The tunneling rate is investigated in the quantum and classical limits using an exactly soluble, periodically driven SU(2) model. The tunneling rate is obtained by solving the time-dependent Schroedinger equation and projecting the exact wave-function on the space of coherent states using the Husimi distribution. The oscillatory, coherent tunneling of the wave-function between two Hartree-Fock minima is observed. The driving plays an important role increasing the tunneling rate by orders of magnitude as compared to the semiclassical results. This is due to the dominant role of excited states in the driven quantum tunneling. (author) 15 refs., 4 figs

  5. Quantum revivals in periodically driven systems close to nonlinear resonances

    International Nuclear Information System (INIS)

    Saif, Farhan; Fortunato, Mauro

    2002-01-01

    We calculate the quantum revival time for a wave packet initially well localized in a one-dimensional potential in the presence of an external periodic modulating field. The dependence of the revival time on various parameters of the driven system is shown analytically. As an example of an application of our approach, we compare the analytically obtained values of the revival time for various modulation strengths with the numerically computed ones in the case of a driven gravitational cavity. We show that they are in very good agreement

  6. Theory of many-body localization in periodically driven systems

    International Nuclear Information System (INIS)

    Abanin, Dmitry A.; De Roeck, Wojciech; Huveneers, François

    2016-01-01

    We present a theory of periodically driven, many-body localized (MBL) systems. We argue that MBL persists under periodic driving at high enough driving frequency: The Floquet operator (evolution operator over one driving period) can be represented as an exponential of an effective time-independent Hamiltonian, which is a sum of quasi-local terms and is itself fully MBL. We derive this result by constructing a sequence of canonical transformations to remove the time-dependence from the original Hamiltonian. When the driving evolves smoothly in time, the theory can be sharpened by estimating the probability of adiabatic Landau–Zener transitions at many-body level crossings. In all cases, we argue that there is delocalization at sufficiently low frequency. We propose a phase diagram of driven MBL systems.

  7. Coherent states of the driven Rydberg atom: Quantum-classical correspondence of periodically driven systems

    International Nuclear Information System (INIS)

    Vela-Arevalo, Luz V.; Fox, Ronald F.

    2005-01-01

    A methodology to calculate generalized coherent states for a periodically driven system is presented. We study wave packets constructed as a linear combination of suitable Floquet states of the three-dimensional Rydberg atom in a microwave field. The driven coherent states show classical space localization, spreading, and revivals and remain localized along the classical trajectory. The microwave strength and frequency have a great effect on the localization of the Floquet states, since quasienergy avoided crossings produce delocalization of the Floquet states, showing that tuning of the parameters is very important. Using wavelet-based time-frequency analysis, the classical phase-space structure is determined, which allows us to show that the driven coherent state is located in a large regular region in which the z coordinate is in resonance with the external field. The expectation values of the wave packet show that the driven coherent state evolves along the classical trajectory

  8. Discrete changes of current statistics in periodically driven stochastic systems

    International Nuclear Information System (INIS)

    Chernyak, Vladimir Y; Sinitsyn, N A

    2010-01-01

    We demonstrate that the counting statistics of currents in periodically driven ergodic stochastic systems can show sharp changes of some of its properties in response to continuous changes of the driving protocol. To describe this effect, we introduce a new topological phase factor in the evolution of the moment generating function which is akin to the topological geometric phase in the evolution of a periodically driven quantum mechanical system with time-reversal symmetry. This phase leads to the prediction of a sign change for the difference of the probabilities to find even and odd numbers of particles transferred in a stochastic system in response to cyclic evolution of control parameters. The driving protocols that lead to this sign change should enclose specific degeneracy points in the space of control parameters. The relation between the topology of the paths in the control parameter space and the sign changes can be described in terms of the first Stiefel–Whitney class of topological invariants. (letter)

  9. Upscaling of dilution and mixing using a trajectory based Spatial Markov random walk model in a periodic flow domain

    Science.gov (United States)

    Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo

    2017-05-01

The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories rather than travel times. By applying the proposed SMM to a simple benchmark problem, we demonstrate that it can predict mean effective transport when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution that do not just depend on mean concentrations but are strongly impacted by pore-scale concentration fluctuations. We use information from particle trajectories to downscale and reconstruct approximate pore-scale concentration fields, from which mixing and dilution measures are then calculated. Predictions with the SMM agree very favorably with measurements from the fully resolved simulations.
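The trajectory-based Spatial Markov idea can be sketched in a few lines; the two velocity classes, the transition matrix, and the cell geometry below are illustrative assumptions, not the calibrated model of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: particles cross one periodic cell per step with a
# velocity drawn from two classes ("fast", "slow"); successive classes
# are correlated through a Markov transition matrix.
velocities = np.array([2.0, 0.5])      # cell lengths per unit time
T = np.array([[0.8, 0.2],              # fast -> fast / slow
              [0.3, 0.7]])             # slow -> fast / slow

n_particles, n_cells = 2000, 50
state = rng.integers(0, 2, size=n_particles)
position = np.zeros(n_particles)
time = np.zeros(n_particles)

for _ in range(n_cells):
    time += 1.0 / velocities[state]    # travel time through one cell
    position += 1.0                    # advance by one cell length
    # draw the next velocity class conditioned on the current one
    u = rng.random(n_particles)
    state = np.where(u < T[state, 0], 0, 1)

mean_arrival = time.mean()             # effective mean transport time
```

A calibrated SMM would instead extract the velocity classes and the transition matrix from particle trajectories in one fully resolved periodic cell.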

  10. Hydrodynamic bifurcation in electro-osmotically driven periodic flows

    Science.gov (United States)

    Morozov, Alexander; Marenduzzo, Davide; Larson, Ronald G.

    2018-06-01

    In this paper, we report an inertial instability that occurs in electro-osmotically driven channel flows. We assume that the charge motion under the influence of an externally applied electric field is confined to a small vicinity of the channel walls that, effectively, drives a bulk flow through a prescribed slip velocity at the boundaries. Here, we study spatially periodic wall velocity modulations in a two-dimensional straight channel numerically. At low slip velocities, the bulk flow consists of a set of vortices along each wall that are left-right symmetric, while at sufficiently high slip velocities, this flow loses its stability through a supercritical bifurcation. Surprisingly, the flow state that bifurcates from a left-right symmetric base flow has a rather strong mean component along the channel, which is similar to pressure-driven velocity profiles. The instability sets in at rather small Reynolds numbers of about 20-30, and we discuss its potential applications in microfluidic devices.

  11. Redistribution of phase fluctuations in a periodically driven cuprate superconductor

    Energy Technology Data Exchange (ETDEWEB)

    Hoeppner, Robert; Zhu, Beilei; Rexin, Tobias [Zentrum fuer Optische Quantentechnologien und Institut fuer Laserphysik, Hamburg (Germany); Mathey, Ludwig [Zentrum fuer Optische Quantentechnologien und Institut fuer Laserphysik, Hamburg (Germany); The Hamburg Centre for Ultrafast Imaging, Hamburg (Germany); Cavalleri, Andrea [Max Planck Institute for the Structure and Dynamics of Matter, Hamburg (Germany); Department of Physics, Oxford University, Clarendon Laboratory, Parks Road, Oxford (United Kingdom)

    2015-07-01

We study the thermally fluctuating state of a bi-layer cuprate superconductor under the periodic action of a staggered field oscillating at optical frequencies. This analysis distills essential elements of the recently discovered phenomenon of light-enhanced coherence in YBCO, which was achieved by periodically driving infrared-active apical oxygen distortions. The effect of a staggered periodic perturbation is studied using a Langevin description of driven, coupled Josephson junctions, which represent two neighboring pairs of layers and their two plasmons. We demonstrate that the external driving leads to a suppression of phase fluctuations of the low-energy plasmon, an effect which is amplified via the resonance of the high-energy plasmon, with a striking suppression of the low-energy fluctuations, as visible in the power spectrum. We also find that this effect acts on the in-plane fluctuations, which are reduced on long length scales, and we discuss the behavior of vortices in the ab-planes and across the weakly coupled junctions.

  12. Biased and flow driven Brownian motion in periodic channels

    Science.gov (United States)

    Martens, S.; Straube, A.; Schmid, G.; Schimansky-Geier, L.; Hänggi, P.

    2012-02-01

In this talk we will present an extension of the common Fick-Jacobs approximation to Brownian transport in two-dimensional channels with smoothly varying periodic cross-section, driven hydrodynamically as well as by external forces. We employ an asymptotic analysis, in a geometric parameter, of the components of the flow field and of the stationary probability density for finding the particles within the channel. We demonstrate that the problem of biased Brownian dynamics in a confined 2D geometry can be replaced by Brownian motion in an effective periodic one-dimensional potential ψ(x), which takes into account the external bias, the change of the local channel width, and the longitudinal component of the flow velocity. In addition, we study the influence of the external force magnitude and of the pressure drop of the fluid on particle transport quantities such as the average velocity and the effective diffusion coefficient. The critical ratio between the external force and the pressure drop at which the average velocity vanishes is identified, and its dependence on the channel geometry is derived. The analytic findings are confirmed by numerical simulations of the particle dynamics in a reflection-symmetric sinusoidal channel.
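A commonly quoted form of the effective one-dimensional free energy in the Fick-Jacobs picture (a sketch of the standard textbook result; the authors' ψ(x) additionally incorporates the flow contribution) is, for a 2D channel of local half-width h(x) and a constant bias force F along the axis,

```latex
\psi(x) \;=\; -F x \;-\; k_B T \,\ln\!\bigl[ 2\, h(x) \bigr],
```

so that stationary transport quantities follow from one-dimensional diffusion in ψ(x), e.g. via the standard Stratonovich formula for the mean velocity in a tilted periodic potential.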

  13. Quantum noise spectra for periodically driven cavity optomechanics

    Science.gov (United States)

    Aranas, E. B.; Akram, M. Javed; Malz, Daniel; Monteiro, T. S.

    2017-12-01

A growing number of experimental setups in cavity optomechanics exploit periodically driven fields. However, such setups are not amenable to analysis by the simple, yet powerful, closed-form expressions of linearized optomechanics, which have provided so much of our present understanding of experimental optomechanics. In the present paper, we formulate a method to calculate quantum noise spectra in modulated optomechanical systems, which we analyze and compare with two other recently proposed solutions: we term these (i) frequency-shifted operators, (ii) Floquet [Phys. Rev. A 94, 023803 (2016), 10.1103/PhysRevA.94.023803], and (iii) iterative analysis [New J. Phys. 18, 113021 (2016), 10.1088/1367-2630/18/11/113021]. We prove that (i) and (ii) yield equivalent noise spectra and find that (iii) is an analytical approximation to (i) for weak modulations. We calculate the noise spectra of a doubly modulated system describing experiments with levitated particles in hybrid electro-optical traps. We show excellent agreement with Langevin stochastic simulations in the thermal regime and predict squeezing in the quantum regime. Finally, we reveal how otherwise-inaccessible spectral components of a modulated system can be measured in heterodyne detection through an appropriate choice of modulation frequencies.

  14. Periodic thermodynamics of laser-driven molecular motor

    International Nuclear Information System (INIS)

    Li Dan; Zheng Wenwei; Wang Zhisong

    2008-01-01

Operation of a laser-driven nano-motor inevitably generates a non-trivial amount of heat, which can possibly lead to instability or even hinder the motor's continual running. This work quantitatively examines the overheating problem for a recently proposed laser-operated molecular locomotive. We present a single-molecule cooling theory in which molecular details of the locomotive system are explicitly treated. This theory is able to quantitatively predict the cooling efficiency of various candidate molecular systems for the locomotive, and also suggests concrete strategies for improving the locomotive's cooling. It is found that a water environment is able to cool the hot locomotive down to room temperature within 100 picoseconds after photon absorption. This cooling time is a few orders of magnitude shorter than the typical time for laser operation, effectively preventing any overheating of the nano-locomotive. However, when cooling is less effective, as in a non-aqueous environment, residual heat may build up. Continuous running of the motor then leads to periodic thermodynamics, which is a common characteristic of many laser-operated nano-devices.

  16. Mammalian cycles: internally defined periods and interaction-driven amplitudes

    Directory of Open Access Journals (Sweden)

    LR Ginzburg

    2015-08-01

The cause of mammalian cycles—the rise and fall of populations over a predictable period of time—has remained controversial since these patterns were first observed over a century ago. In spite of extensive work on observable mammalian cycles, the field has remained divided on the true cause, with a majority of opinions attributing it either to predation or to intra-species mechanisms. Here we unite the eigenperiod hypothesis, which describes an internal, maternal-effect-based mechanism explaining the cycles' periods, with a recent generalization explaining the amplitude of snowshoe hare cycles in northwestern North America based on initial predator abundance. By explaining the period and the amplitude of the cycle with separate mechanisms, a unified and consistent view of the causation of cycles is reached. Based on our suggested theory, we forecast the next snowshoe hare cycle (predicted peak in 2016) to be of extraordinarily low amplitude.

  17. Mode locking and spatiotemporal chaos in periodically driven Gunn diodes

    DEFF Research Database (Denmark)

    Mosekilde, Erik; Feldberg, Rasmus; Knudsen, Carsten

    1990-01-01

    oscillation entrains with the external signal. This produces a devil’s staircase of frequency-locked solutions. At higher microwave amplitudes, period doubling and other forms of mode-converting bifurcations can be seen. In this interval the diode also exhibits spatiotemporal chaos. At still higher microwave...

  18. Climate-driven seasonal geocenter motion during the GRACE period

    Science.gov (United States)

    Zhang, Hongyue; Sun, Yu

    2018-03-01

Annual cycles in the geocenter motion time series are primarily driven by mass changes in the Earth's hydrologic system, which includes land hydrology, atmosphere, and oceans. Seasonal variations of the geocenter motion have been reliably determined by Sun et al. (J Geophys Res Solid Earth 121(11):8352-8370, 2016) by combining Gravity Recovery And Climate Experiment (GRACE) data with an ocean model output. In this study, we reconstructed the observed seasonal geocenter motion from geophysical model predictions of mass variations in the polar ice sheets, continental glaciers, terrestrial water storage (TWS), and atmosphere and dynamic ocean (AO). The reconstructed geocenter motion time series is shown to be in close agreement with the solution based on GRACE data supplemented with an ocean bottom pressure model. Over 85% of the variance of the observed geocenter motion time series can be explained by the reconstructed solution, which allows a further investigation of the driving mechanisms. We then demonstrate that the AO component accounts for 54, 62, and 25% of the observed geocenter motion variance in the X, Y, and Z directions, respectively. The TWS component alone explains 42, 32, and 39% of the observed variance. The net mass changes over the oceans, together with self-attraction and loading effects, also contribute significantly (about 30%) to the seasonal geocenter motion in the X and Z directions. Other contributing sources, on the other hand, have a marginal (less than 10%) impact on the seasonal variations but introduce a linear trend in the time series.

  19. Explosive dome eruptions modulated by periodic gas-driven inflation

    Science.gov (United States)

    Johnson, Jeffrey B.; Lyons, John; Andrews, B. J.; Lees, J.M.

    2014-01-01

Volcan Santiaguito (Guatemala) “breathes” with extraordinary regularity as the edifice's conduit system accumulates free gas, which periodically vents to the atmosphere. Periodic pressurization controls explosion timing, which nearly always occurs at peak inflation, as detected with tiltmeters. Tilt cycles in January 2012 reveal regular 26 ± 6 min inflation/deflation cycles corresponding to at least ~10¹ kg/s of gas fluxing the system. Very long period (VLP) earthquakes presage explosions and occur during cycles when inflation rates are most rapid. VLPs locate ~300 m below the vent and indicate mobilization of volatiles, which ascend at ~50 m/s. Rapid gas ascent feeds pyroclast-laden eruptions lasting several minutes and rising to ~1 km. VLPs are not observed during less rapid inflation episodes; instead, gas vents passively through the conduit producing no infrasound and no explosion. These observations intimate that steady gas exsolution and accumulation in shallow reservoirs may drive inflation cycles at open-vent silicic volcanoes.

  20. Markov Chains and Markov Processes

    OpenAIRE

    Ogunbayo, Segun

    2016-01-01

A Markov chain, named after Andrey Markov, is a mathematical system that moves from one state to another. Many real-world systems contain uncertainty. This study helps us to understand the basic idea of a Markov chain and how it has been useful in our daily lives. There has long been uncertainty in making distinct predictions about the future, and different games involve different expectations or results. That is the reason why we need Markov chains to predict o...
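As an illustration of the basic idea (a hypothetical three-state chain, not an example from the thesis), a transition matrix evolves a probability distribution over states and, for an ergodic chain, converges to a stationary distribution:

```python
import numpy as np

# Hypothetical 3-state transition matrix: rows sum to 1, and
# P[i, j] is the probability of moving from state i to state j.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

def evolve(dist, P, steps):
    """Propagate a row-vector distribution through the chain."""
    for _ in range(steps):
        dist = dist @ P
    return dist

start = np.array([1.0, 0.0, 0.0])   # start surely in state 0
stationary = evolve(start, P, 200)  # long-run distribution

# For an ergodic chain, the result is a fixed point: pi P = pi.
```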

  1. Electron transmission through a periodically driven graphene magnetic barrier

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, R., E-mail: rbiswas.pkc@gmail.com [Department of Physics, P. K. College, Contai, Purba Medinipur, West Bengal – 721401 (India); Maiti, S. [Ajodhya Hills G.S.A.T High School, Ajodhya, Purulia, West Bengal – 723152 (India); Mukhopadhyay, S. [Purulia Zilla School, Dulmi Nadiha, Purulia, West Bengal – 723102 (India); Sinha, C. [Department of Physics, P. K. College, Contai, Purba Medinipur, West Bengal – 721401 (India); Department of Theoretical Physics, Indian Association for the Cultivation of Science, Jadavpur – 700032 (India)

    2017-05-10

Electronic transport through graphene magnetic barriers is studied theoretically in the presence of an external time-harmonic scalar potential, in the framework of the non-perturbative Landau-Floquet formalism. For a rectangular magnetic barrier structure, the oscillating field mostly suppresses the transmission and exhibits Fano resonances for multiphoton processes, due to the presence of a bound state inside the barrier. For a pair of delta-function barriers of larger separation, the oscillating potential suppresses the usual Fabry-Perot oscillations in the transmission, while for smaller separation a new type of asymmetric Fano resonance is noted, occurring due to extended states between the barriers. - Highlights: • Tunnelling of Dirac fermions through oscillating pure magnetic barriers is reported for the first time. • The high-energy transmission through a graphene magnetic barrier is suppressed by the application of time-periodic modulation. • Suppression of the Fabry-Perot transmission is noted due to the application of an external time-harmonic potential. • Two kinds of Fano resonances are noted in transmission through a pair of modulated δ-function magnetic barriers.

  2. Nonequilibrium steady states and resonant tunneling in time-periodically driven systems with interactions

    Science.gov (United States)

    Qin, Tao; Hofstetter, Walter

    2018-03-01

    Time-periodically driven systems are a versatile toolbox for realizing interesting effective Hamiltonians. Heating, caused by excitations to high-energy states, is a challenge for experiments. While most setups so far address the relatively weakly interacting regime, it is of general interest to study heating in strongly correlated systems. Using Floquet dynamical mean-field theory, we study nonequilibrium steady states (NESS) in the Falicov-Kimball model, with time-periodically driven kinetic energy or interaction. We systematically investigate the nonequilibrium properties of the NESS. For a driven kinetic energy, we show that resonant tunneling, where the interaction is an integer multiple of the driving frequency, plays an important role in the heating. In the strongly correlated regime, we show that this can be well understood using Fermi's golden rule and the Schrieffer-Wolff transformation for a time-periodically driven system. We furthermore demonstrate that resonant tunneling can be used to control the population of Floquet states to achieve "photodoping." For driven interactions introduced by an oscillating magnetic field near a widely adopted Feshbach resonance, we find that the double occupancy is strongly modulated. Our calculations apply to shaken ultracold-atom systems and to solid-state systems in a spatially uniform but time-dependent electric field. They are also closely related to lattice modulation spectroscopy. Our calculations are helpful to understand the latest experiments on strongly correlated Floquet systems.

  3. On Lévy-driven vacation models with correlated busy periods and service interruptions

    NARCIS (Netherlands)

    Kella, O.; Boxma, O.; Mandjes, M.

    2010-01-01

    This paper considers queues with server vacations, but departs from the traditional setting in two ways: (i) the queueing model is driven by Lévy processes rather than just compound Poisson processes; (ii) the vacation lengths depend on the length of the server’s preceding busy period. Regarding the

  4. Self-assembly of colloidal bands driven by a periodic external field

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, André S.; Araújo, Nuno A. M., E-mail: nmaraujo@fc.ul.pt; Telo da Gama, Margarida M. [Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, P-1749-016 Lisboa, Portugal and Centro de Física Teórica e Computacional, Universidade de Lisboa, P-1749-016 Lisboa (Portugal)

    2016-01-21

    We study the formation of bands of colloidal particles driven by periodic external fields. Using Brownian dynamics, we determine the dependence of the band width on the strength of the particle interactions and on the intensity and periodicity of the field. We also investigate the switching (field-on) dynamics and the relaxation times as a function of the system parameters. The observed scaling relations were analyzed using a simple dynamic density-functional theory of fluids.

  5. Flux through a Markov chain

    International Nuclear Information System (INIS)

    Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel

    2016-01-01

Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of the mass distribution is given for a constant source. • The expression of the mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for modeling open systems whose dynamics has a Markov property.
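The mass-flux picture can be sketched numerically; the three-state chain, constant source, and absorbing state below are illustrative assumptions rather than the paper's example:

```python
import numpy as np

# Illustrative 3-state chain: states 0 and 1 are transient,
# state 2 is absorbing (P[2, 2] = 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])
source = np.array([1.0, 0.0, 0.0])  # constant external supply into state 0

mass = np.zeros(3)
injected = 0.0
for _ in range(500):
    mass = mass @ P + source  # move mass along the chain, then inject
    injected += source.sum()

# Mass is conserved: everything injected is either still in the
# transient states or has accumulated in the absorbing state.
# The transient states settle to a steady level, while the
# absorbing state keeps accumulating linearly in time.
```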

  6. Mode-coupling of interaction quenched ultracold bosons in periodically driven lattices

    Science.gov (United States)

    Mistakidis, Simeon; Schmelcher, Peter

    2016-05-01

The out-of-equilibrium dynamics of interaction-quenched finite ultracold bosonic ensembles in periodically driven one-dimensional optical lattices is investigated. As a first step, a brief analysis of the dynamics caused exclusively by the periodically driven lattice is presented and the induced low-lying modes are introduced. It is shown that the periodic driving forces the bosons in the outer wells to exhibit out-of-phase dipole-like modes, while in the central well the cloud experiences a local breathing mode. The dynamical behavior of the system is investigated with respect to the driving frequency, revealing a resonant-like behavior of the intra-well dynamics. Subsequently, we drive the system to a highly non-equilibrium state by performing an interaction quench on the periodically driven lattice. This protocol gives rise to admixtures of excitations in the outer wells, an enhanced breathing in the center, and an amplification of the tunneling dynamics. As a result of the quench, the system experiences multiple resonances between the inter- and intra-well dynamics at different quench amplitudes. Finally, our study reveals that the position of the resonances can be adjusted, e.g. via the driving frequency or the atom number, manifesting their many-body nature. Funded by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the SFB 925 "Light induced dynamics and control of correlated quantum systems".

  7. Floquet–Magnus theory and generic transient dynamics in periodically driven many-body quantum systems

    International Nuclear Information System (INIS)

    Kuwahara, Tomotaka; Mori, Takashi; Saito, Keiji

    2016-01-01

This work explores a fundamental dynamical structure for a wide range of many-body quantum systems under periodic driving. Generically, in the thermodynamic limit, such systems are known to heat up to infinite-temperature states in the long-time limit irrespective of dynamical details, which kills all the specific properties of the system. In the present study, instead of considering the infinitely long time scale, we aim to provide a general framework to understand the long but finite time behavior, namely the transient dynamics. In our analysis, we focus on the Floquet–Magnus (FM) expansion that gives a formal expression of the effective Hamiltonian of the system. Although in general the full series expansion is not convergent in the thermodynamic limit, we give a clear relationship between the FM expansion and the transient dynamics. More precisely, we rigorously show that a truncated version of the FM expansion accurately describes the exact dynamics for a certain time scale. Our theory reveals an experimental time scale for which non-trivial dynamical phenomena can be reliably observed. We discuss several dynamical phenomena, such as the effect of small integrability breaking, efficient numerical simulation of periodically driven systems, dynamical localization, and thermalization. Especially on thermalization, we discuss a generic scenario for the prethermalization phenomenon in periodically driven systems. -- Highlights: •A general framework to describe transient dynamics for periodically driven systems. •The theory is applicable to generic quantum many-body systems including long-range interacting systems. •Physical meaning of the truncation of the Floquet–Magnus expansion is rigorously established. •New mechanism of the prethermalization is proposed. •Revealing an experimental time-scale for which non-trivial dynamical phenomena can be reliably observed.
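For reference, the leading terms of the Floquet-Magnus expansion of the effective Hamiltonian for a Hamiltonian H(t) with period T are standard (units with ħ = 1; this is the textbook form, not a result specific to this paper):

```latex
H_F = \sum_{n \ge 0} \Omega_n, \qquad
\Omega_0 = \frac{1}{T} \int_0^T H(t)\, dt, \qquad
\Omega_1 = \frac{1}{2 i T} \int_0^T \! dt_1 \int_0^{t_1} \! dt_2 \, \bigl[ H(t_1), H(t_2) \bigr].
```

The truncation discussed in the abstract keeps the first few Ω_n; higher-order terms carry increasing powers of the period T, i.e. of 1/ω with ω = 2π/T.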

  9. Current density waves in open mesoscopic rings driven by time-periodic magnetic fluxes

    International Nuclear Information System (INIS)

    Yan Conghua; Wei Lianfu

    2010-01-01

Quantum coherent transport through open mesoscopic Aharonov-Bohm rings (driven by static fluxes) has been studied extensively. Here, by using quantum waveguide theory and the Floquet theorem, we investigate the quantum transport of electrons along an open mesoscopic ring threaded by a time-periodic magnetic flux. We predict that current density waves can be excited along such an open ring. As a consequence, a net current can be generated along a lead with only one reservoir, if the lead additionally connects to such a normal-metal loop driven by the time-dependent flux. These phenomena can be explained by photon-assisted processes, due to the interaction between the transported electrons and the applied oscillating external fields. We also discuss how the time-averaged currents (along the ring and the lead) depend on the amplitude and frequency of the applied oscillating fluxes.

  11. Current reversal in a continuously periodic system driven by an additive noise and a multiplicative noise

    International Nuclear Information System (INIS)

    Wang Canjun; Chen Shibo; Mei Dongcheng

    2006-01-01

We study the noise-induced transport and current reversal of Brownian particles in a continuously periodic potential driven by the cross-correlation between a multiplicative white noise and an additive white noise. We find that directed motion of the Brownian particles can be induced by the correlation between the additive noise and the multiplicative noise. The current reversal and the direction of the current are controlled by the values of the intensity (λ) of the correlated noises and a dimensionless parameter R (R=α/D, where D is the intensity of the multiplicative noise and α is the intensity of the additive noise).
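A minimal Euler-Maruyama sketch of this class of dynamics (the potential, the multiplicative coupling, and all parameter values are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdamped Langevin dynamics in a periodic potential V(x) = -cos(x),
# with a multiplicative noise of intensity D coupled through g(x) = sin(x)
# and an additive noise of intensity alpha, correlated with strength lam.
D, alpha, lam = 0.5, 0.5, 0.9
dt, n_steps, n_paths = 1e-3, 10000, 200

x = np.zeros(n_paths)
for _ in range(n_steps):
    # two correlated Gaussian increments (correlation coefficient lam)
    w1 = rng.normal(size=n_paths)
    w2 = lam * w1 + np.sqrt(1.0 - lam**2) * rng.normal(size=n_paths)
    drift = -np.sin(x)  # deterministic force -V'(x)
    x = x + (drift * dt
             + np.sin(x) * np.sqrt(2.0 * D * dt) * w1
             + np.sqrt(2.0 * alpha * dt) * w2)

# Ensemble-averaged drift velocity; its sign and magnitude depend on
# lam and on the ratio R = alpha / D, as discussed in the abstract.
mean_velocity = x.mean() / (n_steps * dt)
```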

  12. A Markov chain model for N-linked protein glycosylation – towards a low-parameter tool for model-driven glycoengineering

    DEFF Research Database (Denmark)

    Spahn, Philipp N.; Hansen, Anders Holmgaard; Hansen, Henning Gram

    2016-01-01

    Glycosylation is a critical quality attribute of most recombinant biotherapeutics. Consequently, drug development requires careful control of glycoforms to meet bioactivity and biosafety requirements. However, glycoengineering can be extraordinarily difficult given the complex reaction networks...... present a novel low-parameter approach to describe glycosylation using flux-balance and Markov chain modeling. The model recapitulates the biological complexity of glycosylation, but does not require user-provided kinetic information. We use this method to predict and experimentally validate glycoprofiles...

  13. Evaluation of Markov-Decision Model for Instructional Sequence Optimization. Semi-Annual Technical Report for the period 1 July-31 December 1975. Technical Report No. 76.

    Science.gov (United States)

    Wollmer, Richard D.; Bond, Nicholas A.

Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchical learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…

  14. Markov processes and controlled Markov chains

    CERN Document Server

    Filar, Jerzy; Chen, Anyue

    2002-01-01

    The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...

  15. Floquet prethermalization and regimes of heating in a periodically driven, interacting quantum system

    Science.gov (United States)

    Weidinger, Simon; Knap, Michael

    We study the regimes of heating in the periodically driven O (N) -model, which represents a generic model for interacting quantum many-body systems. By computing the absorbed energy with a non-equilibrium Keldysh Green's function approach, we establish three dynamical regimes: at short times a single-particle dominated regime, at intermediate times a stable Floquet prethermal regime in which the system ceases to absorb, and at parametrically late times a thermalizing regime. Our simulations suggest that in the thermalizing regime the absorbed energy grows algebraically in time with an exponent that approaches the universal value of 1 / 2 , and is thus significantly slower than linear Joule heating. Our results demonstrate the parametric stability of prethermal states in a generic many-body system driven at frequencies that are comparable to its microscopic scales. This paves the way for realizing exotic quantum phases, such as time crystals or interacting topological phases, in the prethermal regime of interacting Floquet systems. We acknowledge support from the Technical University of Munich - Institute for Advanced Study, funded by the German Excellence Initiative and the European Union FP7 under Grant agreement 291763, and from the DFG Grant No. KN 1254/1-1.

  16. Nonequilibrium Dynamic Phases of Driven Vortex Lattices in Superconductors with Periodic Pinning Arrays

    Science.gov (United States)

    Reichhardt, C.; Olson, C. J.; Nori, F.

    1998-03-01

    We present results from extensive simulations of driven vortex lattices interacting with periodic pinning arrays. Changing an applied driving force produces an exceptionally rich variety of distinct dynamic phases which include over a dozen well defined plastic flow phases. Transitions between different dynamical phases are marked by sharp jumps in the V(I) curves that coincide with distinct changes in the vortex trajectories and vortex lattice order. A series of dynamical phase diagrams are presented which outline the onset of the different dynamical phases (C. Reichhardt, C.J. Olson, and F. Nori, Phys. Rev. Lett. 78, 2648 (1997); and to be published). Videos are available at http://www-personal.engin.umich.edu/~nori/. Using force balance arguments, several of the phase boundaries can be derived analytically.

  17. Semiclassical analysis of long-wavelength multiphoton processes: The periodically driven harmonic oscillator

    International Nuclear Information System (INIS)

    Fox, Ronald F.; Vela-Arevalo, Luz V.

    2002-01-01

    The problem of multiphoton processes for intense, long-wavelength irradiation of atomic and molecular electrons is presented. The recently developed method of quasiadiabatic time evolution is used to obtain a nonperturbative analysis. When applied to the standard vector potential coupling, an exact auxiliary equation is obtained that is in the electric dipole coupling form. This is achieved through application of the Goeppert-Mayer gauge. While the analysis to this point is general and aimed at microwave irradiation of Rydberg atoms, a Floquet analysis of the auxiliary equation is presented for the special case of the periodically driven harmonic oscillator. Closed form expressions for a complete set of Floquet states are obtained. These are used to demonstrate that for the oscillator case there are no multiphoton resonances.

  18. Thermodynamics of Micro- and Nano-Systems Driven by Periodic Temperature Variations

    Directory of Open Access Journals (Sweden)

    Kay Brandner

    2015-08-01

    We introduce a general framework for analyzing the thermodynamics of small systems that are driven by both a periodic temperature variation and some external parameter modulating their energy. This setup covers, in particular, periodic micro- and nano-heat engines. In a first step, we show how to express total entropy production by properly identified time-independent affinities and currents without making a linear response assumption. In linear response, kinetic coefficients akin to Onsager coefficients can be identified. Specializing to a Fokker-Planck-type dynamics, we show that these coefficients can be expressed as a sum of an adiabatic contribution and one reminiscent of a Green-Kubo expression that contains deviations from adiabaticity. Furthermore, we show that the generalized kinetic coefficients fulfill an Onsager-Casimir-type symmetry tracing back to microscopic reversibility. This symmetry allows for nonidentical off-diagonal coefficients if the driving protocols are not symmetric under time reversal. We then derive a novel constraint on the kinetic coefficients that is sharper than the second law and provides an efficiency-dependent bound on power. As one consequence, we can prove that the power vanishes at least linearly when approaching Carnot efficiency. We illustrate our general framework by explicitly working out the paradigmatic case of a Brownian heat engine realized by a colloidal particle in a time-dependent harmonic trap subject to a periodic temperature profile. This case study reveals inter alia that our new general bound on power is asymptotically tight.

  19. Control dynamics of interaction quenched ultracold bosons in periodically driven lattices

    Science.gov (United States)

    Mistakidis, Simeon; Schmelcher, Peter; Group of Fundamental Processes in Quantum Physics Team

    2016-05-01

    The out-of-equilibrium dynamics of ultracold bosons following an interaction quench upon a periodically driven optical lattice is investigated. It is shown that an interaction quench triggers the inter-well tunneling dynamics, while for the intra-well dynamics breathing and cradle-like processes can be generated. In particular, the occurrence of a resonance between the cradle and tunneling modes is revealed. On the other hand, the employed periodic driving enforces the bosons in the mirror wells to oscillate out-of-phase and to exhibit a dipole mode, while in the central well the cloud experiences a breathing mode. The dynamical behaviour of the system is investigated with respect to the driving frequency revealing a resonant behaviour of the intra-well dynamics. To drive the system into a highly non-equilibrium state an interaction quench upon the driving is performed giving rise to admixtures of excitations in the outer wells, an enhanced breathing in the center and an amplification of the tunneling dynamics. As a result of the quench the system experiences multiple resonances between the inter- and intra-well dynamics at different quench amplitudes. Deutsche Forschungsgemeinschaft, SFB 925 ``Light induced dynamics and control of correlated quantum systems''.

  20. Identifying Time Periods of Minimal Thermal Gradient for Temperature-Driven Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    John Reilly

    2018-03-01

    Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature–deformation–displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.

  1. Identifying Time Periods of Minimal Thermal Gradient for Temperature-Driven Structural Health Monitoring.

    Science.gov (United States)

    Reilly, John; Glisic, Branko

    2018-03-01

    Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature-deformation-displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.
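    The first metric, flagging times when the range of raw temperatures across all sensors stays within a chosen tolerance, can be sketched in a few lines of numpy. The sensor data, layout, and tolerance below are hypothetical stand-ins for illustration, not readings from the Streicker Bridge:

```python
import numpy as np

# Hypothetical sensor data: rows = hourly time steps over two days,
# columns = temperature sensors at different points on the structure.
rng = np.random.default_rng(0)
hours = np.arange(48)
base = 15 + 8 * np.sin(2 * np.pi * (hours - 9) / 24)    # shared daily cycle
temps = base[:, None] + rng.normal(0, 0.3, (48, 4))     # 4 sensors + noise
# Impose a strong afternoon gradient across the cross-section (steps 30-39).
temps[30:40] += np.linspace(0, 3, 10)[:, None] * np.array([0, 1, 2, 3])

def minimal_gradient_periods(temps, tol):
    """Metric (i): mark time steps where the range of raw temperatures
    across all sensors is at most `tol` (degrees C)."""
    spread = temps.max(axis=1) - temps.min(axis=1)
    return spread <= tol

mask = minimal_gradient_periods(temps, tol=1.0)
print("time steps with minimal thermal gradient:", np.flatnonzero(mask))
```

    Varying `tol` trades off how many usable time periods are retained against how strictly thermal gradients are excluded, mirroring the tunable tolerance described in the abstract.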

  2. Two-rate periodic protocol with dynamics driven through many cycles

    Science.gov (United States)

    Kar, Satyaki

    2017-02-01

    We study the long-time dynamics in closed quantum systems periodically driven via time-dependent parameters with two frequencies ω1 and ω2 = r ω1. Tuning the ratio r can unleash a wealth of dynamical phenomena. Our study includes integrable models like Ising and XY models in d = 1 and the Kitaev model in d = 1 and 2 and can also be extended to Dirac fermions in graphene. We witness the wave-function overlap or dynamic freezing that occurs within some small/intermediate frequency regimes in the (ω1, r) plane (with r ≠ 0) when the ground state is evolved through a single cycle of driving. However, evolved states soon become steady with long driving, and the freezing scenario gets rarer. We extend the formalism of the adiabatic-impulse approximation for many-cycle driving within our two-rate protocol and show near-exact comparisons at small frequencies. An extension of the rotating wave approximation is also developed to obtain an analytical framework for the dynamics at high frequencies. Finally we compute the entanglement entropy in the stroboscopically evolved states within the gapped phases of the system and observe how it gets tuned with the ratio r in our protocol. The minimally entangled states are found to fall within the regime of dynamical freezing. In general, the results indicate that the entanglement entropy in our driven short-ranged integrable systems follows a genuine nonarea law of scaling and shows a convergence (with an r-dependent pace) towards volume scaling behavior as the driving is continued for a long time.

  3. Dynamic Phases in Driven Vortex Lattices in Superconductors with Periodic Pinning Arrays.

    Science.gov (United States)

    Reichhardt, C.; Olson, C. J.; Nori, F.

    1997-03-01

    In an extensive series of simulations of driven vortices interacting with periodic pinning arrays, an extremely rich variety of novel plastic flow phases, very distinct from those observed in random arrays, is found as a function of applied driving force. We show that signatures of the transitions between these different dynamical phases appear as pronounced jumps and dips in the I-V curves, coinciding with marked changes in the microscopic structure and flow behavior of the vortex lattice. When the number of vortices is greater than the number of pinning sites, we observe up to six distinct dynamical phases, including a pinned phase, a flow of interstitial vortices between pinned vortices, a disordered flow, a 1D flow along the pinning rows, and a homogeneous flow. By varying a wide range of microscopic pinning parameters, including pinning strength, size, density, and degree of ordering, as well as varying temperature and commensurability, we obtain a series of dynamic phase diagrams. A short video will also be presented to highlight these different dynamic phases.

  4. Insolation driven biomagnetic response to the Holocene Warm Period in semi-arid East Asia

    Science.gov (United States)

    Liu, Suzhen; Deng, Chenglong; Xiao, Jule; Li, Jinhua; Paterson, Greig A.; Chang, Liao; Yi, Liang; Qin, Huafeng; Pan, Yongxin; Zhu, Rixiang

    2015-01-01

    The Holocene Warm Period (HWP) provides valuable insights into the climate system and biotic responses to environmental variability and thus serves as an excellent analogue for future global climate changes. Here we document, for the first time, that warm and wet HWP conditions were highly favourable for magnetofossil proliferation in the semi-arid Asian interior. The pronounced increase of magnetofossil concentrations at ~9.8 ka and decrease at ~5.9 ka in Dali Lake coincided respectively with the onset and termination of the HWP, and are respectively linked to increased nutrient supply due to postglacial warming and poor nutrition due to drying at ~6 ka in the Asian interior. The two-stage transition at ~7.7 ka correlates well with increased organic carbon in middle HWP and suggests that improved climate conditions, leading to high quality nutrient influx, fostered magnetofossil proliferation. Our findings represent an excellent lake record in which magnetofossil abundance is, through nutrient availability, controlled by insolation driven climate changes.

  5. Patterns and singular features of extreme fluctuational paths of a periodically driven system

    International Nuclear Information System (INIS)

    Chen, Zhen; Liu, Xianbin

    2016-01-01

    Large fluctuations of an overdamped periodically driven oscillating system are investigated theoretically and numerically in the limit of weak noise. Optimal paths fluctuating to a certain point are given by statistical analysis using the concept of the prehistory probability distribution. The validity of the statistical results is verified by solutions of a boundary value problem. Optimal paths are found to change topologically when terminating points lie on opposite sides of a switching line. Patterns of extreme paths are plotted through a proper parameterization of a Lagrangian manifold having complicated structures. Several extreme paths to the same point are obtained as multiple solutions of the boundary value problem. Actions along various extreme paths are calculated and associated analysis is performed in relation to the singular features of the patterns. - Highlights: • Both extreme and optimal paths are obtained by various methods. • Boundary value problems are solved to ensure the validity of statistical results. • Topological structure of the Lagrangian manifold is considered. • Singularities of the pattern of extreme paths are studied.

  6. Derivation of Markov processes that violate detailed balance

    Science.gov (United States)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
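    The detailed balance condition discussed here is easy to test numerically: compute the stationary distribution π of a transition matrix P and compare the probability fluxes π_i P_ij and π_j P_ji. A minimal numpy sketch, with illustrative matrices that are not taken from the paper:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P
    (left eigenvector for eigenvalue 1, normalized to sum to 1)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def detailed_balance_violation(P):
    """Max |pi_i P_ij - pi_j P_ji|; zero iff detailed balance holds."""
    pi = stationary(P)
    flux = pi[:, None] * P          # probability flux i -> j
    return np.abs(flux - flux.T).max()

# A cyclic ("driven") chain: probability circulates 0 -> 1 -> 2 -> 0.
cyclic = np.array([[0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.8, 0.1, 0.1]])
# A reversible chain: symmetric P with uniform pi obeys detailed balance.
reversible = np.array([[0.5, 0.25, 0.25],
                       [0.25, 0.5, 0.25],
                       [0.25, 0.25, 0.5]])

print(detailed_balance_violation(cyclic))      # > 0: net cycle current
print(detailed_balance_violation(reversible))  # ~ 0: no net flux
```

    The cyclic chain carries a net probability current around the loop, the hallmark of a system driven by an external agent; the symmetric chain has zero net flux on every edge.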

  7. Markov stochasticity coordinates

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  8. Decisive Markov Chains

    OpenAIRE

    Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard

    2007-01-01

    We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...

  9. Markov stochasticity coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2017-01-15

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  10. Anomalous edge states and the bulk-edge correspondence for periodically-driven two dimensional systems

    DEFF Research Database (Denmark)

    Rudner, Mark Spencer; Lindner, Netanel; Berg, Erez

    2013-01-01

    revealed phenomena that cannot be characterized by analogy to the topological classification framework for static systems. In particular, in driven systems in two dimensions (2D), robust chiral edge states can appear even though the Chern numbers of all the bulk Floquet bands are zero. Here, we elucidate the crucial distinctions between static and driven 2D systems, and construct a new topological invariant that yields the correct edge-state structure in the driven case. We provide formulations in both the time and frequency domains, which afford additional insight into the origins of the “anomalous” spectra that arise in driven systems. Possibilities for realizing these phenomena in solid-state and cold-atomic systems are discussed.

  11. Semi-Markov processes

    CERN Document Server

    Grabski

    2014-01-01

    Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and

  12. Periodically driven random quantum spin chains: real-space renormalization for Floquet localized phases

    Science.gov (United States)

    Monthus, Cécile

    2017-07-01

    When random quantum spin chains are subjected to periodic Floquet driving, the eigenstates of the time-evolution operator over one period can be localized in real space. For the case of periodic quenches between two Hamiltonians (or periodic kicks), where the time-evolution operator over one period reduces to the product of two simple transfer matrices, we propose a block-self-dual renormalization procedure to construct the localized eigenstates of the Floquet dynamics. We also discuss the corresponding strong disorder renormalization procedure, which generalizes the RSRG-X procedure to construct the localized eigenstates of time-independent Hamiltonians.

  13. Markov Trends in Macroeconomic Time Series

    NARCIS (Netherlands)

    R. Paap (Richard)

    1997-01-01

    Many macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the
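    A minimal simulation conveys the idea: a latent two-state chain (expansion/recession) selects the slope of the trend at each date. The transition probabilities, slopes, and noise level below are made up for illustration and are not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-regime Markov trend: state 0 = expansion (long spells,
# positive slope), state 1 = recession (short spells, negative slope).
T = np.array([[0.95, 0.05],    # expansions persist
              [0.30, 0.70]])   # recessions are shorter
slopes = np.array([0.8, -1.2])

def simulate_markov_trend(n, T, slopes, sigma=0.5):
    """Draw a regime path from the chain and accumulate the segmented
    trend with regime-dependent slope plus Gaussian noise."""
    states = np.empty(n, dtype=int)
    y = np.empty(n)
    states[0], y[0] = 0, 0.0
    for t in range(1, n):
        states[t] = rng.choice(2, p=T[states[t - 1]])
        y[t] = y[t - 1] + slopes[states[t]] + sigma * rng.normal()
    return states, y

states, y = simulate_markov_trend(200, T, slopes)
print("fraction of periods in recession:", states.mean())
```

    With these transition probabilities, the long-run recession fraction is 0.05 / (0.05 + 0.30) ≈ 0.14, matching the asymmetry between long expansions and short recessions described in the abstract.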

  14. Fields From Markov Chains

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.
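    For the underlying Markov chain itself, the entropy rate is the explicit quantity H = -Σ_i π_i Σ_j P_ij log2 P_ij. A small sketch for an illustrative two-state chain (a run-length-constrained binary source forbidding the pair '11'), not the 2-D construction from the paper:

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij of a stationary
    Markov chain with row-stochastic transition matrix P (bits/symbol)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    logP = np.log2(np.where(P > 0, P, 1.0))   # 0*log 0 := 0
    return float(-np.sum(pi[:, None] * P * logP))

# Binary chain avoiding the pair '11': after a 1 the next symbol must
# be 0; after a 0 both symbols are equally likely.
P = np.array([[0.5, 0.5],
              [1.0, 0.0]])
print(entropy_rate(P))   # prints 0.666... (= 2/3 bits per symbol)
```

    The stationary distribution here is π = (2/3, 1/3), so the constraint lowers the entropy from 1 bit to 2/3 bit per symbol.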

  15. One-third (period three) harmonic generation in microwave-driven Josephson tunnel junctions

    DEFF Research Database (Denmark)

    Hansen, Jørn Bindslev; Clarke, J.; Mygind, Jesper

    1986-01-01

    One-third harmonic signals have been generated in the zero voltage state of a Josephson tunnel junction driven with a microwave current in the frequency range 8–20 GHz. The signal was as much as 50 dB above the noise level of the detector with a linewidth of less than 100 Hz. The junction parameters and microwave current were measured in situ in separate experiments. The subharmonic generation occurred for ranges of microwave current and frequency that were in reasonable agreement with the results of digital computer simulations. Applied Physics Letters is copyrighted by The American

  16. A New GMRES(m) Method for Markov Chains

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    This paper presents a new class of accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method by showing how to periodically combine the GMRES and vector extrapolation methods into a more efficient one for improving the convergence rate in Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.
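    The underlying problem is the singular linear system (I - Pᵀ)π = 0 with the normalization Σ_i π_i = 1, which GMRES(m) attacks iteratively for large sparse chains. The sketch below only illustrates that formulation with a small dense solve; it does not implement the paper's accelerated restarted GMRES, and the test chain is illustrative:

```python
import numpy as np

def stationary_vector(P):
    """Stationary vector of an irreducible chain: solve (I - P^T) pi = 0
    together with sum(pi) = 1 by replacing one (redundant) balance
    equation with the normalization row."""
    n = P.shape[0]
    A = np.eye(n) - P.T          # columns of I - P^T sum to zero,
    A[-1, :] = 1.0               # so one row is redundant; replace it
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# A small irreducible test chain.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
pi = stationary_vector(P)
print(pi, "residual:", np.abs(pi @ P - pi).max())
```

    For large chains this dense solve is replaced by Krylov iterations on the same singular system, which is where restarted GMRES and its extrapolation-based acceleration enter.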

  17. Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method

    Science.gov (United States)

    Han, Qun; Xu, Wei; Sun, Jian-Qiao

    2016-09-01

    The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
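    The direct Monte Carlo baseline used for verification can be sketched for a generic periodically and stochastically driven bistable oscillator: Euler-Maruyama integration of many sample paths, sampled stroboscopically once per driving period, with the steady-state PDF estimated by a histogram. The dynamics and all parameters below are illustrative stand-ins, not the SD oscillator of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_pdf(n_paths=2000, n_periods=50, steps=200,
                    A=0.3, omega=1.0, sigma=0.5):
    """Euler-Maruyama Monte Carlo for the overdamped bistable oscillator
    dx = (x - x**3 + A*cos(omega*t)) dt + sigma dW, with the ensemble
    observed stroboscopically after an integer number of periods."""
    dt = 2 * np.pi / omega / steps
    x = rng.normal(0.0, 0.1, n_paths)
    t = 0.0
    for _ in range(n_periods):
        for _ in range(steps):
            drift = x - x**3 + A * np.cos(omega * t)
            x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
            t += dt
    hist, edges = np.histogram(x, bins=60, range=(-2.5, 2.5), density=True)
    return x, hist, edges

x, hist, edges = monte_carlo_pdf()
print("sample mean:", x.mean(),
      " PDF integral:", (hist * np.diff(edges)).sum())
```

    The resulting stroboscopic histogram is bimodal around the two wells; this brute-force estimate is the kind of reference against which cell-mapping methods like GCM/STGA are checked, at far lower cost per parameter set.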

  18. Construction and properties of a topological index for periodically driven time-reversal invariant 2D crystals

    Directory of Open Access Journals (Sweden)

    D. Carpentier

    2015-07-01

    We present mathematical details of the construction of a topological invariant for periodically driven two-dimensional lattice systems with time-reversal symmetry and quasienergy gaps, which was proposed recently by some of us. The invariant is represented by a gap-dependent Z2-valued index that is simply related to the Kane–Mele invariants of quasienergy bands but contains extra information. As a byproduct, we prove new expressions for the two-dimensional Kane–Mele invariant relating the latter to Wess–Zumino amplitudes and the boundary gauge anomaly.

  19. Current-driven plasmonic boom instability in three-dimensional gated periodic ballistic nanostructures

    Science.gov (United States)

    Aizin, G. R.; Mikalopas, J.; Shur, M.

    2016-05-01

    An alternative approach of using a distributed transmission line analogy for solving transport equations for ballistic nanostructures is applied for solving the three-dimensional problem of electron transport in gated ballistic nanostructures with periodically changing width. The structures with varying width allow for modulation of the electron drift velocity while keeping the plasma velocity constant. We predict that in such structures biased by a constant current, a periodic modulation of the electron drift velocity due to the varying width results in the instability of the plasma waves if the electron drift velocity to plasma wave velocity ratio changes from below to above unity. The physics of such instability is similar to that of the sonic boom, but, in the periodically modulated structures, this analog of the sonic boom is repeated many times leading to a larger increment of the instability. The constant plasma velocity in the sections of different width leads to resonant excitation of the unstable plasma modes with varying bias current. This effect (that we refer to as the superplasmonic boom condition) results in a strong enhancement of the instability. The predicted instability involves the oscillating dipole charge carried by the plasma waves. The plasmons can be efficiently coupled to the terahertz electromagnetic radiation due to the periodic geometry of the gated structure. Our estimates show that the analyzed instability should enable powerful tunable terahertz electronic sources.

  20. Insolation driven biomagnetic response to Holocene Warm Period in semi-arid East Asia

    NARCIS (Netherlands)

    Liu, S.; Deng, Chenglong; Xiao, Jule; Li, Jinhua; Paterson, Greig; Chang, Liao; Yi, Liang; Qin, Huafeng; Pan, Yongxin; Zhu, Rixiang

    2015-01-01

    The Holocene Warm Period (HWP) provides valuable insights into the climate system and biotic responses to environmental variability and thus serves as an excellent analogue for future global climate changes. Here we document, for the first time, that warm and wet HWP conditions were highly

  1. Quantum model for a periodically driven selectivity filter in a K+ ion channel

    International Nuclear Information System (INIS)

    Cifuentes, A A; Semião, F L

    2014-01-01

    In this work, we present a quantum transport model for the selectivity filter in the KcsA potassium ion channel. This model is fully consistent with the fact that two conduction pathways are involved in the translocation of ions through the filter, and we show that the presence of a second path may actually bring advantages for the filter as a result of quantum interference. To highlight interferences and resonances in the model, we consider the selectivity filter to be driven by a controlled time-dependent external field, which changes the free-energy scenario and consequently the conduction of the ions. In particular, we demonstrate that the two-pathway conduction mechanism is more advantageous for the filter when dephasing in the transient configurations is lower than in the main configurations. As a matter of fact, K+ ions in the main configurations are highly coordinated by oxygen atoms of the filter backbone, and this increases noise. Moreover, we also show that for a wide range of dephasing rates and driving frequencies, the two-pathway conduction used by the filter leads to higher ionic currents than the single-path model.

  2. Markov Tail Chains

    OpenAIRE

    Janssen, Anja; Segers, Johan

    2013-01-01

    The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In ...

  3. Studies of phase return map and symbolic dynamics in a periodically driven Hodgkin-Huxley neuron

    International Nuclear Information System (INIS)

    Ding Jiong; Zhang Hong; Tong Qin-Ye; Chen Zhuo

    2014-01-01

    How neuronal spike trains encode external information is a hot topic in neurodynamics studies. In this paper, we investigate the dynamical states of the Hodgkin-Huxley neuron under periodic forcing. Depending on the parameters of the stimulus, the neuron exhibits periodic, quasiperiodic and chaotic spike trains. In order to analyze these spike trains quantitatively, we use the phase return map to describe the dynamical behavior on a one-dimensional (1D) map. According to the monotonicity or discontinuous points of the 1D map, the spike trains are transformed into symbolic sequences by implementing a coarse-grained algorithm, symbolic dynamics. Based on the ordering rules of symbolic dynamics, the parameters of the external stimulus can be measured at high resolution with finite-length symbolic sequences. A reasonable explanation for why the nervous system can discriminate or recognize small changes in external signals in a short time is also presented.

  4. Insolation driven biomagnetic response to the Holocene Warm Period in semi-arid East Asia

    OpenAIRE

    Liu, Suzhen; Deng, Chenglong; Xiao, Jule; Li, Jinhua; Paterson, Greig A.; Chang, Liao; Yi, Liang; Qin, Huafeng; Pan, Yongxin; Zhu, Rixiang

    2015-01-01

    The Holocene Warm Period (HWP) provides valuable insights into the climate system and biotic responses to environmental variability and thus serves as an excellent analogue for future global climate changes. Here we document, for the first time, that warm and wet HWP conditions were highly favourable for magnetofossil proliferation in the semi-arid Asian interior. The pronounced increase of magnetofossil concentrations at ~9.8 ka and decrease at ~5.9 ka in Dali Lake coincided respectively wit...

  5. Phase-locking of driven vortex lattices with transverse ac force and periodic pinning

    International Nuclear Information System (INIS)

    Reichhardt, Charles; Kolton, Alejandro B.; Dominguez, Daniel; Gronbech-Jensen, Niels

    2001-01-01

    For a vortex lattice moving in a periodic array we show analytically and numerically that a new type of phase locking occurs in the presence of a longitudinal dc driving force and a transverse ac driving force. This phase locking is distinct from the Shapiro step phase locking found with longitudinal ac drives. We show that an increase in critical current and a fundamental phase-locked step width scale with the square of the driving ac amplitude. Our results should carry over to other systems such as vortex motion in Josephson-junction arrays

  7. A new autogenous mobile system driven by vibration without impacts, excited by an impulse periodic force

    Directory of Open Access Journals (Sweden)

    Duong The-Hung

    2018-01-01

    Full Text Available This report describes a new proposed design for autogenous mobile systems which can move without any external mechanisms such as legs or wheels. A Duffing oscillator with a cubic spring, excited by an impulse periodic force, is utilized to drive the whole system. The rectilinear motion of the system is achieved through the periodic oscillation of an internal mass interacting without collisions with the main body. Utilizing the nonlinear restoring force of the cubic spring, the system can move in desired directions. When the ratio between the excitation force and the friction force is smaller than 2.5, backward or forward motion can be easily achieved by applying an excitation force in the same desired direction. Different from other vibro-impact drifting devices, no impacts are needed to drive the new proposed system. This novel structure makes it possible to miniaturize the device as well as to simplify the control algorithm, thus significantly expanding the applicability of the proposed system.

  8. Large deviation function for a driven underdamped particle in a periodic potential

    Science.gov (United States)

    Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo

    2018-02-01

    Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
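The second approach mentioned in this abstract, obtaining the large deviation function via the cumulant generating function, can be illustrated on a discrete stand-in: a biased random walk on a ring, a caricature of the driven particle in a periodic potential. The scaled cumulant generating function (SCGF) of the current is the log of the Perron eigenvalue of a tilted transition matrix, and the rate function follows by Legendre transform. The model and function names here are illustrative assumptions, not the paper's underdamped dynamics.

```python
import numpy as np

def tilted_scgf(p, L, s):
    """SCGF of the particle current for a biased random walk on a ring
    of L sites (right-jump probability p, left-jump probability 1-p).

    Right jumps are tilted by e^{+s}, left jumps by e^{-s}; the SCGF is
    the log of the largest eigenvalue of the tilted matrix.
    """
    q = 1.0 - p
    T = np.zeros((L, L))
    for i in range(L):
        T[(i + 1) % L, i] = p * np.exp(s)    # jump right, tilted
        T[(i - 1) % L, i] = q * np.exp(-s)   # jump left, tilted
    return np.log(np.max(np.real(np.linalg.eigvals(T))))

def ldf(p, L, v, s_grid):
    """Rate function by Legendre transform: I(v) = max_s (s*v - SCGF(s))."""
    return max(s * v - tilted_scgf(p, L, s) for s in s_grid)

s_grid = np.linspace(-2, 2, 81)
print(tilted_scgf(0.7, 8, 0.5))       # SCGF at tilt s = 0.5
print(ldf(0.7, 8, 0.4, s_grid))       # rate function at the mean velocity
```

At the mean velocity v = p - q the rate function vanishes, and the SCGF at s = 0 is zero because the untilted matrix is stochastic; both serve as quick sanity checks of the construction.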

  9. A Rigorous Theory of Many-Body Prethermalization for Periodically Driven and Closed Quantum Systems

    Science.gov (United States)

    Abanin, Dmitry; De Roeck, Wojciech; Ho, Wen Wei; Huveneers, François

    2017-09-01

    Prethermalization refers to the transient phenomenon where a system thermalizes according to a Hamiltonian that is not the generator of its evolution. We provide here a rigorous framework for quantum spin systems where prethermalization is exhibited for very long times. First, we consider quantum spin systems under periodic driving at high frequency ν. We prove that up to a quasi-exponential time τ* ∼ e^{cν/log³ν}, the system barely absorbs energy. Instead, there is an effective local Hamiltonian D̂ that governs the time evolution up to τ*, and hence this effective Hamiltonian is a conserved quantity up to τ*. Next, we consider systems without driving, but with a separation of energy scales in the Hamiltonian. A prime example is the Fermi-Hubbard model where the interaction U is much larger than the hopping J. Also here we prove the emergence of an effective conserved quantity, different from the Hamiltonian, up to a time τ* that is (almost) exponential in U/J.

  10. Directed transport in a periodic tube driven by asymmetric unbiased forces coexisting with spatially modulated noises

    International Nuclear Information System (INIS)

    Li Fengguo; Ai Baoquan

    2011-01-01

    Graphical abstract: The current J as a function of the phase shift φ and ε at a = 1/2π, b = 0.5/2π, k_B T = 0.5, α = 0.1, and F_0 = 0.5. Highlights: → Unbiased forces and spatially modulated white noises affect the current. → In the adiabatic limit, the analytical expression of the directed current is obtained. → Their competition will induce current reversals. → For negative asymmetric parameters of the force, there exists an optimum parameter. → The current increases monotonically for positive asymmetric parameters. - Abstract: Transport of Brownian particles in a symmetrically periodic tube is investigated in the presence of asymmetric unbiased external forces and spatially modulated Gaussian white noises. In the adiabatic limit, we obtain the analytical expression of the directed current. It is found that the temporal asymmetry can break thermodynamic equilibrium and induce a net current. The competition between the temporal asymmetry of the force and the phase shift between the noise modulation and the tube shape induces some peculiar phenomena, for example, current reversals. The current changes with the phase shift in the form of a sine function. For negative asymmetric parameters of the force, there exists an optimum parameter at which the current takes its maximum value. However, the current increases monotonically for positive asymmetric parameters.

  11. Phasic Triplet Markov Chains.

    Science.gov (United States)

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameter estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
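For contrast with the proposed triplet model, the plain hidden Markov chain baseline the abstract compares against can be sketched via the standard scaled forward recursion. The matrices below are toy values chosen for illustration, not the paper's fitted parameters.

```python
import numpy as np

def forward_loglik(A, B, pi, obs):
    """Log-likelihood of an observation sequence under a plain hidden
    Markov chain (the baseline the phasic triplet model improves on).

    A: state transition matrix, B: emission matrix, pi: initial law,
    obs: sequence of observation indices.  Uses the scaled forward
    recursion to avoid underflow on long sequences.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy two-state chain: "word" regime vs "arbitrary symbol" regime.
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.5, 0.5]])
pi = np.array([0.5, 0.5])
print(forward_loglik(A, B, pi, [0, 0, 1, 0]))
```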

  12. Analytical results on the periodically driven damped pendulum. Application to sliding charge-density waves and Josephson junctions

    International Nuclear Information System (INIS)

    Azbel, M.Y.; Bak, P.

    1984-01-01

    The differential equation ε φ'' + φ' − (1/2)α sin(2φ) = I + Σ_{n=−∞}^{∞} A_n δ(t − t_n), describing the periodically driven damped pendulum, is analyzed in the strong damping limit ε ≪ 1, using first-order perturbation theory. The equation may represent the motion of a sliding charge-density wave (CDW) in ac plus dc electric fields, and the resistively shunted Josephson junction driven by dc and microwave currents. When the torque I exceeds a critical value the pendulum rotates with a frequency ω. For infinite damping, or zero mass (ε = 0), the equation can be transformed to the Schroedinger equation of the Kronig-Penney model. When A_n is random the pendulum exhibits chaotic motion. In the regular case A_n = A the frequency ω is a smooth function of the parameters, so there are no phase-locked subharmonic plateaus in the ω(I) curve, or in the I-V characteristics for the CDW or Josephson-junction systems. For small nonzero ε the return map expressing the phase φ(t_{n+1}) as a function of the phase φ(t_n) is a one-dimensional circle map. Applying known analytical results for the circle map one finds narrow subharmonic plateaus at all rational frequencies, in agreement with experiments on CDW systems.
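The circle-map reduction at small ε can be illustrated numerically with the standard sine circle map as a stand-in for the return map; mode-locking plateaus in the winding number play the role of the subharmonic plateaus in the ω(I) curve. The map and parameter values are illustrative assumptions, not derived from the pendulum equation above.

```python
import numpy as np

def winding_number(Omega, K, n=2000):
    """Estimate the rotation (winding) number of the sine circle map,
    theta -> theta + Omega + (K/2pi) sin(2pi theta), a stand-in for the
    return map phi(t_{n+1}) = f(phi(t_n)) of the driven pendulum at
    small epsilon.
    """
    theta = 0.0
    for _ in range(n):
        theta = theta + Omega + (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)
    return theta / n          # average advance per period

# At K = 0 the winding number equals Omega (no locking); at K > 0
# plateaus appear around rational values of the winding number.
print(winding_number(0.5, 0.0))
print(winding_number(0.49, 0.9), winding_number(0.51, 0.9))
```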

  13. Experimental study of a thermoelectrically-driven liquid chiller in terms of COP and cooling down period

    International Nuclear Information System (INIS)

    Faraji, Amir Yadollah; Goldsmid, H.J.; Akbarzadeh, Aliakbar

    2014-01-01

    Highlights: • A COP of 0.8 is achievable for a thermoelectrically-driven water chiller. • With two commercially available TEC modules with ZT around 0.7, sub-zero temperatures became achievable. • Forced air convection heat exchangers have better COP and CDP compared with natural convection. • A PID controller has several advantages over an on-off controller for controlling a TEC module. - Abstract: To study the COP and other cooling parameters of a thermoelectrically-driven liquid chiller, a 430 ml capacity liquid chiller incorporating two commercially available thermoelectric modules as its active components has been designed, built and assessed. The system can use natural or forced air convection in heat exchangers attached to the thermoelectric module surfaces. The coefficient of performance (COP) and cooling down period (CDP) of the system for different thermoelectric input voltages have been measured. The COP of the thermoelectric chiller was found to be in the range 0.2-1.4 for forced convection and 0.2-1 for natural convection at a cooled liquid temperature of 10 °C and an ambient temperature of 18 °C. For the chiller, the heat pumping capacity, minimum achievable water temperature, and temperature difference across the thermoelectric surfaces were investigated for input voltages of 3 V, 5 V, 7 V, 10 V and 12 V. Furthermore, as a basis for reliable and convenient control of the chiller, a proportional integral derivative (PID) controller has been proposed.

  14. Markov set-chains

    CERN Document Server

    Hartfiel, Darald J

    1998-01-01

    In this study extending classical Markov chain theory to handle fluctuating transition matrices, the author develops a theory of Markov set-chains and provides numerous examples showing how that theory can be applied. Chapters are concluded with a discussion of related research. Readers who can benefit from this monograph are those interested in, or involved with, systems whose data is imprecise or that fluctuate with time. A background equivalent to a course in linear algebra and one in probability theory should be sufficient.

  15. Hidden Markov Models for Human Genes

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren; Chauvin, Yves

    1997-01-01

    We analyse the sequential structure of human genomic DNA by hidden Markov models. We apply models of widely different design: conventional left-right constructs and models with a built-in periodic architecture. The models are trained on segments of DNA sequences extracted such that they cover com...

  16. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jaco; Stoelinga, Mariëlle Ida Antoinette

    2016-01-01

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. As expected, the state space explosion threatens the analysability of these models. We therefore introduce confluence reduction for Markov automata, a powerful reduction

  18. Process algebra and Markov chains

    NARCIS (Netherlands)

    Brinksma, E.; Hermanns, H.; Brinksma, E.; Hermanns, H.; Katoen, J.P.

    2001-01-01

    This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study

  19. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    be obtained as a limiting value of a sample path of a suitable ... makes a mathematical model of chance and deals with the problem by .... Is the Markov chain aperiodic? It is! Here is how you can see it. Suppose that after you do the cut, you hold the top half in your right hand, and the bottom half in your left. Then there.

  20. Composable Markov Building Blocks

    NARCIS (Netherlands)

    Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.; Prade, H.; Subrahmanian, V.S.

    2007-01-01

    In situations where disjunct parts of the same process are described by their own first-order Markov models and only one model applies at a time (activity in one model coincides with non-activity in the other models), these models can be joined together into one. Under certain conditions, nearly all

  1. Composable Markov Building Blocks

    NARCIS (Netherlands)

    Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.

    2007-01-01

    In situations where disjunct parts of the same process are described by their own first-order Markov models, these models can be joined together under the constraint that there can only be one activity at a time, i.e. the activities of one model coincide with non-activity in the other models. Under

  2. Perturbed Markov chains

    OpenAIRE

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
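A minimal numerical version of the sensitivity question studied here: compute the stationary distribution of an irreducible transition matrix and of a perturbed copy, and compare the two. The example matrices are assumptions chosen for illustration, not from the paper.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible transition matrix P,
    obtained as the left Perron eigenvector (eigenvalue 1 of P.T)."""
    w, v = np.linalg.eig(P.T)
    k = np.argmin(np.abs(w - 1.0))
    pi = np.real(v[:, k])
    return pi / pi.sum()

# A small perturbation of the transition matrix and its effect on the
# stationary law -- the kind of sensitivity measured in the abstract.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
E = np.array([[-0.01, 0.01, 0.0],
              [0.00,  0.00, 0.0],
              [0.00,  0.00, 0.0]])
pi0, pi1 = stationary(P), stationary(P + E)
print(np.abs(pi0 - pi1).sum())   # L1 discrepancy between stationary laws
```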

  3. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article Volume 7 Issue 3 March 2002 pp 25-34. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.

  4. Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rissanen, Jorma

    1996-01-01

    Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...

  5. Prognostics for Steam Generator Tube Rupture using Markov Chain model

    International Nuclear Information System (INIS)

    Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin

    2016-01-01

    This paper describes a prognostics method for evaluating and forecasting the ageing effect and demonstrates the procedure for the Steam Generator Tube Rupture (SGTR) accident. The authors propose the data-driven method known as MCMC (Markov Chain Monte Carlo), which is preferred to physical-model methods in terms of flexibility and availability. Degradation data are represented as growth of the burst probability over time. The Markov chain model is based on transition probabilities between states, and the states must be discrete. Therefore, the burst probability, which is a continuous variable, has to be discretized before the Markov chain model can be applied to the degradation data. The Markov chain model, one of the prognostics methods, is described and a pilot demonstration for an SGTR accident is performed as a case study. The Markov chain model is attractive since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there is necessarily a loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be sufficiently reflected in the prediction. This should be noted as a limitation of discrete models. We are now studying other prognostics methods such as the GPM (General Path Model), which is also a data-driven method, as well as the particle filter, which belongs to the physical-model methods, and conducting a comparison analysis.
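The discretize-then-estimate procedure described above can be sketched as follows. The binned degradation paths and the absorbing-state convention for unvisited states are illustrative assumptions, not the paper's SGTR data.

```python
import numpy as np

def fit_transition_matrix(paths, n_states):
    """Estimate a discrete Markov transition matrix from degradation
    paths that have already been binned into states (the discretization
    step noted as a limitation in the abstract)."""
    C = np.zeros((n_states, n_states))
    for path in paths:
        for a, b in zip(path[:-1], path[1:]):
            C[a, b] += 1
    empty = C.sum(axis=1) == 0
    C[empty, empty] = 1.0   # unvisited terminal states: absorbing (assumption)
    return C / C.sum(axis=1, keepdims=True)

def forecast(P, state, steps):
    """Distribution over degradation states after `steps` transitions."""
    d = np.zeros(P.shape[0])
    d[state] = 1.0
    for _ in range(steps):
        d = d @ P
    return d

# Hypothetical binned degradation histories (0 = intact ... 3 = near burst).
paths = [[0, 0, 1, 1, 2, 3], [0, 1, 1, 2, 2, 3], [0, 0, 0, 1, 2, 2]]
P = fit_transition_matrix(paths, 4)
print(forecast(P, 0, 5))   # state probabilities five periods ahead
```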

  6. Markov Chain: A Predictive Model for Manpower Planning ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Keywords: Markov Chain, Transition Probability Matrix, Manpower Planning, Recruitment, Promotion, .... movement of the workforce in Jordan productivity .... Planning periods, with T being the horizon, the value of t represents a session.

  7. Generalized Markov branching models

    OpenAIRE

    Li, Junping

    2005-01-01

    In this thesis, we first considered a modified Markov branching process incorporating both state-independent immigration and resurrection. After establishing the criteria for regularity and uniqueness, explicit expressions for the extinction probability and mean extinction time are presented. The criteria for recurrence and ergodicity are also established. In addition, an explicit expression for the equilibrium distribution is presented. We then moved on to investigate the basic proper...

  8. Pairwise Choice Markov Chains

    OpenAIRE

    Ragain, Stephen; Ugander, Johan

    2016-01-01

    As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...

  9. Distinguishing Hidden Markov Chains

    OpenAIRE

    Kiefer, Stefan; Sistla, A. Prasad

    2015-01-01

    Hidden Markov Chains (HMCs) are commonly used mathematical models of probabilistic systems. They are employed in various fields such as speech recognition, signal processing, and biological sequence analysis. We consider the problem of distinguishing two given HMCs based on an observation sequence that one of the HMCs generates. More precisely, given two HMCs and an observation sequence, a distinguishing algorithm is expected to identify the HMC that generates the observation sequence. Two HM...

  10. Fermionic Markov Chains

    OpenAIRE

    Fannes, Mark; Wouters, Jeroen

    2012-01-01

    We study a quantum process that can be considered as a quantum analogue for the classical Markov process. We specifically construct a version of these processes for free Fermions. For such free Fermionic processes we calculate the entropy density. This can be done either directly using Szegő's theorem for asymptotic densities of functions of Toeplitz matrices, or through an extension of said theorem to rates of functions, which we present in this article.

  11. Pemodelan Markov Switching Autoregressive

    OpenAIRE

    Ariyani, Fiqria Devi; Warsito, Budi; Yasin, Hasbi

    2014-01-01

    Transition from depreciation to appreciation of an exchange rate is one kind of regime switching that is ignored by classic time series models, such as ARIMA, ARCH, or GARCH. Therefore, the economic variables are modeled by Markov Switching Autoregressive (MSAR) models, which account for the regime switching. MLE is not applicable to parameter estimation because the regime is an unobservable variable, so filtering and smoothing are applied to obtain the regime probabilities of the observations. Using this model, tran...

  12. Approximate quantum Markov chains

    CERN Document Server

    Sutter, David

    2018-01-01

    This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...

  13. A relation between non-Markov and Markov processes

    International Nuclear Information System (INIS)

    Hara, H.

    1980-01-01

    With the aid of a transformation technique, it is shown that some memory effects in the non-Markov processes can be eliminated. In other words, some non-Markov processes are rewritten in a form obtained by the random walk process; the Markov process. To this end, two model processes which have some memory or correlation in the random walk process are introduced. An explanation of the memory in the processes is given. (orig.)

  14. Non-stationary Markov chains

    OpenAIRE

    Mallak, Saed

    1996-01-01

    Ankara : Department of Mathematics and Institute of Engineering and Sciences of Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaf 29). In this work, we studied the ergodicity of non-stationary Markov chains. We gave several examples with different cases. We proved that given a sequence of Markov chains such that the limit of this sequence is an ergodic Markov chain, then the limit of the combination ...

  15. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.

  16. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
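The Szegedy construction for a finite chain is compact enough to sketch: build the vectors |ψ_i⟩ from the rows of P, reflect about their span, swap the two registers, and Cesaro-average the position distribution. For the symmetric two-state chain below the averaged distribution comes out uniform, consistent with the convergence result stated in the abstract; the choice of initial state is an assumption of this sketch.

```python
import numpy as np

def szegedy_cesaro(P, T=200):
    """Cesaro-averaged position distribution of the Szegedy quantum walk
    induced from a row-stochastic matrix P (a small numerical companion
    to the convergence result described in the abstract)."""
    N = P.shape[0]
    # |psi_i> = |i> (x) sum_j sqrt(P_ij) |j>, stored as rows of length N*N
    psi = np.zeros((N, N * N))
    for i in range(N):
        for j in range(N):
            psi[i, i * N + j] = np.sqrt(P[i, j])
    Pi = psi.T @ psi                       # projector onto span{psi_i}
    R = 2 * Pi - np.eye(N * N)             # reflection about that span
    S = np.zeros((N * N, N * N))           # swap of the two registers
    for i in range(N):
        for j in range(N):
            S[j * N + i, i * N + j] = 1.0
    U = S @ R                              # one walk step
    state = psi.sum(axis=0) / np.sqrt(N)   # uniform mix of the psi_i (assumption)
    avg = np.zeros(N)
    for _ in range(T):
        state = U @ state
        # marginal of the first register: row sums of |amplitudes|^2
        avg += np.abs(state.reshape(N, N)) ** 2 @ np.ones(N)
    return avg / T

P = np.array([[0.5, 0.5], [0.5, 0.5]])     # symmetric two-state chain
print(szegedy_cesaro(P))                   # close to the uniform law
```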

  17. Musical Markov Chains

    Science.gov (United States)

    Volchenkov, Dima; Dawin, Jean René

    A system for using dice to compose music randomly is known as the musical dice game. The discrete time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into the transition matrices and studied by Markov chains. Contrary to human languages, entropy dominates over redundancy, in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on the blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.

  18. On Weak Markov's Principle

    DEFF Research Database (Denmark)

    Kohlenbach, Ulrich Wilhelm

    2002-01-01

    We show that the so-called weak Markov's principle (WMP) which states that every pseudo-positive real number is positive is underivable in E-HA + AC. Since allows one to formalize (atl eastl arge parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within...... the framework of Bishop-style mathematics (which has been open for about 20 years). The underivability even holds if the ine.ective schema of full comprehension (in all types) for negated formulas (in particular for -free formulas) is added, which allows one to derive the law of excluded middle...

  19. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
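The lattice-gas limit mentioned above is easy to sketch: a random-sequential update of the totally asymmetric simple exclusion process (TASEP) on a ring with periodic boundary conditions. This shows only the basic dynamics, not the lifted variant the paper proposes; the configuration and seed are arbitrary.

```python
import random

def tasep_sweep(config, rng):
    """One random-sequential update sweep of the TASEP on a ring:
    each chosen particle hops one site to the right iff the target
    site is empty (hard-core exclusion)."""
    L = len(config)
    for _ in range(L):
        i = rng.randrange(L)
        j = (i + 1) % L                  # periodic boundary
        if config[i] == 1 and config[j] == 0:
            config[i], config[j] = 0, 1  # hop right
    return config

rng = random.Random(0)
config = [1, 1, 0, 1, 0, 0, 0, 1]        # 4 particles on 8 sites
for _ in range(100):
    tasep_sweep(config, rng)
print(config, sum(config))               # particle number is conserved
```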

  20. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution

  1. Regeneration and general Markov chains

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kalashnikov

    1994-01-01

    Full Text Available Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide-sense regenerative process with the help of the splitting construction.

  2. Markov chains theory and applications

    CERN Document Server

    Sericola, Bruno

    2013-01-01

    Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest.The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the

  3. Quadratic Variation by Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Horel, Guillaume

    We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...

  4. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
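The time-transformation idea can be sketched by simulating jump times that are homogeneous on an operational time scale s = h(t) and mapping them back to calendar time. Here h(t) = t^b is an assumed illustrative transformation, not the form estimated by the paper's Fisher scoring procedure.

```python
import random

def simulate_transformed(q, b, t_max, rng):
    """Simulate jump times of a process that is homogeneous with rate q
    on the operational time scale s = t**b, mapping each jump back to
    calendar time via t = s**(1/b).  For b != 1 the calendar-time
    process is nonhomogeneous, with intensity q*b*t**(b-1).
    """
    s, jumps = 0.0, []
    while True:
        s += rng.expovariate(q)          # homogeneous on the s-scale
        t = s ** (1.0 / b)               # back-transform to calendar time
        if t > t_max:
            return jumps
        jumps.append(t)

rng = random.Random(1)
jumps = simulate_transformed(1.0, 2.0, 10.0, rng)
# With b = 2 the intensity grows linearly in t, so jumps cluster late.
print(len(jumps), jumps[:3])
```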

  5. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Systat Software Asia-Pacific Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes ... In Part 4, we discuss some applications of the Markov ... one can construct the joint probability distribution of.
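
    Record 5 surveys Markov chain Monte Carlo methods; as a minimal self-contained illustration (the target distribution and all names are mine, not from the article), a Metropolis sampler over a three-state unnormalized target:

```python
import random

def metropolis_sample(weights, n_steps, seed=0):
    """Minimal Metropolis sampler on states 0..K-1 with an unnormalized
    target 'weights' and a symmetric uniform proposal."""
    rng = random.Random(seed)
    k = len(weights)
    state = 0
    counts = [0] * k
    for _ in range(n_steps):
        proposal = rng.randrange(k)          # symmetric proposal
        accept = weights[proposal] / weights[state]
        if rng.random() < min(1.0, accept):
            state = proposal
        counts[state] += 1                   # tally the chain's visits
    return [c / n_steps for c in counts]

# Target proportional to [1, 2, 3], i.e. [1/6, 1/3, 1/2].
freqs = metropolis_sample([1.0, 2.0, 3.0], n_steps=200_000)
```

The empirical visit frequencies converge to the normalized target, which is the basic MCMC guarantee the survey builds on.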

  6. Reviving Markov processes and applications

    International Nuclear Information System (INIS)

    Cai, H.

    1988-01-01

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of processes like the piecewise-deterministic Markov process, the virtual waiting time process and the first entrance decomposition (taboo probability).

  7. Confluence reduction for Markov automata

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  8. Confluence Reduction for Markov Automata

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Braberman, Victor; Fribourg, Laurent

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  9. The impact of nurse-driven targeted HIV screening in 8 emergency departments: study protocol for the DICI-VIH cluster-randomized two-period crossover trial.

    Science.gov (United States)

    Leblanc, Judith; Rousseau, Alexandra; Hejblum, Gilles; Durand-Zaleski, Isabelle; de Truchis, Pierre; Lert, France; Costagliola, Dominique; Simon, Tabassome; Crémieux, Anne-Claude

    2016-02-01

    In 2010, to reduce late HIV diagnosis, the French national health agency endorsed non-targeted HIV screening in health care settings. Despite these recommendations, non-targeted screening has not been implemented and only physician-directed diagnostic testing is currently performed. A survey conducted in 2010 in 29 French Emergency Departments (EDs) showed that non-targeted nurse-driven screening was feasible, though only a few new HIV diagnoses were identified, predominantly among high-risk groups. A strategy targeting high-risk groups, combined with current practice, could prove feasible, more efficient, and more cost-effective than current practice alone. DICI-VIH (acronym for nurse-driven targeted HIV screening) is a multicentre, cluster-randomized, two-period crossover trial. The primary objective is to compare the effectiveness of 2 strategies for diagnosing HIV among adult patients visiting EDs: nurse-driven targeted HIV screening combined with current practice (physician-directed diagnostic testing) versus current practice alone. Main secondary objectives are to compare access to specialist consultation and how early HIV diagnosis occurs in the course of the disease between the 2 groups, and to evaluate the implementation, acceptability and cost-effectiveness of nurse-driven targeted screening. The 2 strategies take place during 2 randomly assigned periods in 8 EDs of metropolitan Paris, where 42 % of France's new HIV patients are diagnosed every year. All patients aged 18 to 64 not presenting secondary to HIV exposure are included. During the intervention period, patients are invited to fill in a 7-item questionnaire (country of birth, sexual partners, and injection drug use) in order to select individuals who are offered a rapid test. If the rapid test is reactive, a follow-up visit with an infectious disease specialist is scheduled within 72 h. 
Assuming an 80 % statistical power and a 5 % type 1 error, with 1.04 and 3.38 new diagnoses per 10,000 patients in
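
    The truncated power statement above (80 % power, 5 % type 1 error, 1.04 vs. 3.38 new diagnoses per 10,000 patients) can be roughly reproduced with the standard formula for comparing two independent proportions. This sketch is mine, ignores the cluster-randomized crossover design effect, and therefore only indicates the order of magnitude of the required sample size.

```python
import math

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for comparing two independent proportions via the
    normal approximation (no design effect, no continuity correction)."""
    z_alpha = 1.959964  # Phi^{-1}(1 - alpha/2) for alpha = 0.05
    z_beta = 0.841621   # Phi^{-1}(power) for power = 0.80
    p_bar = (p1 + p2) / 2.0
    num = (z_alpha * math.sqrt(2.0 * p_bar * (1.0 - p_bar))
           + z_beta * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Rates per patient corresponding to 1.04 and 3.38 per 10,000.
n_per_group = sample_size_two_proportions(1.04e-4, 3.38e-4)
```

With such rare outcomes the formula calls for tens of thousands of patients per arm, which is consistent with running the trial across 8 high-volume EDs.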

  10. Spatio-temporal organization of dynamics in a two-dimensional periodically driven vortex flow: A Lagrangian flow network perspective.

    Science.gov (United States)

    Lindner, Michael; Donner, Reik V

    2017-03-01

    We study the Lagrangian dynamics of passive tracers in a simple model of a driven two-dimensional vortex resembling real-world geophysical flow patterns. Using a discrete approximation of the system's transfer operator, we construct a directed network that describes the exchange of mass between distinct regions of the flow domain. By studying different measures characterizing flow network connectivity at different time-scales, we are able to identify the location of dynamically invariant structures and regions of maximum dispersion. Specifically, our approach allows us to delimit co-existing flow regimes with different dynamics. To validate our findings, we compare several network characteristics to the well-established finite-time Lyapunov exponents and apply a receiver operating characteristic analysis to identify network measures that are particularly useful for unveiling the skeleton of Lagrangian chaos.
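
    The transfer-operator discretization described above can be sketched as follows. For brevity the driven vortex flow is replaced by the Chirikov standard map (my substitution, not the paper's model): tracers are seeded in each grid cell, advected one step, and the exchanged mass between cells defines the directed flow network.

```python
import math

def standard_map(x, y, K=1.5):
    """One step of the Chirikov standard map on the unit torus,
    standing in for the driven vortex flow."""
    y_new = (y + K / (2 * math.pi) * math.sin(2 * math.pi * x)) % 1.0
    x_new = (x + y_new) % 1.0
    return x_new, y_new

def flow_network(n_cells=10, pts_per_side=5):
    """Discrete transfer operator: seed tracers in each grid cell,
    advect one step, and count the mass exchanged between cells."""
    P = [[0.0] * (n_cells * n_cells) for _ in range(n_cells * n_cells)]
    for i in range(n_cells):
        for j in range(n_cells):
            src = i * n_cells + j
            for a in range(pts_per_side):
                for b in range(pts_per_side):
                    x = (i + (a + 0.5) / pts_per_side) / n_cells
                    y = (j + (b + 0.5) / pts_per_side) / n_cells
                    xn, yn = standard_map(x, y)
                    di = min(n_cells - 1, int(xn * n_cells))
                    dj = min(n_cells - 1, int(yn * n_cells))
                    P[src][di * n_cells + dj] += 1.0 / pts_per_side ** 2
    return P

P = flow_network()
# Out-degree of each node: a crude proxy for local dispersion.
out_degree = [sum(1 for p in row if p > 0) for row in P]
```

Network measures such as out-degree or entropy of the rows then highlight regions of strong stretching, in the spirit of the finite-time Lyapunov exponent comparison in the paper.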

  11. Investigation of the climate-driven periodicity of shallow groundwater level fluctuations in a Central-Eastern European agricultural region

    Science.gov (United States)

    Garamhegyi, Tamás; Kovács, József; Pongrácz, Rita; Tanos, Péter; Hatvani, István Gábor

    2018-05-01

    The distribution and amount of groundwater, a crucial source of Earth's drinking and irrigation water, is changing due to climate-change effects. Therefore, it is important to understand groundwater behavior in extreme scenarios, e.g. drought. Shallow groundwater (SGW) level fluctuation under natural conditions displays periodic behavior, i.e. seasonal variation. Thus, the study aims to investigate (1) the periodic behavior of the SGW level time series of an agriculturally important and drought-sensitive region in Central-Eastern Europe - the Carpathian Basin, in the north-eastern part of the Great Hungarian Plain, and (2) its relationship to the European atmospheric pressure action centers. Data from 216 SGW wells were studied using wavelet spectrum analysis and wavelet coherence analyses for 1961-2010. Locally, a clear relationship exists between the absence of annual periodic behavior in the SGW level and the periodicity of droughts, as indicated by the self-calibrating Palmer Drought Severity Index and the Aridity Index. During the non-periodic intervals, significant drops in groundwater levels (average 0.5 m) were recorded in 89% of the wells. This result links the meteorological variables to the periodic behavior of SGW, and consequently, drought. On a regional scale, Mediterranean cyclones from the Gulf of Genoa (northwest Italy) were found to be a driving factor in the 8-yr periodic behavior of the SGW wells. The research documents an important link between SGW levels and local/regional climate variables or indices, thereby facilitating the necessary adaptation strategies on national and/or regional scales, as these must take into account the predictions of drought-related climatic conditions.

  12. Multitudes of Stable States in a Periodically Driven Electron-Nuclear Spin System in a Quantum Dot

    OpenAIRE

    Korenev, V. L.

    2010-01-01

    The periodical modulation of circularly polarized light with a frequency close to the electron spin resonance frequency induces a sharp change of the single electron spin orientation. Hyperfine interaction provides a feedback, thus fixing the precession frequency of the electron spin in the external and the Overhauser field near the modulation frequency. The nuclear polarization is bidirectional and the electron-nuclear spin system (ENSS) possesses a few stable states. A similar frequency-loc...

  13. Baseline Preferences for Daily, Event-Driven, or Periodic HIV Pre-Exposure Prophylaxis among Gay and Bisexual Men in the PRELUDE Demonstration Project

    Directory of Open Access Journals (Sweden)

    Stefanie J. Vaccher

    2017-12-01

    Full Text Available IntroductionThe effectiveness of daily pre-exposure prophylaxis (PrEP is well established. However, there has been increasing interest in non-daily dosing schedules among gay and bisexual men (GBM. This paper explores preferences for PrEP dosing schedules among GBM at baseline in the PRELUDE demonstration project.Materials and methodsIndividuals at high-risk of HIV were enrolled in a free PrEP demonstration project in New South Wales, Australia, between November 2014 and April 2016. At baseline, they completed an online survey containing detailed behavioural, demographic, and attitudinal questions, including their ideal way to take PrEP: daily (one pill taken every day, event-driven (pills taken only around specific risk events, or periodic (daily dosing during periods of increased risk.ResultsOverall, 315 GBM (98% of study sample provided a preferred PrEP dosing schedule at baseline. One-third of GBM expressed a preference for non-daily PrEP dosing: 20% for event-driven PrEP, and 14% for periodic PrEP. Individuals with a trade/vocational qualification were more likely to prefer periodic to daily PrEP [adjusted odds ratio (aOR = 4.58, 95% confidence intervals (95% CI: (1.68, 12.49], compared to individuals whose highest level of education was high school. Having an HIV-positive main regular partner was associated with strong preference for daily, compared to event-driven PrEP [aOR = 0.20, 95% CI: (0.04, 0.87]. Participants who rated themselves better at taking medications were more likely to prefer daily over periodic PrEP [aOR = 0.39, 95% CI: (0.20, 0.76].DiscussionIndividuals’ preferences for PrEP schedules are associated with demographic and behavioural factors that may impact on their ability to access health services and information about PrEP and patterns of HIV risk. At the time of data collection, there were limited data available about the efficacy of non-daily PrEP schedules, and clinicians only recommended daily PrEP to

  14. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  15. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
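
    Both records maximize entropy over (interval) Markov chains. For a fixed ergodic chain the quantity in question reduces to the entropy rate, which a short sketch can compute; the interval-maximization step itself is not reproduced here, and the example chains are mine.

```python
import math

def stationary_distribution(P, iters=1000):
    """Stationary distribution of an ergodic chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate in bits per step:
    H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    pi = stationary_distribution(P)
    return -sum(pi[i] * P[i][j] * math.log2(P[i][j])
                for i in range(len(P)) for j in range(len(P))
                if P[i][j] > 0.0)

h_fair = entropy_rate([[0.5, 0.5], [0.5, 0.5]])    # maximally unpredictable
h_biased = entropy_rate([[0.9, 0.1], [0.1, 0.9]])  # sticky, lower entropy
```

The fair chain attains 1 bit per step, the maximum for two states, which is why entropy maximization over a specification searches for implementations as close as possible to such uniform behavior.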

  16. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge of attention from the evolutionary computation community, particularly in the area of Estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov networks based EDAs are reviewed in the book. Hot current researc...

  17. Excess energy and decoherence factor of a qubit coupled to a one-dimensional periodically driven spin chain

    Science.gov (United States)

    Nag, Tanay

    2016-06-01

    We take a central spin model (CSM), consisting of a one-dimensional environmental Ising spin chain and a single qubit connected globally to all the spins of the environment, to study the excess energy (EE) of the environment and the logarithm of the decoherence factor, namely the generalized fidelity susceptibility per site (GFSS), associated with the qubit under a periodic driving of the transverse field term of the environment across its critical point using the Floquet theory. The coupling of the qubit, prepared in a pure state, to the transverse field of the spin chain yields two sets of EE corresponding to the two species of Floquet operators. In the limit of weak coupling, we derive an approximate expression for the GFSS after an infinite number of driving periods which can successfully estimate the low- and intermediate-frequency behavior of the GFSS obtained numerically with a large number of time periods. Our main focus is to analytically investigate the effect of the system-environment coupling strength on the EEs and the GFSS and relate the behavior of the GFSS to the EEs as a function of frequency by plausible analytical arguments. We explicitly show that the low-frequency beating-like pattern of the GFSS is an outcome of two frequencies, causing the oscillations in the two branches of EEs, that are dependent on the coupling strength. In the intermediate-frequency regime, the dip structure observed in the GFSS can be justified by the resonance peaks of the EEs at those coupling-parameter-dependent frequencies; the high-frequency saturation behavior of the EEs and the GFSS is controlled by the same static Hamiltonian, and the associated saturation values are related to the coupling strength.

  18. Markov chains and mixing times

    CERN Document Server

    Levin, David A; Wilmer, Elizabeth L

    2009-01-01

    This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r

  19. Markov Models for Handwriting Recognition

    CERN Document Server

    Plotz, Thomas

    2011-01-01

    Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden

  20. Low-frequency phase diagram of irradiated graphene and a periodically driven spin-1/2 XY chain

    Science.gov (United States)

    Mukherjee, Bhaskar; Mohan, Priyanka; Sen, Diptiman; Sengupta, K.

    2018-05-01

    We study the Floquet phase diagram of two-dimensional Dirac materials such as graphene and the one-dimensional (1D) spin-1/2 XY model in a transverse field in the presence of periodic time-varying terms in their Hamiltonians in the low drive frequency (ω) regime where standard 1/ω perturbative expansions fail. For graphene, such periodic time-dependent terms are generated via the application of external radiation of amplitude A0 and time period T = 2π/ω, while for the 1D XY model, they result from a two-rate drive protocol with a time-dependent magnetic field and nearest-neighbor couplings between the spins. Using the adiabatic-impulse method, whose predictions agree almost exactly with the corresponding numerical results in the low-frequency regime, we provide several semianalytic criteria for the occurrence of changes in the topology of the phase bands (eigenstates of the evolution operator U) of such systems. For irradiated graphene, we point out the role of the symmetries of the instantaneous Hamiltonian H(t) and the evolution operator U behind such topology changes. Our analysis reveals that at low frequencies, topology changes of irradiated graphene phase bands may also happen at t = T/3 and 2T/3 (apart from t = T), showing the necessity of analyzing the phase bands of the system for obtaining its phase diagrams. We chart out the phase diagrams at t = T/3, 2T/3, and T, where such topology changes occur, as a function of A0 and T using exact numerics, and compare them with the prediction of the adiabatic-impulse method. We show that several characteristics of these phase diagrams can be analytically understood from results obtained using the adiabatic-impulse method and point out the crucial contribution of the high-symmetry points in the graphene Brillouin zone to these diagrams. We study the modes that can appear at the edges of a finite-width strip of graphene and show that the change in the number of such modes agrees with the change in the

  1. A Markov deterioration model for predicting recurrent maintenance ...

    African Journals Online (AJOL)

    The parameters of the Markov chain model for predicting the condition of the road at a design period for the flexible pavement failures of wheel track rutting, cracks and pot holes were developed for the Niger State road network in Nigeria. Twelve sampled candidate roads were each subjected to standard inventory, traffic ...

  2. Coexistence for an Almost Periodic Predator-Prey Model with Intermittent Predation Driven by Discontinuous Prey Dispersal

    Directory of Open Access Journals (Sweden)

    Yantao Luo

    2017-01-01

    Full Text Available An almost periodic predator-prey model with intermittent predation and prey discontinuous dispersal is studied in this paper, which differs from the classical continuous and impulsive dispersal predator-prey models. The intermittent predation behavior of the predator species only happens in the channels between two patches where the discontinuous migration movement of the prey species occurs. Using analytic approaches and comparison theorems of the impulsive differential equations, sufficient criteria on the boundedness, permanence, and coexistence for this system are established. Finally, numerical simulations demonstrate that, for an intermittent predator-prey model, both the intermittent predation and intrinsic growth rates of the prey and predator species can greatly impact the permanence, extinction, and coexistence of the population.

  3. Multiple stable states of a periodically driven electron spin in a quantum dot using circularly polarized light

    Science.gov (United States)

    Korenev, V. L.

    2011-06-01

    The periodical modulation of circularly polarized light with a frequency close to the electron spin resonance frequency induces a sharp change of the single electron spin orientation. Hyperfine interaction provides a feedback, thus fixing the precession frequency of the electron spin in the external and the Overhauser field near the modulation frequency. The nuclear polarization is bidirectional and the electron-nuclear spin system (ENSS) possesses a few stable states. The same physics underlies the frequency-locking effect for two-color and mode-locked excitations. However, pulsed excitation with a mode-locked laser brings about a multitude of stable states in the ENSS in a quantum dot. The resulting precession frequencies of the electron spin differ between these states by a multiple of the modulation frequency. Under such conditions the ENSS represents a digital frequency converter with more than 100 stable channels.

  4. Martingales and Markov chains solved exercises and elements of theory

    CERN Document Server

    Baldi, Paolo; Priouret, Pierre

    2002-01-01

    CONDITIONAL EXPECTATIONS: Introduction; Definition and First Properties; Conditional Expectations and Conditional Laws; Exercises; Solutions. STOCHASTIC PROCESSES: General Facts; Stopping Times; Exercises; Solutions. MARTINGALES: First Definitions; First Properties; The Stopping Theorem; Maximal Inequalities; Square Integral Martingales; Convergence Theorems; Regular Martingales; Exercises; Problems; Solutions. MARKOV CHAINS: Transition Matrices, Markov Chains; Construction and Existence; Computations on the Canonical Chain; Potential Operators; Passage Problems; Recurrence, Transience; Recurrent Irreducible Chains; Periodicity; Exercises; Problems; Solution

  5. An Application of Graph Theory in Markov Chains Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Pavel Skalny

    2014-01-01

    Full Text Available The paper presents a reliability analysis realized for an industrial company. The aim of the paper is to present the usage of discrete time Markov chains and the flow-in-network approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue. The method is suitable for many systems occurring in practice where we can easily distinguish a discrete set of states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network. The maximal flow in the network corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected value of the amount of manufactured products for the given time period.
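
    The two-step recipe in this record — a stationary distribution over system states combined with a max-flow computation per state — can be sketched on a toy production network. The capacities and the two-state up/degraded chain below are illustrative, not taken from the paper, and the flow routine is the Edmonds-Karp (BFS) variant of Ford-Fulkerson.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow on a dense capacity matrix (list of lists)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break
        # Find the bottleneck along the path, then push flow along it.
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

def stationary(P, iters=1000):
    """Stationary distribution of the state chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy line: node 0 = source, 3 = sink, two parallel machines (nodes 1, 2).
full = [[0, 5, 5, 0], [0, 0, 0, 5], [0, 0, 0, 5], [0, 0, 0, 0]]
degraded = [[0, 5, 2, 0], [0, 0, 0, 5], [0, 0, 0, 2], [0, 0, 0, 0]]
P_states = [[0.9, 0.1], [0.5, 0.5]]            # up <-> degraded chain
pi = stationary(P_states)
expected_output = (pi[0] * max_flow(full, 0, 3)
                   + pi[1] * max_flow(degraded, 0, 3))
```

The expected production is the stationary-probability-weighted average of the per-state maximal flows, exactly the combination the abstract describes.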

  6. Consistency and refinement for Interval Markov Chains

    DEFF Research Database (Denmark)

    Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel

    2012-01-01

    Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...

  7. A Markov reward model checker

    NARCIS (Netherlands)

    Katoen, Joost P.; Maneesh Khattri, M.; Zapreev, I.S.; Zapreev, I.S.

    2005-01-01

    This short tool paper introduces MRMC, a model checker for discrete-time and continuous-time Markov reward models. It supports reward extensions of PCTL and CSL, and allows for the automated verification of properties concerning long-run and instantaneous rewards as well as cumulative rewards. In

  8. Adaptive Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1996-01-01

    Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding....

  9. Markov Decision Processes in Practice

    NARCIS (Netherlands)

    Boucherie, Richardus J.; van Dijk, N.M.

    2017-01-01

    It is over 30 years since D.J. White started his series of surveys on practical applications of Markov decision processes (MDP), over 20 years after the phenomenal book by Martin Puterman on the theory of MDP, and over 10 years since Eugene A. Feinberg and Adam Shwartz published their Handbook

  10. Approximating Markov Chains: What and why

    International Nuclear Information System (INIS)

    Pincus, S.

    1996-01-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. copyright 1996 American Institute of Physics
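
    The finite-state approximation described here is essentially Ulam's method. A sketch for the logistic map (my example system, not the paper's) discretizes [0, 1] into bins, estimates the transition matrix by advecting sample points, and reads off the approximate invariant distribution:

```python
def ulam_stationary(f, n_bins=50, pts_per_bin=50, iters=500):
    """Ulam's method: approximate a 1-D map on [0, 1] by a finite-state
    Markov chain and return the chain's stationary distribution."""
    P = [[0.0] * n_bins for _ in range(n_bins)]
    for i in range(n_bins):
        for k in range(pts_per_bin):
            # Sample points spread across bin i, mapped forward by f.
            x = (i + (k + 0.5) / pts_per_bin) / n_bins
            j = min(n_bins - 1, int(f(x) * n_bins))
            P[i][j] += 1.0 / pts_per_bin
    # Stationary distribution by power iteration.
    pi = [1.0 / n_bins] * n_bins
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n_bins))
              for j in range(n_bins)]
    return pi

logistic = lambda x: 4.0 * x * (1.0 - x)
pi = ulam_stationary(logistic)
# The invariant density of the logistic map diverges near 0 and 1,
# so the approximate stationary mass should pile up at the endpoints.
```

The "straightforward calculation" of the abstract is exactly this linear power iteration, replacing direct simulation of the nonlinear map.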

  11. Markov chains and mixing times

    CERN Document Server

    Levin, David A

    2017-01-01

    Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...

  12. Markov Chain Ontology Analysis (MCOA).

    Science.gov (United States)

    Frost, H Robert; McCray, Alexa T

    2012-02-03

    Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance among comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
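
    MCOA's adjusted transition matrix is not spelled out in the abstract. As an illustration of the underlying idea — eigenvector importance scores on an ergodic chain built from a directed class/instance graph — a PageRank-style damping adjustment (my stand-in for the paper's adjustment, with a made-up graph) can be sketched:

```python
def importance_scores(adj, damping=0.85, iters=200):
    """PageRank-style eigenvector scores on a directed graph given as an
    adjacency list {node: [successors]}. The damping term keeps the
    chain ergodic, analogous to MCOA's adjusted transition matrix."""
    nodes = list(adj)
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            succ = adj[v] or nodes        # dangling nodes link everywhere
            share = damping * score[v] / len(succ)
            for w in succ:
                new[w] += share
        score = new
    return score

# Tiny hypothetical class/instance graph: everything points at 'root'.
graph = {"root": [], "c1": ["root"], "c2": ["root"], "g1": ["c1", "c2"]}
scores = importance_scores(graph)
```

The scores are the leading eigenvector of the adjusted chain, so a class that many instances and subclasses point to accumulates the highest importance, which is the signal an enrichment analysis then ranks.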

  13. Markov processes characterization and convergence

    CERN Document Server

    Ethier, Stewart N

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists."[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference."-American Scientist"There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings."-Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts"Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook."-Journal of Statistical PhysicsMarkov Proce...

  14. The Markov moment problem and extremal problems

    CERN Document Server

    Kreĭn, M G; Louvish, D

    1977-01-01

    In this book, an extensive circle of questions originating in the classical work of P. L. Chebyshev and A. A. Markov is considered from the more modern point of view. It is shown how results and methods of the generalized moment problem are interlaced with various questions of the geometry of convex bodies, algebra, and function theory. From this standpoint, the structure of convex and conical hulls of curves is studied in detail and isoperimetric inequalities for convex hulls are established; a theory of orthogonal and quasiorthogonal polynomials is constructed; problems on limiting values of integrals and on least deviating functions (in various metrics) are generalized and solved; problems in approximation theory and interpolation and extrapolation in various function classes (analytic, absolutely monotone, almost periodic, etc.) are solved, as well as certain problems in optimal control of linear objects.

  15. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  16. Verification of Open Interactive Markov Chains

    OpenAIRE

    Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech

    2012-01-01

    Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far however, IMC verification did not consider compositionality properties, but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....

  17. Spectral methods for quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Szehr, Oleg

    2014-05-08

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.
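The spectrum-convergence link the thesis develops for quantum channels can be illustrated in the classical special case (a sketch with invented numbers): the second-largest eigenvalue modulus of the transition matrix sets the asymptotic convergence rate toward the stationary distribution.

```python
import numpy as np

# Illustrative classical two-state chain; the thesis treats quantum
# channels, for which the transition map generalizes this matrix.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Eigenvalue 1 belongs to the stationary distribution; the second-largest
# eigenvalue modulus (SLEM) controls the convergence rate.
slem = sorted(abs(np.linalg.eigvals(P)))[-2]   # here exactly 0.7

pi = np.array([2 / 3, 1 / 3])      # stationary: pi @ P == pi
mu = np.array([1.0, 0.0])          # start concentrated in state 0
dist = []
for _ in range(30):
    mu = mu @ P
    dist.append(abs(mu - pi).sum())
# dist[n] shrinks geometrically, like slem ** n
```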

  18. Spectral methods for quantum Markov chains

    International Nuclear Information System (INIS)

    Szehr, Oleg

    2014-01-01

    The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.

  19. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    2012-01-01

    If $(C_n)$ is a Markov chain on a discrete state space $S$, a Markov chain $(C_n, M_n)$ on the product space $S \times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both ...

  20. Criterion of Semi-Markov Dependent Risk Model

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun MO; Xiang Qun YANG

    2014-01-01

    A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results make the relations between the elements of the semi-Markov dependent risk model clearer and are also applicable to the Markov dependent risk model.

  1. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro

    2014-01-01

    Highlights: • The cross power spectral density in ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of the correlated and uncorrelated components. The uncorrelated component is described by a series of the Dirac delta functions that occur at the integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method to solve the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to the one that was developed by the author of this paper to calculate the neutron noise caused by cross section perturbations. The newly-developed Monte Carlo algorithm is benchmarked to the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained both with the newly-developed frequency domain Monte Carlo method and the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those with the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed

  2. Gold price effect on stock market: A Markov switching vector error correction approach

    Science.gov (United States)

    Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok

    2014-06-01

    Gold is a popular precious metal whose demand is driven not only by practical use but also by its role as an investment commodity, while a stock market index reflects a country's growth; the effect of the gold price on stock market behaviour is therefore the interest of this study. Markov Switching Vector Error Correction Models are applied to analyse the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. Moreover, there are numerous specifications of Markov Switching Vector Error Correction Models, and this paper compares the intercept-adjusted Markov Switching Vector Error Correction Model with the intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine which best captures the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model provides a more significant and reliable result than the intercept-adjusted Markov Switching Vector Error Correction Model.

  3. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
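The Markov decision process at the core of the measurement model can be illustrated with standard value iteration on a toy MDP (the states, actions, transition probabilities and rewards below are invented for illustration; they are not taken from the paper's strategy game):

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP.
# P[a, s, s'] = transition probability, R[s, a] = immediate reward.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * np.einsum('ast,t->sa', P, V)  # expected action values
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)  # greedy action per state
```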

  4. Quasi-Feller Markov chains

    Directory of Open Access Journals (Sweden)

    Jean B. Lasserre

    2000-01-01

    Full Text Available We consider the class of Markov kernels for which the weak or strong Feller property fails to hold at some discontinuity set. We provide a simple necessary and sufficient condition for existence of an invariant probability measure as well as a Foster-Lyapunov sufficient condition. We also characterize a subclass, the quasi (weak or strong) Feller kernels, for which the sequences of expected occupation measures share the same asymptotic properties as for (weak or strong) Feller kernels. In particular, it is shown that the sequences of expected occupation measures of strong and quasi strong-Feller kernels with an invariant probability measure converge setwise to an invariant measure.

  5. Markov process of muscle motors

    International Nuclear Information System (INIS)

    Kondratiev, Yu; Pechersky, E; Pirogov, S

    2008-01-01

    We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors

  6. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications Their Use in Reliability and DNA Analysis

    CERN Document Server

    Barbu, Vlad

    2008-01-01

    Semi-Markov processes are much more general and better adapted to applications than the Markov ones because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn time in the Markov case. This book concerns with the estimation of discrete-time semi-Markov and hidden semi-Markov processes

  7. Two-state Markov-chain Poisson nature of individual cellphone call statistics

    Science.gov (United States)

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier

    2016-07-01

    Unfolding the burst patterns in human activities and social interactions is a very important issue especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of cellphone conversation activities of 73 339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that the individual call events exhibit a pattern of bursts, that high activity periods are alternated with low activity periods. In both periods, the number of calls are exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we purport the existence of power-law distributions, via the ‘superposition of distributions’ mechanism. Our findings shed light on the origins of bursty patterns in other human activities.
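The minimal model described above, two Poisson call rates alternating under a first-order Markov chain, can be sketched as follows (the rates and switching probabilities are invented for illustration, not fitted values from the study):

```python
import random

random.seed(42)

# Hypothetical parameters: Poisson call rate per time step in each
# activity state, and per-step probability of leaving the current state.
RATES = {"high": 5.0, "low": 0.5}
SWITCH = {"high": 0.2, "low": 0.1}

def poisson_draw(lam):
    """Poisson(lam) by counting exponential inter-arrivals in [0, 1)."""
    t, n = random.expovariate(lam), 0
    while t < 1.0:
        n, t = n + 1, t + random.expovariate(lam)
    return n

counts = {"high": [], "low": []}
state = "low"
for _ in range(5000):
    counts[state].append(poisson_draw(RATES[state]))
    if random.random() < SWITCH[state]:   # first-order Markov switching
        state = "low" if state == "high" else "high"

mean_high = sum(counts["high"]) / len(counts["high"])
mean_low = sum(counts["low"]) / len(counts["low"])
```

Within each state the call counts are Poisson (exponential inter-call durations), while the alternation of the two regimes produces the bursty appearance of the combined series.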

  8. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    Full Text Available The hidden Markov model (HMM is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for the four macroeconomic variables: inflation (consumer price index (CPI, industrial production index (INDPRO, stock market index (S&P 500 and market volatility (VIX. At the end of each month, we calibrate HMM’s parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had similar regimes with the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top ranking stocks to buy. We compare the performances of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.

  9. Timed Comparisons of Semi-Markov Processes

    DEFF Research Database (Denmark)

    Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio

    2018-01-01

    ...semi-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being “faster than” between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...

  10. Probabilistic Reachability for Parametric Markov Models

    DEFF Research Database (Denmark)

    Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun

    2011-01-01

    Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression...

  11. Inhomogeneous Markov point processes by transformation

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Nielsen, Linda Stougaard

    2000-01-01

    We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach......, is that of exponential inhomogeneous Markov point processes. Statistical inference For such processes is discussed in some detail....

  12. Markov-modulated and feedback fluid queues

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.

    1998-01-01

    In the last twenty years the field of Markov-modulated fluid queues has received considerable attention. In these models a fluid reservoir receives and/or releases fluid at rates which depend on the actual state of a background Markov chain. In the first chapter of this thesis we give a short

  13. Classification Using Markov Blanket for Feature Selection

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Luo, Jian

    2009-01-01

    Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method of Markov blanket induction algorithm...... for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using a Markov blanket...... induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance....

  14. Quantum Markov Chain Mixing and Dissipative Engineering

    DEFF Research Database (Denmark)

    Kastoryano, Michael James

    2012-01-01

    This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state...... of the system at the present point in time, but not on the history of events. Very many important processes in nature are of this type, therefore a good understanding of their behaviour has turned out to be very fruitful for science. Markov chains always have a non-empty set of limiting distributions...... (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical...
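A classical illustration of the mixing question studied here (the thesis develops the quantum analogue via density matrices and channels): the total-variation distance between the evolving distribution and the stationary state is non-increasing and tends to zero, and mixing-time bounds quantify how many steps that takes. The chain below uses invented numbers.

```python
import numpy as np

# Ergodic three-state chain (illustrative; eigenvalues 1, 0.5, 0).
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
pi = np.array([0.25, 0.5, 0.25])   # stationary distribution: pi @ P == pi
mu = np.array([1.0, 0.0, 0.0])     # start concentrated in state 0

tv = []                            # total-variation distance per step
for _ in range(20):
    mu = mu @ P
    tv.append(0.5 * np.abs(mu - pi).sum())
# tv is monotone non-increasing and decays geometrically toward 0
```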

  15. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. 
FastSimBac is ...

  16. Schmidt games and Markov partitions

    International Nuclear Information System (INIS)

    Tseng, Jimmy

    2009-01-01

    Let T be a C^2-expanding self-map of a compact, connected, C^∞ Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x_0 in M, the set of points whose forward orbit closures miss x_0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions.

  17. Incorporating teleconnection information into reservoir operating policies using Stochastic Dynamic Programming and a Hidden Markov Model

    Science.gov (United States)

    Turner, Sean; Galelli, Stefano; Wilcox, Karen

    2015-04-01

    Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. 
Here teleconnections relating ...

  18. Finite Markov processes and their applications

    CERN Document Server

    Iosifescu, Marius

    2007-01-01

    A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models. The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic chains...

  19. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  20. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies. A detailed and rigorous presentation of Markov chains with discrete time and state space. An appendix presenting probabilistic notions that are necessary...

  1. A scaling analysis of a cat and mouse Markov chain

    NARCIS (Netherlands)

    Litvak, Nelli; Robert, Philippe

    Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space ${\cal S}$, a Markov chain $(C_n, M_n)$ on the product space ${\cal S}^2$, the cat and mouse Markov chain, is constructed.

  2. Semi adiabatic theory of seasonal Markov processes

    Energy Technology Data Exchange (ETDEWEB)

    Talkner, P [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    The dynamics of many natural and technical systems are essentially influenced by a periodic forcing. Analytic solutions of the equations of motion for periodically driven systems are generally not known. Simulations, numerical solutions or in some limiting cases approximate analytic solutions represent the known approaches to study the dynamics of such systems. Besides the regime of weak periodic forces where linear response theory works, the limit of a slow driving force can often be treated analytically using an adiabatic approximation. For this approximation to hold all intrinsic processes must be fast on the time-scale of a period of the external driving force. We developed a perturbation theory for periodically driven Markovian systems that covers the adiabatic regime but also works if the system has a single slow mode that may even be slower than the driving force. We call it the semi adiabatic approximation. Some results of this approximation for a system exhibiting stochastic resonance which usually takes place within the semi adiabatic regime are indicated. (author) 1 fig., 8 refs.
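The setting can be sketched with a two-state Markov process whose switching rates are periodically modulated (rates, modulation depth and driving frequency below are invented). In the slow-driving regime considered here, the occupation probability tracks the quasi-stationary value, which is what the adiabatic approximation predicts:

```python
import math

# Two-state master equation with periodically modulated rates:
#   dp/dt = -k01(t) * p + k10(t) * (1 - p)
#   k01(t) = k0 * (1 + a*sin(w*t)),  k10(t) = k0 * (1 - a*sin(w*t))
k0, a, w, dt = 1.0, 0.3, 0.2, 0.001   # driving slow: w << relaxation rate

p, trace = 0.5, []
for step in range(int(4 * 2 * math.pi / w / dt)):   # four driving periods
    t = step * dt
    k01 = k0 * (1 + a * math.sin(w * t))
    k10 = k0 * (1 - a * math.sin(w * t))
    p += dt * (-k01 * p + k10 * (1 - p))            # forward Euler step
    trace.append(p)
# p oscillates around 1/2 at the driving period, near the quasi-stationary
# value k10 / (k01 + k10) = (1 - a*sin(w*t)) / 2
```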

  3. Observation uncertainty in reversible Markov chains.

    Science.gov (United States)

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).

  4. Generated dynamics of Markov and quantum processes

    CERN Document Server

    Janßen, Martin

    2016-01-01

    This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic feature of quantum processes are oscillations, interference, lots of stationary states in bounded systems and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes are relaxations to a single stationary state. Quantum processes apply to systems where all variables, that control reversibility, are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant for the dynamic description. Their absence renders the dynamic irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put in...

  5. Confluence reduction for Markov automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models

  6. Semi-Markov Arnason-Schwarz models.

    Science.gov (United States)

    King, Ruth; Langrock, Roland

    2016-06-01

    We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.

  7. A Bayesian model for binary Markov chains

    Directory of Open Access Journals (Sweden)

    Belkheir Essebbar

    2004-02-01

Full Text Available This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is based on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
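For a single binary chain with independent rows, a Jeffreys Beta(1/2, 1/2) prior on each transition probability is conjugate, so the posterior mean is available in closed form; it is the correlation across probabilities in the note's model that necessitates MCMC. A minimal sketch of the conjugate special case, on hypothetical data:

```python
from collections import Counter

def posterior_means(seq):
    # n_ij = number of observed transitions i -> j in the binary sequence
    counts = Counter(zip(seq, seq[1:]))
    post = {}
    for i in (0, 1):
        a = 0.5 + counts[(i, 1)]   # Jeffreys prior + transitions to state 1
        b = 0.5 + counts[(i, 0)]   # Jeffreys prior + transitions to state 0
        post[i] = a / (a + b)      # posterior mean of P(i -> 1) under Beta(a, b)
    return post

seq = [0, 0, 1, 0, 1, 1, 1, 0, 0, 1]   # hypothetical observations
post = posterior_means(seq)
assert 0 < post[0] < 1 and 0 < post[1] < 1
```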

  8. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  9. Subharmonic projections for a quantum Markov semigroup

    International Nuclear Information System (INIS)

    Fagnola, Franco; Rebolledo, Rolando

    2002-01-01

    This article introduces a concept of subharmonic projections for a quantum Markov semigroup, in view of characterizing the support projection of a stationary state in terms of the semigroup generator. These results, together with those of our previous article [J. Math. Phys. 42, 1296 (2001)], lead to a method for proving the existence of faithful stationary states. This is often crucial in the analysis of ergodic properties of quantum Markov semigroups. The method is illustrated by applications to physical models

  10. Transition Effect Matrices and Quantum Markov Chains

    Science.gov (United States)

    Gudder, Stan

    2009-06-01

    A transition effect matrix (TEM) is a quantum generalization of a classical stochastic matrix. By employing a TEM we obtain a quantum generalization of a classical Markov chain. We first discuss state and operator dynamics for a quantum Markov chain. We then consider various types of TEMs and vector states. In particular, we study invariant, equilibrium and singular vector states and investigate projective, bistochastic, invertible and unitary TEMs.

  11. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    Energy Technology Data Exchange (ETDEWEB)

    Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)

    2008-07-18

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  12. Markov chains of nonlinear Markov processes and an application to a winner-takes-all model for social conformity

    International Nuclear Information System (INIS)

    Frank, T D

    2008-01-01

    We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)

  13. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

Digital images are used as information carriers in different sciences and technologies, and there is a trend toward increasing the number of bits per image pixel in order to obtain more information. In the paper, some methods of compression and contour detection based on two-dimensional Markov chains are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods match well-known analogues in efficiency but surpass them in processing speed. An image is separated into binary images that are processed in parallel, so the processing speed does not degrade as the number of bits per pixel increases. One more advantage of the methods is low consumption of energy resources: only logical procedures are used and there are no computing operations. The methods can be useful for processing images of any class and assignment in processing systems with limited time and energy resources.

  14. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated using synthetic data. Our results show that in the non-saline scenario the layer thicknesses are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be better estimated for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
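A minimal random-walk Metropolis sketch (not the adaptive sampler or the Maxwell-equation forward model used in the study) illustrates how MCMC draws from a posterior; the single parameter `theta` and the data are hypothetical.

```python
import math
import random

def log_post(theta, data, sigma=1.0):
    # Gaussian log-likelihood with a flat prior (up to an additive constant)
    return -sum((d - theta) ** 2 for d in data) / (2 * sigma ** 2)

rng = random.Random(0)
data = [1.8, 2.1, 2.0, 2.2, 1.9]   # hypothetical observations
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.gauss(0, 0.5)          # random-walk proposal
    if math.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
        theta = prop                          # Metropolis accept
    chain.append(theta)

post_mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
assert abs(post_mean - 2.0) < 0.1   # posterior concentrates near the data mean
```

An adaptive variant, as used in the paper, would additionally tune the proposal scale from the chain's history.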

  15. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Full Text Available Markov models have been used extensively in psychology of learning. Applications of hidden Markov models are rare however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task is analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
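The information criteria compared in the paper are simple functions of the maximized log-likelihood. A sketch with hypothetical log-likelihoods and parameter counts for two candidate HMMs:

```python
import math

# AIC = -2 log L + 2k; BIC = -2 log L + k log n, with k free parameters
# and n observations; lower values indicate a better fit/complexity trade-off.
def aic(loglik, k):
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    return -2 * loglik + k * math.log(n)

# Hypothetical maximized log-likelihoods and parameter counts:
models = {"2-state": (-520.0, 3), "3-state": (-515.0, 8)}
n = 400
best = min(models, key=lambda m: bic(*models[m], n))
assert best == "2-state"   # BIC's log(n) penalty favors the smaller model
```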

  16. Discrete-time semi-Markov modeling of human papillomavirus persistence

    Science.gov (United States)

    Mitchell, C. E.; Hudgens, M. G.; King, C. C.; Cu-Uvin, S.; Lo, Y.; Rompalo, A.; Sobel, J.; Smith, J. S.

    2011-01-01

    Multi-state modeling is often employed to describe the progression of a disease process. In epidemiological studies of certain diseases, the disease state is typically only observed at periodic clinical visits, producing incomplete longitudinal data. In this paper we consider fitting semi-Markov models to estimate the persistence of human papillomavirus (HPV) type-specific infection in studies where the status of HPV type(s) is assessed periodically. Simulation study results are presented indicating the semi-Markov estimator is more accurate than an estimator currently used in the HPV literature. The methods are illustrated using data from the HIV Epidemiology Research Study (HERS). PMID:21538985

  17. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    Science.gov (United States)

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions do not depend only on the present state (Markov assumption) but also on the past through time since entry in the present state. Despite that these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.

  18. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    Hanson, J.D.; Cary, J.R.; Meiss, J.D.

    1985-01-01

A continuous-time Markov chain is used to model motion in the neighborhood of a critical invariant circle for a Hamiltonian map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. For the case of a noble frequency, the chain is self-similar and the nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to the critical noble circle, the decay proceeds as t^{-4.05}.

  19. Algebraic decay in self-similar Markov chains

    International Nuclear Information System (INIS)

    Hanson, J.D.; Cary, J.R.; Meiss, J.D.

    1984-10-01

A continuous time Markov chain is used to model motion in the neighborhood of a critical noble invariant circle in an area-preserving map. States in the infinite chain represent successive rational approximants to the frequency of the invariant circle. The nonlinear integral equation for the first passage time distribution is solved exactly. The asymptotic distribution is a power law times a function periodic in the logarithm of the time. For parameters relevant to Hamiltonian systems the decay proceeds as t^{-4.05}.

  20. Hidden Markov models for the activity profile of terrorist groups

    OpenAIRE

    Raghavan, Vasanthan; Galstyan, Aram; Tartakovsky, Alexander G.

    2012-01-01

    The main focus of this work is on developing models for the activity profile of a terrorist group, detecting sudden spurts and downfalls in this profile, and, in general, tracking it over a period of time. Toward this goal, a $d$-state hidden Markov model (HMM) that captures the latent states underlying the dynamics of the group and thus its activity profile is developed. The simplest setting of $d=2$ corresponds to the case where the dynamics are coarsely quantized as Active and Inactive, re...

  1. Dichotomous Markov Noise: Exact Results for Out-of-Equilibrium Systems

    Science.gov (United States)

    Bena, Ioana

    Nonequilibrium systems driven by additive or multiplicative dichotomous Markov noise appear in a wide variety of physical and mathematical models. We review here some prototypical examples, with an emphasis on analytically-solvable situations. In particular, it has escaped attention till recently that the standard results for the long-time properties of such systems cannot be applied when unstable fixed points are crossed in the asymptotic regime. We show how calculations have to be modified to deal with these cases and present a few relevant applications — the hypersensitive transport, the rocking ratchet, and the stochastic Stokes' drift. These results reinforce the impression that dichotomous noise can be put on par with Gaussian white noise as far as obtaining analytical results is concerned. They convincingly illustrate the interplay between noise and nonlinearity in generating nontrivial behaviors of nonequilibrium systems and point to various practical applications.

  2. Zipf exponent of trajectory distribution in the hidden Markov model

    Science.gov (United States)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

This paper is the first step of generalization of the previously obtained full classification of the asymptotic behavior of the probability for Markov chain trajectories for the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.

  3. Zipf exponent of trajectory distribution in the hidden Markov model

    International Nuclear Information System (INIS)

    Bochkarev, V V; Lerner, E Yu

    2014-01-01

This paper is the first step of generalization of the previously obtained full classification of the asymptotic behavior of the probability for Markov chain trajectories for the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different

  4. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce the basic ideas of Markov chain modeling: the Markov property, discrete-time Markov chains (DTMCs) and continuous-time Markov chains (CTMCs). We also discuss how to find the steady-state distributions of these Markov chains and how they can be used to compute system performance metrics. The solution methodologies include a balance equation technique, limiting probab...
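The steady-state computation described above can be sketched for a small illustrative DTMC; power iteration is one of several ways to solve the balance equations pi = pi P.

```python
# Illustrative two-state DTMC (rows are transition probabilities).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Power iteration: repeatedly apply pi <- pi * P until it stops changing.
pi = [0.5, 0.5]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

# Flow balance pi[0] * 0.1 = pi[1] * 0.5 gives pi = (5/6, 1/6) for this chain.
assert abs(pi[0] - 5 / 6) < 1e-9 and abs(pi[1] - 1 / 6) < 1e-9
```

The stationary vector pi is then combined with per-state rates or costs to obtain performance metrics such as throughput or mean delay.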

  5. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  6. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of the drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug are a broader area since drugs may affect...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...

  7. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated. In order to answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to model internet navigation patterns.

  8. Markov decision processes in artificial intelligence

    CERN Document Server

    Sigaud, Olivier

    2013-01-01

    Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr

  9. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...

  10. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.

  11. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel

Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.

  12. Detecting Structural Breaks using Hidden Markov Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another. The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), which permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...

  13. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

The task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.

  14. Markov processes an introduction for physical scientists

    CERN Document Server

    Gillespie, Daniel T

    1991-01-01

Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level. Key Features: * A self-contained, pragmatic exposition of the needed elements of random variable theory * Logically integrated derivations of the Chapman-Kolmogorov e

  15. The distribution of dairy farm size in Poland: a markov approach based on information theory

    NARCIS (Netherlands)

    Tonini, A.; Jongeneel, R.

    2009-01-01

    This article sets out to analyse the evolution of the dairy farm structure of Poland during the post-socialist period. After focusing on how the farm structure has changed over time, an instrumental variable generalized cross entropy estimator is used to develop and estimate a Markov model in order

  16. Reliability analysis of Markov history-dependent repairable systems with neglected failures

    International Nuclear Information System (INIS)

    Du, Shijia; Zeng, Zhiguo; Cui, Lirong; Kang, Rui

    2017-01-01

    Markov history-dependent repairable systems refer to the Markov repairable systems in which some states are changeable and dependent on recent evolutional history of the system. In practice, many Markov history-dependent repairable systems are subjected to neglected failures, i.e., some failures do not affect system performances if they can be repaired promptly. In this paper, we develop a model based on the theory of aggregated stochastic processes to describe the history-dependent behavior and the effect of neglected failures on the Markov history-dependent repairable systems. Based on the developed model, instantaneous and steady-state availabilities are derived to characterize the reliability of the system. Four reliability-related time distributions, i.e., distribution for the k th working period, distribution for the k th failure period, distribution for the real working time in an effective working period, distribution for the neglected failure time in an effective working period, are also derived to provide a more comprehensive description of the system's reliability. Thanks to the power of the theory of aggregated stochastic processes, closed-form expressions are obtained for all the reliability indexes and time distributions. Finally, the developed indexes and analysis methods are demonstrated by a numerical example. - Highlights: • Markovian history-dependent repairable systems with neglected failures is modeled. • Aggregated stochastic processes are used to derive reliability indexes and time distributions. • Closed-form expressions are derived for the considered indexes and distributions.

  17. Cost Effective Community Based Dementia Screening: A Markov Model Simulation

    Directory of Open Access Journals (Sweden)

    Erin Saito

    2014-01-01

Full Text Available Background. Given the dementia epidemic and the increasing cost of healthcare, there is a need to assess the economic benefit of community based dementia screening programs. Materials and Methods. Markov model simulations were generated using data obtained from a community based dementia screening program over a one-year period. The models simulated yearly costs of caring for patients based on clinical transitions, beginning in pre-dementia and extending for 10 years. Results. A total of 93 individuals (74 female, 19 male) were screened for dementia and 12 meeting clinical criteria for either mild cognitive impairment (n=7) or dementia (n=5) were identified. Assuming early therapeutic intervention beginning during the year of dementia detection, Markov model simulations demonstrated a 9.8% reduction in the cost of dementia care over a ten-year simulation period, primarily through increased duration in mild stages and reduced time in the more costly moderate and severe stages. Discussion. Community based dementia screening can reduce healthcare costs associated with caring for demented individuals through earlier detection and treatment, resulting in proportionately reduced time in more costly advanced stages.
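A Markov cohort cost simulation of the kind described can be sketched as follows; the transition probabilities and stage costs below are illustrative placeholders, not the study's calibrated values.

```python
# Illustrative 3-state cohort model of dementia progression.
states = ["mild", "moderate", "severe"]
P = [[0.80, 0.15, 0.05],    # annual transitions from mild
     [0.00, 0.70, 0.30],    # from moderate (no recovery assumed)
     [0.00, 0.00, 1.00]]    # severe is absorbing
cost = [10_000, 30_000, 60_000]   # hypothetical annual cost per state

def total_cost(dist, years):
    # accumulate expected yearly cost, then advance the cohort one cycle
    total = 0.0
    for _ in range(years):
        total += sum(d * c for d, c in zip(dist, cost))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total

# Earlier detection corresponds to starting the cohort in the mild state.
early = total_cost([1.0, 0.0, 0.0], 10)
late = total_cost([0.0, 1.0, 0.0], 10)
assert early < late   # more time in mild stages lowers the 10-year cost
```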

  18. Prediction of Annual Rainfall Pattern Using Hidden Markov Model ...

    African Journals Online (AJOL)

    ADOWIE PERE

Hidden Markov model is very influential in the stochastic world because of its ... the earth from the clouds. The usual ... Rainfall modelling and ... Markov Models have become popular tools ... environment sciences, University of Jos, Plateau State.

  19. Extending Markov Automata with State and Action Rewards

    NARCIS (Netherlands)

    Guck, Dennis; Timmer, Mark; Blom, Stefan; Bertrand, N.; Bortolussi, L.

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are

  20. SDI and Markov Chains for Regional Drought Characteristics

    Directory of Open Access Journals (Sweden)

    Chen-Feng Yeh

    2015-08-01

    Full Text Available In recent years, global climate change has altered precipitation patterns, causing uneven spatial and temporal distribution of precipitation that gradually induces precipitation polarization phenomena. Taiwan is located in the subtropical climate zone, with distinct wet and dry seasons, which makes the polarization phenomenon more obvious; this has also led to a large difference between river flows during the wet and dry seasons, which is significantly influenced by precipitation, resulting in hydrological drought. Therefore, to effectively address the growing issue of water shortages, it is necessary to explore and assess the drought characteristics of river systems. In this study, the drought characteristics of northern Taiwan were studied using the streamflow drought index (SDI and Markov chains. Analysis results showed that the year 2002 was a turning point for drought severity in both the Lanyang River and Yilan River basins; the severity of rain events in the Lanyang River basin increased after 2002, and the severity of drought events in the Yilan River basin exhibited a gradual upward trend. In the study of drought severity, analysis results from periods of three months (November to January and six months (November to April have shown significant drought characteristics. In addition, analysis of drought occurrence probabilities using the method of Markov chains has shown that the occurrence probabilities of drought events are higher in the Lanyang River basin than in the Yilan River basin; particularly for extreme events, the occurrence probability of an extreme drought event is 20.6% during the dry season (November to April in the Lanyang River basin, and 3.4% in the Yilan River basin. This study shows that for analysis of drought/wet occurrence probabilities, the results obtained for the drought frequency and occurrence probability using short-term data with the method of Markov chains can be used to predict the long-term occurrence

  1. Perturbation theory for Markov chains via Wasserstein distance

    NARCIS (Netherlands)

    Rudolf, Daniel; Schweizer, Nikolaus

    2017-01-01

    Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. We prove powerful and flexible bounds on the distance of the nth step distributions of two Markov chains

  2. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    Science.gov (United States)

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
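The main result described above, the expected time spent in each state of a continuous-time Markov model, can be approximated without discrete-event simulation by integrating the Kolmogorov forward equation dp/dt = pQ. A minimal sketch, with an illustrative two-state generator rather than the paper's hepatitis B model:

```python
# Sketch: expected time spent in each state of a continuous-time Markov
# model, obtained by integrating the forward equation dp/dt = p Q rather
# than by discrete-event simulation. The two-state generator Q (rates per
# year) and the 10-year horizon are illustrative, not from the paper.

def expected_occupancy(Q, p0, T, steps=50_000):
    """Forward-Euler integration of p(t); returns occ[i] = integral of p_i(t) over [0, T]."""
    n = len(p0)
    p = list(p0)
    dt = T / steps
    occ = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            occ[i] += p[i] * dt
        p = [p[j] + dt * sum(p[i] * Q[i][j] for i in range(n))
             for j in range(n)]
    return occ

# State 0 = healthy, state 1 = sick; fall sick at 0.1/yr, recover at 0.5/yr.
Q = [[-0.1, 0.1],
     [0.5, -0.5]]
occ = expected_occupancy(Q, [1.0, 0.0], T=10.0)  # expected years in each state
```

Because the generator's rows sum to zero, the total occupancy always equals the horizon T; age-dependent transition rates, as in the article, would make Q a function of t inside the loop.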

  3. Quantum Enhanced Inference in Markov Logic Networks.

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
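Since inference in MLNs typically reduces to Gibbs sampling on the generated Markov network, the mechanics can be illustrated on the smallest possible case: two binary variables coupled by a single weighted formula. The weight and setup below are illustrative, not tied to any particular MLN.

```python
import math
import random

# Gibbs sampling on a two-variable pairwise Markov network with
# P(x, y) proportional to exp(w * [x == y]); w is an illustrative weight.
def gibbs_agreement(w, n_samples, rng):
    p_match = math.exp(w) / (math.exp(w) + 1.0)  # P(x = y | y), by symmetry
    x, y, agree = 0, 0, 0
    for _ in range(n_samples):
        x = y if rng.random() < p_match else 1 - y   # resample x given y
        y = x if rng.random() < p_match else 1 - x   # resample y given x
        agree += (x == y)
    return agree / n_samples

rng = random.Random(0)
freq = gibbs_agreement(w=2.0, n_samples=20_000, rng=rng)
# The exact agreement probability is exp(2) / (exp(2) + 1), about 0.88.
```

The quantum protocols surveyed in the paper aim to speed up exactly this kind of sampling via state preparation and measurement, rather than sequential conditional draws.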

  4. Markov Random Fields on Triangle Meshes

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas

    2010-01-01

    In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process label...

  5. A Martingale Decomposition of Discrete Markov Chains

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard

    We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...

  6. Renewal characterization of Markov modulated Poisson processes

    Directory of Open Access Journals (Sweden)

    Marcel F. Neuts

    1989-01-01

    Full Text Available A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λi whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P={i:λi>0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
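A two-state instance of the process defined above can be simulated directly: a continuous-time chain J(t) alternates between states, and M(t) jumps at the intensity of the current state. The rates below are illustrative, chosen only to make the modulation visible.

```python
import random

# Sketch of a two-state Markov Modulated Poisson Process: the chain J(t)
# alternates states at rates q[i]; events of M(t) arrive at rate lam[i]
# while J(t) is in state i. All rates are illustrative.
def simulate_mmpp(q, lam, T, rng):
    """q[i]: rate of leaving state i; lam[i]: Poisson intensity in state i.
    Returns the event times of M(t) on [0, T]."""
    t, state = 0.0, 0
    events = []
    while t < T:
        # Sojourn of the modulating chain in the current state.
        end = min(t + rng.expovariate(q[state]), T)
        # Poisson events at rate lam[state] during the sojourn.
        s = t
        while True:
            s += rng.expovariate(lam[state]) if lam[state] > 0 else float("inf")
            if s >= end:
                break
            events.append(s)
        t = end
        state = 1 - state  # two-state chain assumed
    return events

rng = random.Random(7)
events = simulate_mmpp(q=[1.0, 2.0], lam=[5.0, 0.5], T=50.0, rng=rng)
```

The renewal questions studied in the paper ask when the gaps between such events are i.i.d.; for a generic choice of rates like the one above they are not.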

  7. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…

  8. Bayesian analysis for reversible Markov chains

    NARCIS (Netherlands)

    Diaconis, P.; Rolles, S.W.W.

    2006-01-01

    We introduce a natural conjugate prior for the transition matrix of a reversible Markov chain. This allows estimation and testing. The prior arises from random walk with reinforcement in the same way the Dirichlet prior arises from Pólya’s urn. We give closed form normalizing constants, a simple

  9. Bisimulation and Simulation Relations for Markov Chains

    NARCIS (Netherlands)

    Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.

    2006-01-01

    Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.

  10. Discounted Markov games : generalized policy iteration method

    NARCIS (Netherlands)

    Wal, van der J.

    1978-01-01

    In this paper, we consider two-person zero-sum discounted Markov games with finite state and action spaces. We show that the Newton-Raphson or policy iteration method as presented by Pollatschek and Avi-Itzhak does not necessarily converge, contradicting a proof of Rao, Chandrasekaran, and Nair.

  11. Optimal dividend distribution under Markov regime switching

    NARCIS (Netherlands)

    Jiang, Z.; Pistorius, M.

    2012-01-01

    We investigate the problem of optimal dividend distribution for a company in the presence of regime shifts. We consider a company whose cumulative net revenues evolve as a Brownian motion with positive drift that is modulated by a finite state Markov chain, and model the discount rate as a

  12. Revisiting Weak Simulation for Substochastic Markov Chains

    DEFF Research Database (Denmark)

    Jansen, David N.; Song, Lei; Zhang, Lijun

    2013-01-01

    of the logic PCTL\X, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...

  13. Fracture Mechanical Markov Chain Crack Growth Model

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    1991-01-01

    propagation process can be described by a discrete space Markov theory. The model is applicable to deterministic as well as to random loading. Once the model parameters for a given material have been determined, the results can be used for any structure as soon as the geometrical function is known....

  14. Multi-dimensional quasitoeplitz Markov chains

    Directory of Open Access Journals (Sweden)

    Alexander N. Dudin

    1999-01-01

    Full Text Available This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP-input, which operate in synchronous random environment.

  15. Markov chains with quasitoeplitz transition matrix

    Directory of Open Access Journals (Sweden)

    Alexander M. Dukhovny

    1989-01-01

    Full Text Available This paper investigates a class of Markov chains which are frequently encountered in various applications (e.g. queueing systems, dams and inventories with feedback). Generating functions of transient and steady state probabilities are found by solving a special Riemann boundary value problem on the unit circle. A criterion of ergodicity is established.

  16. Markov Chain Estimation of Avian Seasonal Fecundity

    Science.gov (United States)

    To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...

  17. Noise can speed convergence in Markov chains.

    Science.gov (United States)

    Franzke, Brandon; Kosko, Bart

    2011-10-01

    A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.

  18. Model Checking Infinite-State Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.

    2004-01-01

    In this paper algorithms for model checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains of so-called quasi birth-death type are developed. In doing so we extend the applicability of CSL model checking beyond the recently proposed case for finite-state

  19. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  20. Nonlinearly perturbed semi-Markov processes

    CERN Document Server

    Silvestrov, Dmitrii

    2017-01-01

    The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...

  1. Quantum Enhanced Inference in Markov Logic Networks

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  2. Markov chain of distances between parked cars

    International Nuclear Information System (INIS)

    Seba, Petr

    2008-01-01

    We describe the distribution of distances between parked cars as a solution of certain Markov processes and show that its solution is obtained with the help of a distributional fixed point equation. Under certain conditions the process is solved explicitly. The resulting probability density is compared with the actual parking data measured in the city. (fast track communication)

  3. Continuity Properties of Distances for Markov Processes

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand

    2014-01-01

    In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...

  4. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

    In the past probabilistic model checking hast mostly been restricted to finite state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special

  5. Hidden Markov models for labeled sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1994-01-01

    A hidden Markov model for labeled observations, called a class HMM, is introduced and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences it is trained to optimize recognition. It resembles MMI...

  6. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  7. A Metrized Duality Theorem for Markov Processes

    DEFF Research Database (Denmark)

    Kozen, Dexter; Mardare, Radu Iulian; Panangaden, Prakash

    2014-01-01

    We extend our previous duality theorem for Markov processes by equipping the processes with a pseudometric and the algebras with a notion of metric diameter. We are able to show that the isomorphisms of our previous duality theorem become isometries in this quantitative setting. This opens the wa...

  8. Nonlinear Magnus-induced dynamics and Shapiro spikes for ac and dc driven skyrmions on periodic quasi-one-dimensional substrates

    Science.gov (United States)

    Reichhardt, Charles; Reichhardt, Cynthia J. Olson

    We numerically examine skyrmions interacting with a periodic quasi-one-dimensional substrate. When we drive the skyrmions perpendicular to the substrate periodicity direction, a rich variety of nonlinear Magnus-induced effects arise, in contrast to an overdamped system that shows only a linear velocity-force curve for this geometry. The skyrmion velocity-force curve is strongly nonlinear and we observe a Magnus-induced speed-up effect when the pinning causes the Magnus velocity response to align with the dissipative response. At higher applied drives these components decouple, resulting in strong negative differential conductivity. For skyrmions under combined ac and dc driving, we find a new class of phase locking phenomena in which the velocity-force curves contain a series of what we call Shapiro spikes, distinct from the Shapiro steps observed in overdamped systems. There are also regimes in which the skyrmion moves in the direction opposite to the applied dc drive to give negative mobility.

  9. The application of Markov decision process in restaurant delivery robot

    Science.gov (United States)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the aisle and customers coming and going, so traditional path-planning algorithms perform poorly. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, based on the traditional Markov decision process. The algorithm first uses MDR to plan a global path, then navigates along it. When the sensor detects no obstruction ahead, the algorithm increases the immediate reward of the current state; when the sensor detects an obstacle ahead, it plans a new global path that avoids the obstacle, taking the current position as the new starting point, and reduces the immediate reward of that state. This continues until the target is reached. After the robot has learned for a period of time, it can avoid locations where obstacles are frequently present when planning a path. Simulation experiments show that the algorithm achieves good results for global path planning in a dynamic environment.

  10. Hidden Markov latent variable models with multivariate longitudinal data.

    Science.gov (United States)

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.

  11. Simulation of daily rainfall through markov chain modeling

    International Nuclear Information System (INIS)

    Sadiq, N.

    2015-01-01

    In an agricultural country, the inhabitants of dry-land cultivated areas rely mainly on daily rainfall to water their fields. A stochastic model based on a first-order Markov chain was developed to simulate daily rainfall data for Multan, D. I. Khan, Nawabshah, Chilas and Barkhan for the period 1981-2010. Transition probability matrices of the first-order Markov chain were used to generate daily rainfall occurrence, while a gamma distribution was used to generate daily rainfall amounts. To obtain the parameter values for the cities mentioned, the method of moments was used to estimate the shape and scale parameters, which lead to synthetic sequence generation from the gamma distribution. In this study, unconditional and conditional probabilities of wet and dry days, together with means and standard deviations, are considered the essential parameters for the stochastic generation of daily rainfall. The synthetic rainfall series was found to agree well with the actual observed rainfall series. (author)
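The two-part generator described above, a first-order chain for wet/dry occurrence plus gamma-distributed wet-day amounts, can be sketched as follows. The transition probabilities and gamma parameters are illustrative placeholders, not the method-of-moments estimates for the five stations.

```python
import random

# Sketch of a daily rainfall generator: a first-order Markov chain decides
# wet/dry occurrence, and a gamma distribution draws the wet-day amount.
# All parameter values below are hypothetical.
def simulate_rainfall(n_days, p_wd, p_ww, shape, scale, rng):
    """p_wd: P(wet | yesterday dry); p_ww: P(wet | yesterday wet)."""
    series, wet = [], False
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rng = random.Random(42)
rain = simulate_rainfall(365, p_wd=0.15, p_ww=0.55,
                         shape=0.8, scale=6.0, rng=rng)  # mm per day
```

In the study itself, the four probabilities and the gamma shape/scale would be fitted per station from the 1981-2010 record before generating synthetic sequences.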

  12. Extreme event statistics in a drifting Markov chain

    Science.gov (United States)

    Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur

    2017-07-01

    We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
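The record statistics studied above have a simple discrete-time analogue: counting new maxima of a random walk with and without drift. A hedged sketch, in which the Gaussian step distribution and the drift value are illustrative choices rather than the experiment's parameters:

```python
import random

# Sketch: record statistics of a drifting random walk, a discrete-time
# analogue of the drifting atomic traces above.
def count_records(steps, drift, rng):
    """Count the strict upper records (new maxima) of the walk."""
    x, best, records = 0.0, 0.0, 0
    for _ in range(steps):
        x += drift + rng.gauss(0.0, 1.0)
        if x > best:
            best = x
            records += 1
    return records

rng = random.Random(3)
with_drift = count_records(10_000, drift=0.05, rng=rng)
no_drift = count_records(10_000, drift=0.0, rng=rng)
# Even a small drift markedly inflates the number of records,
# consistent with the sensitivity to drift reported above.
```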

  13. Entropies from Markov Models as Complexity Measures of Embedded Attractors

    Directory of Open Access Journals (Sweden)

    Julián D. Arias-Londoño

    2015-06-01

    Full Text Available This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with a quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor along time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method proposes the use of an unsupervised algorithm to find the principal curve, which is considered as the “profile trajectory”, that will serve to adjust the Markov model. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison purposes.

  14. Effects of linear and nonlinear time-delayed feedback on the noise-enhanced stability phenomenon in a periodically driven bistable system

    International Nuclear Information System (INIS)

    Jia, Zheng-Lin; Mei, Dong-Cheng

    2011-01-01

    We investigate numerically the effects of time delay on the phenomenon of noise-enhanced stability (NES) in a periodically modulated bistable system. Three types of time-delayed feedback, including linear delayed feedback, nonlinear delayed feedback and global delayed feedback, are considered. We find a non-monotonic behaviour of the mean first-passage time (MFPT) as a function of the delay time τ, with a maximum in the case of linear delayed feedback and with a minimum in the case of nonlinear delayed feedback. There are two peculiar values of τ around which the NES phenomenon is enhanced or weakened. For the case of global delayed feedback, the increase of τ always weakens the NES phenomenon. Moreover, we also show that the amplitude A and the frequency Ω of the periodic forcing play opposite roles in the NES phenomenon, i.e. the increase of A weakens the NES effect while the increase of Ω enhances it. These observations demonstrate that time-delayed feedback can be used as a feasible control scheme for the NES phenomenon

  15. Environmentally Driven Increases in Type 2 Diabetes and Obesity in Pima Indians and Non-Pimas in Mexico Over a 15-Year Period: The Maycoba Project.

    Science.gov (United States)

    Esparza-Romero, Julian; Valencia, Mauro E; Urquidez-Romero, Rene; Chaudhari, Lisa S; Hanson, Robert L; Knowler, William C; Ravussin, Eric; Bennett, Peter H; Schulz, Leslie O

    2015-11-01

    The global epidemics of type 2 diabetes and obesity have been attributed to the interaction between lifestyle changes and genetic predisposition to these diseases. We compared the prevalences of type 2 diabetes and obesity in Mexican Pima Indians, presumed to have a high genetic predisposition to these diseases, to those in their non-Pima neighbors, both of whom over a 15-year period experienced a transition from a traditional to a more modern lifestyle. Prevalence of diabetes, impaired fasting glucose, impaired glucose tolerance, and obesity in Mexican Pimas (n = 359) and non-Pima Mexicans (n = 251) were determined in 2010 using methods identical to those used in 1995. During this 15-year period, age-adjusted diabetes prevalence was unchanged in Pima men (5.8% in 1995 vs. 6.1% in 2010) yet increased in non-Pima men from 0.0 to 8.6%. Prevalence of obesity increased significantly in all groups (6.6 vs. 15.7% in Pima men; 8.5 vs. 20.5% in non-Pima men; 18.9 vs. 36.3% in Pima women; 29.5 vs. 42.9% in non-Pima women). Type 2 diabetes prevalence increased between 1995 and 2010 in non-Pima men, and to a lesser degree in women of both groups, but it did not increase in Pima men. Prevalence of obesity increased among Pimas and non-Pimas of both sexes. These changes occurred concomitantly with an environmental transition from a traditional to a more modernized lifestyle. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  16. Emergence of periodic order in electric-field-driven planar nematic liquid crystals: An exclusive ac effect absent in static fields

    Science.gov (United States)

    Krishnamurthy, K. S.; Kumar, Pramoda

    2007-11-01

    We report, for a nematic liquid crystal with a low conductivity anisotropy, an ac-field-generated transition from a uniformly planar to a periodically modulated director configuration with the wave vector parallel to the initial director. Significantly, with unblocked electrodes, this instability is not excited by dc fields. Additionally, in very low frequency square wave fields, it occurs transiently after each polarity reversal, vanishing completely during field constancy. The time of occurrence of maximum distortion after polarity reversal decreases exponentially with voltage. The time dependence of optical phase change during transient distortion is nearly Gaussian. The pattern threshold Vc is linear in f, with f denoting the frequency; the critical wave number qc of the modulation scales nearly linearly with f to a peak at ~50 Hz before falling slightly thereafter. The observed Vc(f) and qc(f) characteristics differ from the predictions of the standard model (SM). The instability may be interpreted as a special case of the Carr-Helfrich distortion suppressed in static fields due to weak charge focusing and strong charge injection. Its transient nature in the low frequency regime is suggestive of the possible role of the gradient flexoelectric effect in its occurrence. The study includes measurement of certain elastic and viscosity parameters relevant to the application of the SM.

  17. Nuclear security assessment with Markov model approach

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Terao, Norichika

    2013-01-01

    Nuclear security risk assessment with the Markov model based on random event is performed to explore evaluation methodology for physical protection in nuclear facilities. Because the security incidences are initiated by malicious and intentional acts, expert judgment and Bayes updating are used to estimate scenario and initiation likelihood, and it is assumed that the Markov model derived from stochastic process can be applied to incidence sequence. Both an unauthorized intrusion as Design Based Threat (DBT) and a stand-off attack as beyond-DBT are assumed to hypothetical facilities, and performance of physical protection and mitigation and minimization of consequence are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between facility operator and security authority is important to respond to the beyond-DBT incidence. (author)

  18. MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2014-05-01

    Full Text Available The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and Genetic Algorithm concepts, that considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model in which liquidity is based on a Markov chain. The non-linear model was tested using Genetic Algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for developing an efficient and realistic financial portfolio, as it considers several attributes such as risk, return and liquidity.

  19. An interlacing theorem for reversible Markov chains

    International Nuclear Information System (INIS)

    Grone, Robert; Salamon, Peter; Hoffmann, Karl Heinz

    2008-01-01

    Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)
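
    The lumping operation discussed above can be illustrated numerically. Below is a minimal sketch with a toy reversible chain (the weights are invented for illustration, not taken from the paper): four microstates are lumped into two mesoscopic states using stationary-probability weights, and the two spectra are compared.

```python
import numpy as np

# Reversible 4-state chain built from a symmetric weight matrix,
# so detailed balance holds by construction.
W = np.array([[0., 2., 1., 0.],
              [2., 0., 1., 1.],
              [1., 1., 0., 2.],
              [0., 1., 2., 0.]])
P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
pi = W.sum(axis=1) / W.sum()           # stationary distribution

# Lump microstates {0,1} and {2,3} into two mesoscopic states,
# weighting each microstate by its stationary probability.
blocks = [[0, 1], [2, 3]]
Q = np.zeros((2, 2))
for a, A in enumerate(blocks):
    wA = pi[A] / pi[A].sum()
    for b, B in enumerate(blocks):
        Q[a, b] = wA @ P[np.ix_(A, B)].sum(axis=1)

ev_full = np.sort(np.linalg.eigvals(P).real)
ev_lump = np.sort(np.linalg.eigvals(Q).real)
print(ev_full, ev_lump)
```

    Because the lumped matrix is a compression of the (symmetrizable) original, its eigenvalues lie within the original spectrum, which is the kind of bound the interlacing theorem makes precise.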

  20. An interlacing theorem for reversible Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Grone, Robert; Salamon, Peter [Department of Mathematics and Statistics, San Diego State University, San Diego, CA 92182-7720 (United States); Hoffmann, Karl Heinz [Institut fuer Physik, Technische Universitaet Chemnitz, D-09107 Chemnitz (Germany)

    2008-05-30

    Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)

  1. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    Science.gov (United States)

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time-invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects (such as mutations in evolutionary dynamics and a random exploration of choices in social systems), including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
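
    As a small concrete instance of the object being approximated, the stationary distribution of a discrete Markov process with time-invariant transitions can be computed directly for a toy 3-state chain (numbers invented for illustration):

```python
import numpy as np

# Stationary distribution of a small discrete-time Markov chain:
# solve pi P = pi with sum(pi) = 1 via the leading left eigenvector.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))   # eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi /= pi.sum()                      # normalize to a probability vector

assert np.allclose(pi @ P, pi)
print(pi)                           # -> [0.6, 0.3, 0.1]
```

    For large state spaces this direct eigendecomposition is exactly what becomes prohibitive, motivating the hierarchy of approximations the paper develops.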

  2. Exact solution of the hidden Markov processes

    Science.gov (United States)

    Saakian, David B.

    2017-11-01

    We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For the general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because the model can actually be considered as a generalized random walk on a one-dimensional strip. While we give the solution for two second-order matrices, our solution can easily be generalized to the case of L values of the Markov process and M values of the observables: one should then solve a system of L functional equations in a space of dimension M - 1.
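
    A hedged sketch of the quantity involved: for a binary HMP defined by two 2x2 matrices (toy parameters, not the paper's exact solution), the scaled forward recursion gives -log P(obs)/n, a Monte Carlo estimate of the entropy rate that the paper computes exactly.

```python
import numpy as np

# Binary hidden Markov process: two hidden states, two observables
# (the L = M = 2 case discussed above, with invented parameters).
A = np.array([[0.8, 0.2],    # hidden-state transition matrix
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],    # emission probabilities B[state, symbol]
              [0.2, 0.8]])
start = np.array([0.5, 0.5])

def log_likelihood(obs):
    """log P(obs) via the scaled forward recursion."""
    alpha = start * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

# Simulate a trajectory, then estimate the entropy rate from it.
rng = np.random.default_rng(0)
s, obs = 0, []
for _ in range(5000):
    obs.append(rng.choice(2, p=B[s]))
    s = rng.choice(2, p=A[s])

entropy_rate = -log_likelihood(obs) / len(obs)   # nats per symbol
print(entropy_rate)
```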

  3. Handbook of Markov chain Monte Carlo

    CERN Document Server

    Brooks, Steve

    2011-01-01

    ""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  4. Second Order Optimality in Markov Decision Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2017-01-01

    Roč. 53, č. 6 (2017), s. 1086-1099 ISSN 0023-5954 R&D Projects: GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : Markov decision chains * second order optimality * optimalilty conditions for transient, discounted and average models * policy and value iterations Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/E/sladky-0485146.pdf

  5. Dynamical fluctuations for semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel; Wynants, B.

    2009-01-01

    Roč. 42, č. 36 (2009), 365002/1-365002/21 ISSN 1751-8113 R&D Projects: GA ČR GC202/07/J051 Institutional research plan: CEZ:AV0Z10100520 Keywords : nonequilibrium fluctuations * semi-Markov processes Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.577, year: 2009 http://www.iop.org/EJ/abstract/1751-8121/42/36/365002

  6. Analysis of a quantum Markov chain

    International Nuclear Information System (INIS)

    Marbeau, J.; Gudder, S.

    1990-01-01

    A quantum chain is analogous to a classical stationary Markov chain except that the probability measure is replaced by a complex amplitude measure and the transition probability matrix is replaced by a transition amplitude matrix. After considering the general situation, we study a particular example of a quantum chain whose transition amplitude matrix has the form of a Dirichlet matrix. Such matrices generate a discrete analog of the usual continuum Feynman amplitude. We then compute the probability distribution for these quantum chains.

  7. Modelling of cyclical stratigraphy using Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Kulatilake, P.H.S.W.

    1987-07-01

    State-of-the-art on modelling of cyclical stratigraphy using first-order Markov chains is reviewed. Shortcomings of the presently available procedures are identified. A procedure which eliminates all the identified shortcomings is presented. Required statistical tests to perform this modelling are given in detail. An example (the Oficina formation in eastern Venezuela) is given to illustrate the presented procedure. 12 refs., 3 tabs. 1 fig.
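
    The core of first-order Markov modelling of a stratigraphic succession is the maximum-likelihood transition matrix estimated from observed bed-to-bed transitions. A sketch with an invented facies sequence (not the Oficina data):

```python
import numpy as np

# Hypothetical vertical succession of three facies types A, B, C.
sequence = list("ABCABCABABCABCBCABC")
states = sorted(set(sequence))
idx = {s: i for i, s in enumerate(states)}

# Count upward transitions between consecutive beds.
counts = np.zeros((len(states), len(states)))
for a, b in zip(sequence, sequence[1:]):
    counts[idx[a], idx[b]] += 1

# Row-normalized counts give the maximum-likelihood transition matrix.
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(states)
print(P_hat)
```

    In practice the statistical tests the review describes (e.g. against an independence null) would be applied to these counts before accepting a Markov cyclicity interpretation.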

  8. Markov Chains For Testing Redundant Software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  9. Operational Markov Condition for Quantum Processes

    Science.gov (United States)

    Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan

    2018-01-01

    We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process or the experimental falsifiability of a Markovian hypothesis.

  10. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  11. Temperature scaling method for Markov chains.

    Science.gov (United States)

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.

  12. Analysis on the Spatial-Temporal Dynamics of Financial Agglomeration with Markov Chain Approach in China

    Directory of Open Access Journals (Sweden)

    Weimin Chen

    2014-01-01

    The standard approach to studying financial industrial agglomeration is to construct measures of the degree of agglomeration within the financial industry. But such measures often fail to capture the convergence or divergence of financial agglomeration. In this paper, we apply a Markov chain approach to diagnose the convergence of financial agglomeration in China based on the location quotient coefficients across provincial regions over 1993–2011. The estimation of the Markov transition probability matrix offers more detailed insights into the mechanics of the financial agglomeration evolution process in China during the research period. The results show that the spatial evolution of financial agglomeration changed faster in 2003–2011 than in 1993–2002. Furthermore, although financial development patterns remain very uneven, there is regional convergence of financial agglomeration in China.

  13. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov Chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot readily be applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion on both structured and unstructured networks.

  14. Constructing Dynamic Event Trees from Markov Models

    International Nuclear Information System (INIS)

    Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood

    2006-01-01

    In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: (a) partitioning the process variable state space into magnitude intervals (cells), (b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and, (c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as benchmark in the literature with one process variable (liquid level in a tank), and three control units: a drain unit and two supply units. 
Each unit includes a separate level sensor to observe the liquid level in the tank.
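
    The path-search step described above can be sketched as a depth-limited enumeration over a toy discrete-time Markov chain (states and probabilities invented for illustration; a real PRA model would use the cell-to-cell mapping probabilities):

```python
import numpy as np

# Toy 4-state chain: 0 = nominal, 1-2 = degraded, 3 = absorbing failure.
P = np.array([[0.90, 0.08, 0.00, 0.02],
              [0.10, 0.80, 0.08, 0.02],
              [0.00, 0.10, 0.80, 0.10],
              [0.00, 0.00, 0.00, 1.00]])

def failure_paths(start, fail, depth, p_min=1e-4):
    """Enumerate paths from `start` that first reach `fail` within
    `depth` transitions, pruning branches below probability `p_min`."""
    stack, found = [([start], 1.0)], []
    while stack:
        path, p = stack.pop()
        state = path[-1]
        if state == fail:
            found.append((path, p))
            continue
        if len(path) - 1 >= depth:        # all transitions used up
            continue
        for nxt, pr in enumerate(P[state]):
            if p * pr >= p_min:           # prune negligible branches
                stack.append((path + [nxt], p * pr))
    return found

scenarios = failure_paths(start=0, fail=3, depth=4)
total = sum(p for _, p in scenarios)
print(len(scenarios), round(total, 4))
```

    Each returned path is one branch of a dynamic event tree, with its probability attached; the pruning threshold plays the role of a scenario cut-off.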

  15. LP Model for Periodic Recruitment and Retrenchment of Manpower ...

    African Journals Online (AJOL)


    The system also allows a periodic recruitment and retrenchment for a finite time interval. In addition to the ... manpower planning models which are based on Markov chain models. .... Moreover fractional values are approximated to be integers ...

  16. Markov chains and semi-Markov models in time-to-event analysis.

    Science.gov (United States)

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
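
    For an absorbing multi-state model of the kind described, quantities such as the mean time to event follow from the fundamental matrix. A sketch with an invented three-state illness-death chain:

```python
import numpy as np

# States: 0 = healthy, 1 = ill (transient); 2 = dead (absorbing).
P = np.array([[0.92, 0.06, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])

Q = P[:2, :2]                      # transient-to-transient block
R = P[:2, 2:]                      # transient-to-absorbing block
N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix N = (I - Q)^-1

expected_steps = N.sum(axis=1)     # mean time to absorption per start state
absorb_prob = (N @ R).ravel()      # absorption probability (1 here: no censoring)
print(expected_steps, absorb_prob)
```

    With this parameterization the chain accommodates non-constant per-cycle survival probabilities simply by letting P vary by cycle, which is one of the flexibilities the abstract highlights.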

  17. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    Science.gov (United States)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with potentially a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
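
    What distinguishes a semi-Markov process from a plain Markov chain is the state-dependent, generally non-exponential sojourn time. A Monte Carlo sketch of a toy three-state semi-Markov reliability model (all distributions and numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Embedded jump chain: 0 and 1 are operating states, 2 is absorbing failure.
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])

# State-dependent holding-time samplers (non-exponential on purpose).
sojourn = [lambda: rng.weibull(2.0) * 10.0,   # state 0: Weibull wear-out
           lambda: rng.gamma(3.0, 2.0),       # state 1: gamma repair/degraded
           None]                              # state 2: absorbing

def time_to_failure():
    s, t = 0, 0.0
    while s != 2:
        t += sojourn[s]()                 # accumulate sojourn time
        s = rng.choice(3, p=P[s])         # then jump via the embedded chain
    return t

ttf = np.array([time_to_failure() for _ in range(2000)])
print(ttf.mean())
```

    Here the end state carries a time stamp, which is exactly what lets time-dependent end states replace static probability-derived ones in an event tree.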

  18. A Markov Process Inspired Cellular Automata Model of Road Traffic

    OpenAIRE

    Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng

    2008-01-01

    To provide a more accurate description of the driving behaviors in vehicle queues, a model named the Markov-Gap cellular automata model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process gives the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize i...

  19. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains (1990)) and E. Gassiat and S. Boucheron (Optimal error exp...

  20. Fraud Detection Using the Hidden Markov Model Method in Business Processes

    Directory of Open Access Journals (Sweden)

    Andrean Hutama Koosasi

    2017-03-01

    The Hidden Markov Model is a statistical method based on the simple Markov model that models a system by dividing it into two kinds of states: hidden states and observation states. In this work, the author proposes using the Hidden Markov Model method to detect fraud in the execution of a business process. With this method, observing the constituent elements of a case (a sequence of activities) yields a probability value that also provides a prediction of whether or not the case is fraudulent. The experimental results show that the proposed method can deliver final predictions with a TPR of 87.5% and a TNR of 99.4%.

  1. Alignment-free Transcriptomic and Metatranscriptomic Comparison Using Sequencing Signatures with Variable Length Markov Chains.

    Science.gov (United States)

    Liao, Weinan; Ren, Jie; Wang, Kun; Wang, Shun; Zeng, Feng; Wang, Ying; Sun, Fengzhu

    2016-11-23

    The comparison between microbial sequencing data is critical to understand the dynamics of microbial communities. The alignment-based tools analyzing metagenomic datasets require reference sequences and read alignments. The available alignment-free dissimilarity approaches model the background sequences with Fixed Order Markov Chain (FOMC) yielding promising results for the comparison of microbial communities. However, in FOMC, the number of parameters grows exponentially with the increase of the order of Markov Chain (MC). Under a fixed high order of MC, the parameters might not be accurately estimated owing to the limitation of sequencing depth. In our study, we investigate an alternative to FOMC to model background sequences with the data-driven Variable Length Markov Chain (VLMC) in metatranscriptomic data. The VLMC originally designed for long sequences was extended to apply to high-throughput sequencing reads and the strategies to estimate the corresponding parameters were developed. The flexible number of parameters in VLMC avoids estimating the vast number of parameters of high-order MC under limited sequencing depth. Different from the manual selection in FOMC, VLMC determines the MC order adaptively. Several beta diversity measures based on VLMC were applied to compare the bacterial RNA-Seq and metatranscriptomic datasets. Experiments show that VLMC outperforms FOMC to model the background sequences in transcriptomic and metatranscriptomic samples. A software pipeline is available at https://d2vlmc.codeplex.com.

  2. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
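
    As a first-order special case of the binary-chain idea (the paper's additive chains condition on a long history; this sketch keeps only one step), a two-state high/low wind process with tunable persistence reproduces a geometric autocorrelation function:

```python
import numpy as np

rng = np.random.default_rng(7)
p_stay = 0.95                    # probability of staying in the current state

# Simulate the two-state (0 = low wind, 1 = high wind) chain.
n = 200_000
x = np.empty(n, dtype=int)
x[0] = 0
flips = rng.random(n) > p_stay   # pre-drawn switch decisions
for t in range(1, n):
    x[t] = 1 - x[t - 1] if flips[t] else x[t - 1]

def autocorr(series, lag):
    s = series - series.mean()
    return (s[:-lag] * s[lag:]).mean() / (s * s).mean()

# For a symmetric two-state chain the ACF is (2*p_stay - 1)**lag.
for lag in (1, 5, 10):
    print(lag, autocorr(x, lag), (2 * p_stay - 1) ** lag)
```

    The additive-chain construction in the paper generalizes this by fitting the memory kernel so that an arbitrary empirical autocorrelation function, not just a geometric one, is reproduced.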

  3. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    Science.gov (United States)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.

  4. How Complex, Probable, and Predictable is Genetically Driven Red Queen Chaos?

    Science.gov (United States)

    Duarte, Jorge; Rodrigues, Carla; Januário, Cristina; Martins, Nuno; Sardanyés, Josep

    2015-12-01

    Coevolution between two antagonistic species has been widely studied theoretically for both ecologically- and genetically-driven Red Queen dynamics. A typical outcome of these systems is an oscillatory behavior causing an endless series of one species' adaptation and the other's counter-adaptation. More recently, a mathematical model combining a three-species food chain system with an adaptive dynamics approach revealed genetically driven chaotic Red Queen coevolution. In the present article, we analyze this mathematical model, mainly focusing on the impact of the species' rates of evolution (mutation rates) on the dynamics. Firstly, we analytically prove the boundedness of the trajectories of the chaotic attractor. The complexity of the coupling between the dynamical variables is quantified using observability indices. Using symbolic dynamics theory, we quantify the complexity of genetically driven Red Queen chaos by computing the topological entropy of existing one-dimensional iterated maps using Markov partitions. Codimension-two bifurcation diagrams are also built from the period ordering of the orbits of the maps. Then, we study the predictability of the Red Queen chaos, found in narrow regions of mutation rates. To extend the previous analyses, we also computed the likelihood of finding chaos in a given region of the parameter space while varying other model parameters simultaneously. Such analyses allowed us to compute a mean predictability measure for the system in the explored region of the parameter space. We found that genetically driven Red Queen chaos, although restricted to small regions of the analyzed parameter space, might be highly unpredictable.
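
    The topological-entropy-from-Markov-partitions step has a compact classical illustration (the golden-mean shift, not the paper's food-chain maps): for a subshift of finite type with transition matrix A, the topological entropy is the log of the spectral radius of A.

```python
import numpy as np

# Golden-mean shift: binary sequences in which "1" may not follow "1".
# Allowed transitions between partition elements as an adjacency matrix.
A = np.array([[1, 1],
              [1, 0]])

# h_top = log(spectral radius of A) = log of the golden ratio.
h_top = np.log(max(abs(np.linalg.eigvals(A))))
print(h_top)   # -> log((1 + sqrt(5))/2) ≈ 0.4812
```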

  5. Markov Chain Analysis of Musical Dice Games

    Science.gov (United States)

    Volchenkov, D.; Dawin, J. R.

    2012-07-01

    A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and characterize a composer.
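
    Mean first passage times to a target note can be computed from the transition matrix by deleting the target state and solving a linear system. A sketch with an invented 3-note matrix (not one of the encoded compositions):

```python
import numpy as np

# Toy 3-note transition matrix (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
target = 2

# With the target removed, mean hitting times m solve (I - Q) m = 1.
keep = [i for i in range(3) if i != target]
Q = P[np.ix_(keep, keep)]
m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
print(dict(zip(keep, m)))   # mean steps to first reach the target note
```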

  6. Pruning Boltzmann networks and hidden Markov models

    DEFF Research Database (Denmark)

    Pedersen, Morten With; Stork, D.

    1996-01-01

    We present sensitivity-based pruning algorithms for general Boltzmann networks. Central to our methods is the efficient calculation of a second-order approximation to the true weight saliencies in a cross-entropy error. Building upon previous work which shows a formal correspondence between linear...... Boltzmann chains and hidden Markov models (HMMs), we argue that our method can be applied to HMMs as well. We illustrate pruning on Boltzmann zippers, which are equivalent to two HMMs with cross-connection links. We verify that our second-order approximation preserves the rank ordering of weight saliencies...

  7. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
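
    A minimal simulation of the Gilbert-Elliott channel model mentioned above (parameters invented for illustration): a two-state Markov chain modulates the bit error rate, producing the bursty errors that the joint decoder has to track.

```python
import numpy as np

rng = np.random.default_rng(3)
p_gb, p_bg = 0.01, 0.10       # good->bad and bad->good transition probabilities
ber = {0: 0.001, 1: 0.20}     # per-state bit error rates (0 = good, 1 = bad)

def transmit(bits):
    """Flip each bit with the error rate of the current channel state."""
    state = 0
    errors = np.zeros(len(bits), dtype=int)
    for i in range(len(bits)):
        errors[i] = rng.random() < ber[state]
        switch = p_gb if state == 0 else p_bg
        state = 1 - state if rng.random() < switch else state
    return bits ^ errors

n = 100_000
rx = transmit(np.zeros(n, dtype=int))

# Stationary P(bad) = p_gb / (p_gb + p_bg) gives the expected overall BER.
expected = (p_bg * ber[0] + p_gb * ber[1]) / (p_gb + p_bg)
print(rx.mean(), expected)
```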

  8. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  9. Evolving the structure of hidden Markov Models

    DEFF Research Database (Denmark)

    won, K. J.; Prugel-Bennett, A.; Krogh, A.

    2006-01-01

    A genetic algorithm (GA) is proposed for finding the structure of hidden Markov Models (HMMs) used for biological sequence analysis. The GA is designed to preserve biologically meaningful building blocks. The search through the space of HMM structures is combined with optimization of the emission and transition probabilities using the classic Baum-Welch algorithm. The system is tested on the problem of finding the promoter and coding region of C. jejuni. The resulting HMM has a superior discrimination ability to a handcrafted model that has been published in the literature.

  10. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
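
The basic estimator this record builds on can be sketched in a few lines. This is a plain self-normalized importance sampler with iid draws (a toy target N(0,1) and proposal N(0,2) are assumed for illustration), not the regenerative multiple-chain estimator of the paper:

```python
import math
import random

def importance_sampling_mean(f, n=100000, seed=0):
    """Self-normalized importance sampling: estimate E_pi[f] for the
    target pi = N(0, 1) using iid draws from the wider proposal
    pi1 = N(0, 2). Normalizing constants cancel in the weight ratio."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 2.0)           # draw from the proposal pi1
        # pi(x)/pi1(x) up to a constant: exp(-x^2/2) / exp(-x^2/8)
        w = math.exp(-3.0 * x * x / 8.0)
        num += w * f(x)
        den += w
    return num / den

est = importance_sampling_mean(lambda x: x * x)
print(est)  # close to E[X^2] = 1 under N(0, 1)
```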

  11. ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS

    Directory of Open Access Journals (Sweden)

    Mohammad Mirabi

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this research, a Markov analysis of acceptance sampling plans in a single stage and in two stages is proposed, based on the quality of the items inspected. At each stage of this policy, if the number of defective items in a sample of inspected items is more than the upper threshold, the batch is rejected. However, the batch is accepted if the number of defective items is less than the lower threshold. When the number of defective items falls between the upper and lower thresholds, the decision-making process continues: further samples are collected and inspected. The primary objective is to determine the optimal values of the upper and lower thresholds using a Markov process to minimise the total cost associated with a batch acceptance policy. A solution method is presented, along with a numerical demonstration of the application of the proposed methodology.
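
The sequential accept/reject/continue policy described above is easy to evaluate by direct simulation; a sketch follows, in which the thresholds, sample size, and defect rate are hypothetical (the paper itself computes this, and the optimal thresholds, analytically via a Markov process):

```python
import random

def acceptance_prob(p_defect, lower, upper, sample_size,
                    max_stages=50, n=20000, seed=7):
    """Monte Carlo estimate of the batch-acceptance probability under a
    sequential plan: accept when defectives < lower, reject when > upper,
    otherwise draw another sample. Illustrative sketch only."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n):
        for _ in range(max_stages):
            defects = sum(rng.random() < p_defect for _ in range(sample_size))
            if defects < lower:          # accept the batch
                accepted += 1
                break
            if defects > upper:          # reject the batch
                break
            # lower <= defects <= upper: inconclusive, sample again
    return accepted / n

print(acceptance_prob(0.05, lower=2, upper=4, sample_size=40))
```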

  12. Vulnerability of networks of interacting Markov chains.

    Science.gov (United States)

    Kocarev, L; Zlatanov, N; Trajanov, D

    2010-05-13

    The concept of vulnerability is introduced for a model of random, dynamical interactions on networks. In this model, known as the influence model, the nodes are arranged in an arbitrary network, while the evolution of the status at a node is according to an internal Markov chain, but with transition probabilities that depend not only on the current status of that node but also on the statuses of the neighbouring nodes. Vulnerability is treated analytically and numerically for several networks with different topological structures, as well as for two real networks--the network of infrastructures and the EU power grid--identifying the most vulnerable nodes of these networks.
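
A toy version of such status dynamics, where each node evolves by an internal chain whose transition probabilities depend on the statuses of its neighbours, can be sketched as follows; the update rule and all probabilities are illustrative assumptions, not the influence model of the paper:

```python
import random

def influence_step(status, adj, p_fail=0.1, p_recover=0.3, alpha=0.5, rng=None):
    """One synchronous update of a toy influence-style network: a node's
    failure probability grows with the fraction of failed neighbours."""
    rng = rng or random.Random()
    new = {}
    for v, nbrs in adj.items():
        frac_failed = sum(status[u] for u in nbrs) / len(nbrs) if nbrs else 0.0
        if status[v] == 0:  # healthy: may fail, pushed by failed neighbours
            new[v] = 1 if rng.random() < p_fail + alpha * frac_failed else 0
        else:               # failed: may recover
            new[v] = 0 if rng.random() < p_recover else 1
    return new

# ring of 5 nodes, one initial failure
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
status = {i: 1 if i == 0 else 0 for i in range(5)}
rng = random.Random(42)
for _ in range(10):
    status = influence_step(status, adj, rng=rng)
print(status)
```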

  13. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises when experts assign probability values for an HMM: they use only some limited inputs, so the assigned probability values might not be accurate enough to serve other cases in the same domain. We introduce a GA-based approach to find suitable probability values so that the HMM is correct in more cases than those originally used to assign the probability values.

  14. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    Science.gov (United States)

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.

  15. Epitope discovery with phylogenetic hidden Markov models.

    LENUS (Irish Health Repository)

    Lacerda, Miguel

    2010-05-01

    Existing methods for the prediction of immunologically active T-cell epitopes are based on the amino acid sequence or structure of pathogen proteins. Additional information regarding the locations of epitopes may be acquired by considering the evolution of viruses in hosts with different immune backgrounds. In particular, immune-dependent evolutionary patterns at sites within or near T-cell epitopes can be used to enhance epitope identification. We have developed a mutation-selection model of T-cell epitope evolution that allows the human leukocyte antigen (HLA) genotype of the host to influence the evolutionary process. This is one of the first examples of the incorporation of environmental parameters into a phylogenetic model and has many other potential applications where the selection pressures exerted on an organism can be related directly to environmental factors. We combine this novel evolutionary model with a hidden Markov model to identify contiguous amino acid positions that appear to evolve under immune pressure in the presence of specific host immune alleles and that therefore represent potential epitopes. This phylogenetic hidden Markov model provides a rigorous probabilistic framework that can be combined with sequence or structural information to improve epitope prediction. As a demonstration, we apply the model to a data set of HIV-1 protein-coding sequences and host HLA genotypes.

  16. Neyman, Markov processes and survival analysis.

    Science.gov (United States)

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.

  17. Unmixing hyperspectral images using Markov random fields

    International Nuclear Information System (INIS)

    Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2011-01-01

    This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled thanks to Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data are presented to illustrate the performance of the proposed algorithm.

  18. Markov transitions and the propagation of chaos

    International Nuclear Information System (INIS)

    Gottlieb, A.

    1998-01-01

    The propagation of chaos is a central concept of kinetic theory that serves to relate the equations of Boltzmann and Vlasov to the dynamics of many-particle systems. Propagation of chaos means that molecular chaos, i.e., the stochastic independence of two random particles in a many-particle system, persists in time as the number of particles tends to infinity. We establish a necessary and sufficient condition for a family of general n-particle Markov processes to propagate chaos. This condition is expressed in terms of the Markov transition functions associated to the n-particle processes, and it amounts to saying that chaos of random initial states propagates if it propagates for pure initial states. Our proof of this result relies on the weak convergence approach to the study of chaos due to Sznitman and Tanaka. We assume that the space in which the particles live is homeomorphic to a complete and separable metric space so that we may invoke Prohorov's theorem in our proof. We also show that, if the particles can be in only finitely many states, then molecular chaos implies that the specific entropies in the n-particle distributions converge to the entropy of the limiting single-particle distribution.

  19. Asymptotic evolution of quantum Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Novotny, Jaroslav [FNSPE, CTU in Prague, 115 19 Praha 1 - Stare Mesto (Czech Republic); Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)

    2012-07-01

    Iterated quantum operations, so-called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium, or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks, or different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate that there is always a (typically low-dimensional) vector subspace of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors, which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points and the asymptotic evolution of random quantum operations.

  20. Monotone measures of ergodicity for Markov chains

    Directory of Open Access Journals (Sweden)

    J. Keilson

    1998-01-01

    Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity, by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.
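
One of the monotone measures of ergodicity in question is the total-variation distance to the stationary distribution, which is non-increasing under any stochastic matrix; a small numerical check (the 3-state chain below is an arbitrary example, not from the paper):

```python
def matvec(mu, P):
    """Left-multiply a distribution by a stochastic matrix: mu -> mu P."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

# stationary distribution by power iteration (the chain is ergodic)
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(2000):
    pi = matvec(pi, P)

mu = [1.0, 0.0, 0.0]  # start concentrated in state 0
tv = []
for _ in range(30):
    tv.append(0.5 * sum(abs(m - p) for m, p in zip(mu, pi)))
    mu = matvec(mu, P)

# total-variation distance to stationarity never increases
assert all(a >= b - 1e-12 for a, b in zip(tv, tv[1:]))
print(round(tv[0], 4), round(tv[-1], 6))
```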

  1. Pathwise duals of monotone and additive Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    -, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf

  2. An introduction to hidden Markov models for biological sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1998-01-01

    A non-mathematical tutorial on hidden Markov models (HMMs), plus a description of one of the applications of HMMs: gene finding.
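
As a companion to such a tutorial, the forward algorithm, which computes the likelihood of an observation sequence under a discrete HMM, fits in a few lines; the two-state model below is a toy example, not taken from the tutorial:

```python
def forward(obs, A, B, init):
    """Forward algorithm: P(observation sequence) under a discrete HMM.
    A[i][j] = transition prob i -> j, B[i][k] = prob of emitting symbol k
    in state i, init[i] = initial state prob."""
    alpha = [init[i] * B[i][obs[0]] for i in range(len(init))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(alpha))) * B[j][o]
                 for j in range(len(alpha))]
    return sum(alpha)

# toy HMM: two hidden states, two observable symbols
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
init = [0.5, 0.5]
print(forward([0, 1, 0], A, B, init))
```

As a sanity check, the likelihoods of all length-1 sequences sum to one: forward([0], ...) + forward([1], ...) = 1.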

  3. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  4. Efficient Incorporation of Markov Random Fields in Change Detection

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Nielsen, Allan Aasbjerg; Carstensen, Jens Michael

    2009-01-01

    of noise, implying that the pixel-wise classifier is also noisy. There is thus a need for incorporating local homogeneity constraints into such a change detection framework. For this modelling task Markov Random Fields are suitable. Markov Random Fields have, however, previously been plagued by lack...

  5. Markov trace on the Yokonuma-Hecke algebra

    International Nuclear Information System (INIS)

    Juyumaya, J.

    2002-11-01

    The objective of this note is to prove that there exists a Markov trace on the Yokonuma-Hecke algebra. A motivation to define a Markov trace is to get polynomial invariants for knots in the sense of Jones construction. (author)

  6. Compositionality for Markov reward chains with fast and silent transitions

    NARCIS (Netherlands)

    Markovski, J.; Sokolova, A.; Trcka, N.; Vink, de E.P.

    2009-01-01

    A parallel composition is defined for Markov reward chains with stochastic discontinuity, and with fast and silent transitions. In this setting, compositionality with respect to the relevant aggregation preorders is established. For Markov reward chains with fast transitions the preorders are

  7. Model Checking Markov Reward Models with Impulse Rewards

    NARCIS (Netherlands)

    Cloth, Lucia; Katoen, Joost-Pieter; Khattri, Maneesh; Pulungan, Reza; Bondavalli, Andrea; Haverkort, Boudewijn; Tang, Dong

    This paper considers model checking of Markov reward models (MRMs), continuous-time Markov chains with state rewards as well as impulse rewards. The reward extension of the logic CSL (Continuous Stochastic Logic) is interpreted over such MRMs, and two numerical algorithms are provided to check the

  8. Recursive smoothers for hidden discrete-time Markov chains

    Directory of Open Access Journals (Sweden)

    Lakhdar Aggoun

    2005-01-01

    Full Text Available We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation-maximization (EM) algorithm.

  9. First hitting probabilities for semi markov chains and estimation

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos

    2017-01-01

    We first consider a stochastic system described by an absorbing semi-Markov chain with finite state space and we introduce the absorption probability to a class of recurrent states. Afterwards, we study the first hitting probability to a subset of states for an irreducible semi-Markov chain...

  10. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

    In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs in which each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically and used for recognition

  11. A Markov decision model for optimising economic production lot size ...

    African Journals Online (AJOL)

    Adopting such a Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...

  12. Portfolio allocation under the vendor managed inventory: A Markov ...

    African Journals Online (AJOL)

    Portfolio allocation under the vendor managed inventory: A Markov decision process. ... Journal of Applied Sciences and Environmental Management ... This study provides a review of Markov decision processes and investigates its suitability for solutions to portfolio allocation problems under vendor managed inventory in ...

  13. Logics and Models for Stochastic Analysis Beyond Markov Chains

    DEFF Research Database (Denmark)

    Zeng, Kebin

    , because of the generality of ME distributions, we have to leave the world of Markov chains. To support ME distributions with multiple exits, we introduce a multi-exits ME distribution together with a process algebra MEME to express the systems having the semantics as Markov renewal processes with ME...

  14. Introduction to the numerical solutions of Markov chains

    CERN Document Server

    Stewart, Williams J

    1994-01-01

    A cornerstone of applied probability, Markov chains can be used to help model how plants grow, chemicals react, and atoms diffuse - and applications are increasingly being found in such areas as engineering, computer science, economics, and education. To apply the techniques to real problems, however, it is necessary to understand how Markov chains can be solved numerically. In this book, the first to offer a systematic and detailed treatment of the numerical solution of Markov chains, William Stewart provides scientists on many levels with the power to put this theory to use in the actual world, where it has applications in areas as diverse as engineering, economics, and education. His efforts make for essential reading in a rapidly growing field. Here, Stewart explores all aspects of numerically computing solutions of Markov chains, especially when the state is huge. He provides extensive background to both discrete-time and continuous-time Markov chains and examines many different numerical computing metho...

  15. The Effects of Oil Price Shocks on Turkish Business Cycle: A Markov Switching Approach

    Directory of Open Access Journals (Sweden)

    Vasif Abiyev

    2015-10-01

    Full Text Available Purpose - The purpose of this study is to investigate the relationship between oil price changes and output growth in Turkey. Design/methodology/approach - The data were taken from the International Financial Statistics database and consist of monthly observations for the period 1986:01-2014:09. Different univariate Markov-switching regime autoregressive models are specified and estimated. Among them we selected the univariate MSIH(3)-AR(2) model for output and extended it to verify whether the inclusion of various asymmetric oil price shocks as an exogenous variable improves the ability of the Markov switching model. Four different oil price shocks are considered. Findings - We find that among the various oil price shocks, only net oil price increases have negative effects on output growth and mitigate the magnitude of some recessionary periods in Turkey. However, they do not strongly explain the behavior of the business cycle in Turkey. Research limitations/implications - Our results suggest that the inclusion of other fundamental financial factors in a bivariate Markov switching model of aggregate economic activity and oil price changes becomes important to explicitly detect the negative impact of oil price shocks on output in Turkey. Originality/value - Our results support the existence of a negative relationship between oil price increases and output growth mentioned in the literature and in empirical studies on Turkey.

  16. Learning Markov Decision Processes for Model Checking

    DEFF Research Database (Denmark)

    Mao, Hua; Chen, Yingke; Jaeger, Manfred

    2012-01-01

    Constructing an accurate system model for formal model verification can be both resource demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend an algorithm for learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation...

  17. Learning Markov models for stationary system behaviors

    DEFF Research Database (Denmark)

    Chen, Yingke; Mao, Hua; Jaeger, Manfred

    2012-01-01

    Establishing an accurate model for formal verification of an existing hardware or software system is often a manual process that is both time consuming and resource demanding. In order to ease the model construction phase, methods have recently been proposed for automatically learning accurate system models from observed system behaviors. In many cases, however, the available data are limited to a single long observation sequence, and in these situations existing automatic learning methods cannot be applied. In this paper, we adapt algorithms for learning variable order Markov chains from a single observation sequence of a target system, so that stationary system properties can be verified using the learned model. Experiments demonstrate that system properties (formulated as stationary probabilities of LTL formulas) can be reliably identified using the learned model.

  18. Neuroevolution Mechanism for Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-12-01

    Full Text Available Hidden Markov Model (HMM) is a statistical model based on probabilities. HMM is becoming one of the major models involved in many applications such as natural language processing, handwritten recognition, image processing, prediction systems, and many more. In this research we are concerned with finding the best HMM for a certain application domain. We propose a neuroevolution process that is based first on converting the HMM to a neural network, then generating many neural networks at random, each representing an HMM. We proceed by applying genetic operators to obtain a new set of neural networks, each representing an HMM, and updating the population. Finally, we select the best neural network based on a fitness function.

  19. Improved hidden Markov model for nosocomial infections.

    Science.gov (United States)

    Khader, Karim; Leecaster, Molly; Greene, Tom; Samore, Matthew; Thomas, Alun

    2014-12-01

    We propose a novel hidden Markov model (HMM) for parameter estimation in hospital transmission models, and show that commonly made simplifying assumptions can lead to severe model misspecification and poor parameter estimates. A standard HMM that embodies two commonly made simplifying assumptions, namely a fixed patient count and binomially distributed detections is compared with a new alternative HMM that does not require these simplifying assumptions. Using simulated data, we demonstrate how each of the simplifying assumptions used by the standard model leads to model misspecification, whereas the alternative model results in accurate parameter estimates. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  20. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
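
A minimal way to obtain a reversible transition matrix from transition counts is to symmetrize the count matrix. The sketch below illustrates the detailed-balance property the abstract refers to; it is not the maximum likelihood algorithm of the paper, and the count matrix is made up:

```python
def reversible_from_counts(C):
    """Simple reversible transition-matrix estimator by symmetrizing the
    count matrix; the result satisfies detailed balance by construction
    (illustrative sketch, not a maximum likelihood estimator)."""
    n = len(C)
    S = [[C[i][j] + C[j][i] for j in range(n)] for i in range(n)]
    rows = [sum(r) for r in S]
    P = [[S[i][j] / rows[i] for j in range(n)] for i in range(n)]
    pi = [rows[i] / sum(rows) for i in range(n)]
    return P, pi

C = [[80, 15, 5],
     [10, 60, 30],
     [3, 25, 70]]
P, pi = reversible_from_counts(C)

# detailed balance: pi_i P_ij == pi_j P_ji for all pairs (i, j)
for i in range(3):
    for j in range(3):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
print(pi)
```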

  1. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
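
For orientation, an analog (non-variance-reduced) Monte Carlo estimate of unreliability for a toy two-component parallel system with Markov failure/repair dynamics might look as follows; the rates, horizon, and sample count are illustrative assumptions, and the paper's forced-transition and failure-biasing schemes are not implemented here:

```python
import random

def analog_unreliability(lam=0.01, mu=0.1, T=100.0, n=20000, seed=3):
    """Analog Monte Carlo estimate of the probability that a system of two
    identical parallel components (failure rate lam, repair rate mu) has
    both components down at some point before time T."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        t, up = 0.0, 2                       # start with both components up
        while t < T:
            # competing exponential events in the current state
            rate = up * lam + (2 - up) * mu
            t += rng.expovariate(rate)
            if t >= T:
                break
            if rng.random() < up * lam / rate:
                up -= 1                      # a component fails
                if up == 0:                  # system failure (absorbing here)
                    fails += 1
                    break
            else:
                up += 1                      # a component is repaired
    return fails / n

print(analog_unreliability())
```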

  2. Recombination Processes and Nonlinear Markov Chains.

    Science.gov (United States)

    Pirogov, Sergey; Rybko, Alexander; Kalinina, Anastasia; Gelfand, Mikhail

    2016-09-01

    Bacteria are known to exchange genetic information by horizontal gene transfer. Since the frequency of homologous recombination depends on the similarity between the recombining segments, several studies examined whether this could lead to the emergence of subspecies. Most of them simulated fixed-size Wright-Fisher populations, in which the genetic drift should be taken into account. Here, we use nonlinear Markov processes to describe a bacterial population evolving under mutation and recombination. We consider a population structure as a probability measure on the space of genomes. This approach implies the infinite population size limit, and thus, the genetic drift is not assumed. We prove that under these conditions, the emergence of subspecies is impossible.

  3. SHARP ENTRYWISE PERTURBATION BOUNDS FOR MARKOV CHAINS.

    Science.gov (United States)

    Thiede, Erik; VAN Koten, Brian; Weare, Jonathan

    For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We derive perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity. Our bounds are sharp; we do not impose any structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations.
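
    The variation in sensitivity can be probed numerically on a small example. This sketch (an illustration, not the paper's bounds) perturbs entries of a nearly uncoupled chain and compares finite-difference sensitivities of the invariant distribution:

```python
# Finite-difference probe of invariant-distribution sensitivity. The
# nearly uncoupled chain below (two tight blocks {0,1} and {2,3} joined
# by weak links) is the classic setting in which sensitivity varies
# strongly between entries. All numbers are illustrative.

def stationary(P, n_iter=10_000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):            # power iteration: pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def perturb(P, i, j, eps):
    Q = [row[:] for row in P]          # add eps to entry (i, j), renormalise row i
    Q[i][j] += eps
    s = sum(Q[i])
    Q[i] = [x / s for x in Q[i]]
    return Q

P = [[0.89, 0.10, 0.01, 0.00],
     [0.10, 0.89, 0.00, 0.01],
     [0.01, 0.00, 0.89, 0.10],
     [0.00, 0.01, 0.10, 0.89]]

pi0 = stationary(P)
eps = 1e-6

def sensitivity(i, j):
    pi1 = stationary(perturb(P, i, j, eps))
    return max(abs(a - b) / b for a, b in zip(pi1, pi0)) / eps

sens_bridge = sensitivity(0, 2)   # weak inter-block entry: large effect
sens_local = sensitivity(0, 1)    # strong intra-block entry: small effect
```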

  4. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  5. Markov state models of protein misfolding

    Science.gov (United States)

    Sirur, Anshul; De Sancho, David; Best, Robert B.

    2016-02-01

    Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity.
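
    The effect of a sink state can be sketched on a toy chain: making the misfolded state absorbing and computing the probability of ending in it from each transient state. All states and numbers below are illustrative, not the titin model:

```python
# A toy Markov state model with an absorbing "sink" state, mimicking the
# use of sinks for effectively irreversible misfolding. States 0 and 1
# are transient conformations, state 2 is folded and state 3 is the
# misfolded sink.

P = [[0.80, 0.15, 0.04, 0.01],
     [0.15, 0.80, 0.00, 0.05],
     [0.00, 0.00, 1.00, 0.00],   # folded: absorbing
     [0.00, 0.00, 0.00, 1.00]]   # misfolded sink: absorbing

def absorption_prob(P, transient, target, n_iter=5000):
    # b[i] = probability of eventually being absorbed in `target`,
    # found by iterating b <- P[:, target] + Q b on the transient block
    b = {s: 0.0 for s in transient}
    for _ in range(n_iter):
        b = {i: P[i][target] + sum(P[i][j] * b[j] for j in transient)
             for i in transient}
    return b

b = absorption_prob(P, transient=[0, 1], target=3)
# state 1 feeds the sink more strongly, so b[1] > b[0]
```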

  6. Multivariate Markov chain modeling for stock markets

    Science.gov (United States)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into the sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to arbitrary size of portfolio is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with Gibbs measure assigned to each configuration of spin glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.

  7. Anatomy Ontology Matching Using Markov Logic Networks

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2016-01-01

    Full Text Available The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a way of finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.

  8. Crossing over...Markov meets Mendel.

    Science.gov (United States)

    Mneimneh, Saad

    2012-01-01

    Chromosomal crossover is a biological mechanism to combine parental traits. It is perhaps the first mechanism ever taught in any introductory biology class. The formulation of crossover, and resulting recombination, came about 100 years after Mendel's famous experiments. To a great extent, this formulation is consistent with the basic genetic findings of Mendel. More importantly, it provides a mathematical insight for his two laws (and corrects them). From a mathematical perspective, and while it retains similarities, genetic recombination guarantees diversity so that we do not rapidly converge to the same being. It is this diversity that made the study of biology possible. In particular, the problem of genetic mapping and linkage-one of the first efforts towards a computational approach to biology-relies heavily on the mathematical foundation of crossover and recombination. Nevertheless, as students we often overlook the mathematics of these phenomena. Emphasizing the mathematical aspect of Mendel's laws through crossover and recombination will prepare the students to make an early realization that biology, in addition to being experimental, IS a computational science. This can serve as a first step towards a broader curricular transformation in teaching biological sciences. I will show that a simple and modern treatment of Mendel's laws using a Markov chain will make this step possible, and it will only require basic college-level probability and calculus. My personal teaching experience confirms that students WANT to know Markov chains because they hear about them from bioinformaticists all the time. This entire exposition is based on three homework problems that I designed for a course in computational biology. A typical reader is, therefore, an instructional staff member or a student in a computational field (e.g., computer science, mathematics, statistics, computational biology, bioinformatics). However, other students may easily follow by omitting the

  10. Bayesian tomography by interacting Markov chains

    Science.gov (United States)

    Romary, T.

    2017-12-01

    In seismic tomography, we seek to determine the velocity of the underground from noisy first arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class enables the design of interacting schemes that can take advantage of the whole history of the chain, by allowing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first arrival travel time tomography.
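
    A minimal sketch of parallel tempering, the simplest interacting-chains scheme mentioned above, on a bimodal one-dimensional target; the target and tuning constants are illustrative, not from the talk:

```python
import math
import random

# Parallel tempering: Metropolis chains at several temperatures target
# the same bimodal density, with periodic state swaps between
# neighbouring temperatures so that barrier crossings found by the hot
# chains propagate down to the cold one.

def log_target(x):
    return -(x * x - 4.0) ** 2 / 2.0      # modes near x = -2 and x = +2

def parallel_tempering(betas, n_steps=30_000, seed=7):
    rng = random.Random(seed)
    xs = [2.0] * len(betas)               # all chains start in the right mode
    visited_left = False
    for _ in range(n_steps):
        for k, beta in enumerate(betas):  # local Metropolis moves
            prop = xs[k] + rng.gauss(0.0, 0.5)
            d = beta * (log_target(prop) - log_target(xs[k]))
            if rng.random() < math.exp(min(0.0, d)):
                xs[k] = prop
        k = rng.randrange(len(betas) - 1) # propose swapping a neighbouring pair
        d = (betas[k] - betas[k + 1]) * (log_target(xs[k + 1]) - log_target(xs[k]))
        if rng.random() < math.exp(min(0.0, d)):
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        if xs[0] < -1.0:                  # coldest chain reached the left mode
            visited_left = True
    return visited_left

# the cold chain alone crosses the barrier only rarely; with the
# temperature ladder the left mode is found quickly
crossed = parallel_tempering(betas=[1.0, 0.4, 0.15, 0.05])
```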

  11. Modeling Uncertainty of Directed Movement via Markov Chains

    Directory of Open Access Journals (Sweden)

    YIN Zhangcai

    2015-10-01

    Full Text Available Probabilistic time geography (PTG) has been suggested as an extension of (classical) time geography, in order to express, as a probability, the uncertainty about an agent being located at an accessible position. This may provide a quantitative basis for finding the most likely location of an agent. In recent years, PTG models based on the normal distribution or the Brownian bridge have been proposed; their variance, however, is either unrelated to the agent's speed or diverges as the speed increases, so they struggle to combine practical applicability and stability. In this paper, a new method is proposed to model PTG based on Markov chains. First, a bidirectionally conditioned Markov chain is modeled; its limit, when the moving speed is large enough, can be regarded as the Brownian bridge, and thus has the property of numerical stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, the state space and the transition matrix of the Markov chain from the space-time positions of the directed movement and the movement-speed information, so that the Markov chain is related to the movement speed. Finally, by computing the probability distribution of the directed movement at any time from the Markov chain, one obtains the probability that the agent is located at each accessible position. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.
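
    The core computation, propagating the agent's probability distribution through the transition matrix of a directed (biased) walk, can be sketched as follows; positions, bias and step counts are illustrative:

```python
# A 1-D directed random walk as a Markov chain: propagate the
# probability distribution step by step to obtain the probability of
# the agent occupying each position at each time.

def step_distribution(dist, p_forward, n_pos):
    new = [0.0] * n_pos
    for i, m in enumerate(dist):
        if m == 0.0:
            continue
        fwd = min(i + 1, n_pos - 1)
        new[fwd] += m * p_forward          # biased move toward the target
        new[i] += m * (1.0 - p_forward)    # wait in place
    return new

n_pos = 10
dist = [1.0] + [0.0] * (n_pos - 1)         # agent starts at position 0
for _ in range(6):
    dist = step_distribution(dist, p_forward=0.7, n_pos=n_pos)
# after 6 steps the most probable position reflects the drift 0.7 * 6
```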

  12. Prediction of annual precipitation on the territory of south Serbia using Markov chains

    Directory of Open Access Journals (Sweden)

    Lukić Predrag

    2013-01-01

    Full Text Available Prediction of precipitation is one of the important factors that affect sectors such as industry, agriculture, environmental protection, and their related fields. A stochastic method based on a Markov chain model is used in the paper to predict the annual precipitation in the territory of South Serbia for the period 2009-2013. For this purpose, precipitation data recorded at four synoptic stations over the period 1980-2010 were used. [Project of the Ministry of Science of the Republic of Serbia, no. TR 37003: Development of a hydro-information system for drought monitoring and early warning]
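
    The general procedure can be sketched with made-up data: classify each year as dry (0), normal (1) or wet (2), estimate the transition matrix from the observed sequence, and propagate the current state forward; the state sequence below is illustrative, not the Serbian station data:

```python
# Fit a 3-state Markov chain to a categorised annual-precipitation
# record and forecast the state distribution several years ahead.

def fit_transition_matrix(states, n_states):
    C = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        C[a][b] += 1
    P = []
    for row in C:
        tot = sum(row)
        P.append([c / tot for c in row] if tot else [1.0 / n_states] * n_states)
    return P

def forecast(P, current_state, horizon):
    dist = [0.0] * len(P)
    dist[current_state] = 1.0
    for _ in range(horizon):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# 0 = dry, 1 = normal, 2 = wet; an invented 20-year record
states = [1, 1, 0, 1, 2, 2, 1, 0, 0, 1, 2, 1, 1, 0, 1, 2, 1, 1, 2, 0]
P = fit_transition_matrix(states, 3)
dist = forecast(P, current_state=states[-1], horizon=4)
```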

  13. Markov processes from K. Ito's perspective (AM-155)

    CERN Document Server

    Stroock, Daniel W

    2003-01-01

    Kiyosi Itô's greatest contribution to probability theory may be his introduction of stochastic differential equations to explain the Kolmogorov-Feller theory of Markov processes. Starting with the geometric ideas that guided him, this book gives an account of Itô's program. The modern theory of Markov processes was initiated by A. N. Kolmogorov. However, Kolmogorov's approach was too analytic to reveal the probabilistic foundations on which it rests. In particular, it hides the central role played by the simplest Markov processes: those with independent, identically distributed increments

  14. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
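
    A standard route to such rate functions is the tilted-matrix construction: for an observable g accumulated along the chain, the scaled cumulant generating function is the log of the largest eigenvalue of P_ij e^{s g(j)}. A sketch with an illustrative two-state chain (not the paper's active-matter model):

```python
import math

# Scaled cumulant generating function theta(s) of a dynamical
# observable, via the largest eigenvalue of the tilted matrix
# P_ij * exp(s * g(j)), computed by power iteration.

def scgf(P, g, s, n_iter=2000):
    n = len(P)
    T = [[P[i][j] * math.exp(s * g[j]) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(n_iter):
        w = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log(lam)

P = [[0.9, 0.1], [0.2, 0.8]]
g = [0.0, 1.0]                     # counts time spent in state 1
theta0 = scgf(P, g, 0.0)           # 0 at s = 0, since P is stochastic
# theta'(0) equals the stationary average of g, here pi_1 = 1/3
mean = (scgf(P, g, 1e-5) - scgf(P, g, -1e-5)) / 2e-5
```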

  15. Markov's theorem and algorithmically non-recognizable combinatorial manifolds

    International Nuclear Information System (INIS)

    Shtan'ko, M A

    2004-01-01

    We prove the theorem of Markov on the existence of an algorithmically non-recognizable combinatorial n-dimensional manifold for every n≥4. We construct for the first time a concrete manifold which is algorithmically non-recognizable. A strengthened form of Markov's theorem is proved using the combinatorial methods of regular neighbourhoods and handle theory. The proofs coincide for all n≥4. We use Borisov's group with insoluble word problem. It has two generators and twelve relations. The use of this group forms the base for proving the strengthened form of Markov's theorem

  16. Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes

    Science.gov (United States)

    Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario

    2018-02-01

    Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
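
    The inverse-rate version of the material-failure forecast can be sketched on noise-free synthetic data (exponent fixed at 1 for simplicity; the analysis above is Bayesian and handles a general power law):

```python
# Failure-forecast sketch: if the event rate follows an inverse power
# law R(t) = k / (t_f - t), the inverse rate 1/R decreases linearly in
# time and its zero crossing forecasts the failure time t_f. The data
# here are synthetic, not the Tungurahua catalogue.

def forecast_failure_time(times, rates):
    ys = [1.0 / r for r in rates]          # inverse rate
    n = len(times)
    mt = sum(times) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    intercept = my - slope * mt
    return -intercept / slope              # zero crossing of the fitted line

t_f, k = 30.0, 5.0
times = [float(t) for t in range(20)]      # observations end before t_f
rates = [k / (t_f - t) for t in times]
t_hat = forecast_failure_time(times, rates)
# with noise-free data the least-squares fit recovers t_f = 30
```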

  17. Cyclic Markov chains with an application to an intermediate ENSO model

    Directory of Open Access Journals (Sweden)

    R. A. Pasmanter

    2003-01-01

    Full Text Available We develop the theory of cyclic Markov chains and apply it to the El Niño-Southern Oscillation (ENSO) predictability problem. At the core of Markov chain modelling is a partition of the state space such that the transition rates between different state space cells can be computed and used most efficiently. We apply a partition technique which divides the state space into multidimensional cells containing an equal number of data points. This partition leads to mathematical properties of the transition matrices which can be exploited further, for example to establish connections with the dynamical theory of unstable periodic orbits. We introduce the concept of most and least predictable states. The data basis of our analysis consists of a multicentury-long data set obtained from an intermediate coupled atmosphere-ocean model of the tropical Pacific. This cyclostationary Markov chain approach captures the spring barrier in ENSO predictability and also gives insight into the dependence of ENSO predictability on the climatic state.
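
    The equal-count partition at the core of the approach can be sketched for a scalar time series (synthetic AR(1)-like data; the real application is multidimensional):

```python
import random

# Split a scalar time series into cells containing (nearly) the same
# number of data points, then count transitions between cells.

def equal_count_cells(series, n_cells):
    order = sorted(series)
    # cell boundaries at empirical quantiles
    bounds = [order[(len(series) * k) // n_cells] for k in range(1, n_cells)]
    def cell(x):
        c = 0
        for b in bounds:
            if x >= b:
                c += 1
        return c
    return cell

def transition_matrix(series, cell, n_cells):
    C = [[0] * n_cells for _ in range(n_cells)]
    for a, b in zip(series, series[1:]):
        C[cell(a)][cell(b)] += 1
    return [[c / max(sum(row), 1) for c in row] for row in C]

rng = random.Random(0)
x, series = 0.0, []
for _ in range(5000):                      # persistent AR(1)-like series
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    series.append(x)

cell = equal_count_cells(series, 4)
P = transition_matrix(series, cell, 4)
counts = [sum(1 for v in series if cell(v) == k) for k in range(4)]
```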

  18. Spectral analysis and markov switching model of Indonesia business cycle

    Science.gov (United States)

    Fajar, Muhammad; Darwis, Sutawanir; Darmawan, Gumgum

    2017-03-01

    This study aims to investigate the Indonesian business cycle, including the determination of the smoothing parameter (λ) of the Hodrick-Prescott filter. The cycle component of the filter output was then analyzed using a spectral method to characterize it, and a Markov switching regime model was built to forecast the probabilities of the recession and expansion regimes. The data used in the study are real GDP (1983Q1 - 2016Q2). The results of the study are: a) the Hodrick-Prescott filter on Indonesian real GDP is optimal when the value of the smoothing parameter is 988.474; b) the Indonesian business cycle has an amplitude varying between ±0.0071 and ±0.01024, and a duration of between 4 and 22 quarters; c) the business cycle can be modelled by an MSIV-AR(2), but the regime periodization generated by this model does not match the actual regime periodization exactly; and d) based on the MSIV-AR(2) model, the long-run probabilities are 0.4858 for the expansion regime and 0.5142 for the recession regime.
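
    The Hodrick-Prescott step can be sketched directly: the trend solves a linear system built from the second-difference matrix and the smoothing parameter λ. The series below is synthetic; the value 988.474 quoted above is the paper's estimate for quarterly real GDP:

```python
import math

# Hodrick-Prescott filter: the trend tau minimises
# sum (y_t - tau_t)^2 + lam * sum (second differences of tau)^2,
# i.e. it solves (I + lam * D'D) tau = y.

def hp_filter(y, lam):
    n = len(y)
    # build A = I + lam * D'D, with D the (n-2) x n second-difference matrix
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for r in range(n - 2):
        coeff = [(r, 1.0), (r + 1, -2.0), (r + 2, 1.0)]
        for i, ci in coeff:
            for j, cj in coeff:
                A[i][j] += lam * ci * cj
    # solve A tau = y by Gaussian elimination with partial pivoting
    b = list(y)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        tau[r] = (b[r] - sum(A[r][k] * tau[k] for k in range(r + 1, n))) / A[r][r]
    return tau

y = [0.05 * t + 0.3 * math.sin(t) for t in range(40)]   # trend plus cycle
trend = hp_filter(y, lam=1600.0)
cycle = [yt - tt for yt, tt in zip(y, trend)]
```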

  19. Filtering of a Markov Jump Process with Counting Observations

    International Nuclear Information System (INIS)

    Ceci, C.; Gerardi, A.

    2000-01-01

    This paper concerns the filtering of an R d -valued Markov pure jump process when only the total number of jumps is observed. Strong and weak uniqueness for the solutions of the filtering equations are discussed

  20. The Independence of Markov's Principle in Type Theory

    DEFF Research Database (Denmark)

    Coquand, Thierry; Mannaa, Bassel

    2017-01-01

    In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.

  1. Classification of customer lifetime value models using Markov chain

    Science.gov (United States)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm’s potential future reward from a customer can be quantified by the customer lifetime value (CLV). There are several mathematical methods to calculate it; one of them uses a Markov chain stochastic model. Here, a customer is assumed to pass through a number of states, with transitions between the states satisfying the Markov property. Given a set of states for a customer and the relationships between those states, we can construct Markov models describing the customer's behaviour. In these Markov models, CLV is defined as a vector whose entries give the CLV of a customer starting in each state. In this paper we give a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states; the development of each model is motivated by weaknesses of the previous one. The final models can be expected to describe the real behaviour of customers in a firm.
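
    The basic Markov-chain CLV computation can be sketched as a discounted sum of expected per-state rewards, v = Σ_t d^t P^t r; the three customer states and all numbers below are illustrative:

```python
# Markov-chain CLV: with per-period rewards r per state, transition
# matrix P over customer states and discount factor d, the lifetime
# value vector is v = sum_t d^t P^t r, computed here by truncating the
# sum. States: 0 = active, 1 = lapsing, 2 = churned (absorbing).

def clv(P, r, d, horizon=500):
    n = len(P)
    v = [0.0] * n
    pr = list(r)                   # P^t r, starting at t = 0
    disc = 1.0
    for _ in range(horizon):
        v = [vi + disc * pi for vi, pi in zip(v, pr)]
        pr = [sum(P[i][j] * pr[j] for j in range(n)) for i in range(n)]
        disc *= d
    return v

P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.0, 0.0, 1.0]]
r = [100.0, 30.0, 0.0]
v = clv(P, r, d=0.9)
# v[0] is the CLV of a customer currently in the "active" state
```

    Equivalently, v solves the linear system (I - dP) v = r, which the truncated sum approximates to within d^horizon.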

  2. Continuous-time Markov decision processes theory and applications

    CERN Document Server

    Guo, Xianping

    2009-01-01

    This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.

  3. A simplified parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.

  4. A tridiagonal parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.

  5. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    The present work is a part of the development of Hidden Markov Model (HMM) based ... the Himalaya. In this work, HMMs have been developed for forecasting of maximum and minimum ..... data collection teams of Snow and Avalanche Study.

  6. Markov chain: a predictive model for manpower planning | Ezugwu ...

    African Journals Online (AJOL)

    In respect of organizational management, numerous previous studies have ... and to forecast the academic staff structure of the university in the next five years. ... Keywords: Markov Chain, Transition Probability Matrix, Manpower Planning, ...

  7. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
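
    The decoding step that the method reduces everything to, the Viterbi algorithm on a first-order HMM, can be sketched compactly; the two-state model parameters below are illustrative:

```python
# Viterbi decoding of a first-order HMM: the most likely hidden state
# sequence given an observation sequence. Probabilities are kept as
# plain products here; use log probabilities for long sequences.

def viterbi(obs, init, trans, emit):
    n = len(init)
    delta = [init[s] * emit[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        ptr, new = [], []
        for s in range(n):
            best = max(range(n), key=lambda p: delta[p] * trans[p][s])
            ptr.append(best)
            new.append(delta[best] * trans[best][s] * emit[s][o])
        back.append(ptr)
        delta = new
    state = max(range(n), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(back):      # backtrack through the pointers
        state = ptr[state]
        path.append(state)
    return path[::-1]

init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]     # state 0 mostly emits 0, state 1 mostly 1
path = viterbi([0, 0, 1, 1, 1], init, trans, emit)
```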

  8. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    OpenAIRE

    Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...

  9. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  10. Reliability estimation of semi-Markov systems: a case study

    International Nuclear Information System (INIS)

    Ouhbi, Brahim; Limnios, Nikolaos

    1997-01-01

    In this article, we are concerned with the estimation of the reliability and the availability of a turbo-generator rotor using a set of data observed in a real engineering situation, provided by Electricite De France (EDF). The rotor is modeled by a semi-Markov process, which is used to estimate the rotor's reliability and availability. To do this, we present a method for estimating the semi-Markov kernel from censored data

  11. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  12. Hidden Markov models in automatic speech recognition

    Science.gov (United States)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMMs designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.

  13. Stability and perturbations of countable Markov maps

    Science.gov (United States)

    Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas

    2018-04-01

    Let T and T_ε, ε > 0, be countable Markov maps such that the branches of T_ε converge pointwise to the branches of T, as ε → 0. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent etc) of the topological conjugacy between T_ε and T when ε → 0. This is a well-understood problem for maps with finitely-many branches, and the quantities are stable for small ε, that is, they converge to their expected values as ε → 0. For the infinite branch case their stability might be expected to fail, but we prove that even in the infinite branch case the quantities are stable under some natural regularity assumptions on T_ε and T (under which, for instance, the Hölder exponent of the conjugacy fails to be stable). Our assumptions apply for example in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter α. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite branch case. Dedicated to the memory of Bernd O Stratmann

  14. Lectures from Markov processes to Brownian motion

    CERN Document Server

    Chung, Kai Lai

    1982-01-01

    This book evolved from several stacks of lecture notes written over a decade and given in classes at slightly varying levels. In transforming the overlapping material into a book, I aimed at presenting some of the best features of the subject with a minimum of prerequisites and technicalities. (Needless to say, one man's technicality is another's professionalism.) But a text frozen in print does not allow for the latitude of the classroom; and the tendency to expand becomes harder to curb without the constraints of time and audience. The result is that this volume contains more topics and details than I had intended, but I hope the forest is still visible with the trees. The book begins at the beginning with the Markov property, followed quickly by the introduction of optional times and martingales. These three topics in the discrete parameter setting are fully discussed in my book A Course In Probability Theory (second edition, Academic Press, 1974). The latter will be referred to throughout this book ...

  15. Markov branching in the vertex splitting model

    International Nuclear Information System (INIS)

    Stefánsson, Sigurdur Örn

    2012-01-01

    We study a special case of the vertex splitting model which is a recent model of randomly growing trees. For any finite maximum vertex degree D, we find a one parameter model, with parameter α ∈ [0,1], which has a so-called Markov branching property. When D=∞ we find a two parameter model with an additional parameter γ ∈ [0,1] which also has this feature. In the case D = 3, the model bears resemblance to Ford's α-model of phylogenetic trees and when D=∞ it is similar to its generalization, the αγ-model. For α = 0, the model reduces to the well-known model of preferential attachment. In the case α > 0, we prove convergence of the finite volume probability measures, generated by the growth rules, to a measure on infinite trees which is concentrated on the set of trees with a single spine. We show that the annealed Hausdorff dimension with respect to the infinite volume measure is 1/α. When γ = 0 the model reduces to a model of growing caterpillar graphs in which case we prove that the Hausdorff dimension is almost surely 1/α and that the spectral dimension is almost surely 2/(1 + α). We comment briefly on the distribution of vertex degrees and correlations between degrees of neighbouring vertices

  16. Bayesian posterior distributions without Markov chains.

    Science.gov (United States)

    Cole, Stephen R; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B

    2012-03-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
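    The transparent rejection-sampling idea above can be sketched for a deliberately simpler, hypothetical setting than the paper's case-control analysis: a binomial proportion under a uniform prior, where candidate values are proposed from the prior and accepted in proportion to the likelihood (scaled by its maximum). The function name and the binomial framing of the 36/198 counts are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def rejection_sample_binomial(k, n, n_draws=5000, seed=1):
    """Posterior draws for a binomial proportion p under a uniform prior,
    via rejection sampling: propose p from the prior, accept with
    probability L(p) / L_max, where L_max is the likelihood at the MLE."""
    rng = random.Random(seed)
    p_mle = k / n
    log_lmax = k * math.log(p_mle) + (n - k) * math.log(1.0 - p_mle)
    draws = []
    while len(draws) < n_draws:
        p = rng.random()  # proposal drawn from the uniform prior
        if p <= 0.0 or p >= 1.0:
            continue
        log_l = k * math.log(p) + (n - k) * math.log(1.0 - p)
        # acceptance ratio L(p)/L_max is at most 1 by construction
        if rng.random() < math.exp(log_l - log_lmax):
            draws.append(p)
    return draws

# 36 "successes" out of 234 observations, loosely echoing the 36 cases
# and 198 controls in the abstract (purely illustrative numbers)
draws = rejection_sample_binomial(36, 234)
posterior_mean = sum(draws) / len(draws)
```

With a uniform prior the exact posterior is Beta(37, 199), so the sampled mean should sit near 37/236 ≈ 0.157; no Markov chain, tuning, or convergence diagnostics are involved.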

  17. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  18. Non-Markov Ito processes with 1-state memory

    Science.gov (United States)

    McCauley, Joseph L.

    2010-08-01

    A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also non-Markov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.

  19. Switching Markov chains for a holistic modeling of SIS unavailability

    International Nuclear Information System (INIS)

    Mechri, Walid; Simon, Christophe; BenOthman, Kamel

    2015-01-01

    This paper proposes a holistic approach to model the Safety Instrumented Systems (SIS). The model is based on Switching Markov Chain and integrates several parameters like Common Cause Failure, Imperfect Proof testing, partial proof testing, etc. The basic concepts of Switching Markov Chain applied to reliability analysis are introduced and a model to compute the unavailability for a case study is presented. The proposed Switching Markov Chain allows us to assess the effect of each parameter on the SIS performance. The proposed method ensures the relevance of the results. - Highlights: • A holistic approach to model the unavailability safety systems using Switching Markov chains. • The model integrates several parameters like probability of failure due to the test, the probability of not detecting a failure in a test. • The basic concepts of the Switching Markov Chains are introduced and applied to compute the unavailability for safety systems. • The proposed Switching Markov Chain allows assessing the effect of each parameter on the chemical reactor performance

  20. Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

    International Nuclear Information System (INIS)

    Costa, O. L. V.; Dufour, F.

    2011-01-01

    This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMP's) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated to a small parameter ε>0) and a slow behavior. By using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.

  1. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    Science.gov (United States)

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models in which the observations are coming from mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month for 11-year span at Christian Medical College, Vellore, India. Using Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage time between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High' with the mean counts of 1.4, 6.6 and 20.2 and the estimated average duration of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
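    The Viterbi decoding step described above can be sketched in a few lines: dynamic programming in the log domain over Poisson emission probabilities. The state means loosely echo the abstract's 'Low'/'Moderate'/'High' values (1.4, 6.6, 20.2), but the transition matrix, initial distribution and observation sequence here are invented for illustration, not the study's estimates.

```python
import math

def viterbi_poisson(obs, means, trans, init):
    """Most likely hidden-state sequence for a Poisson HMM (log domain)."""
    def log_pois(x, lam):
        return x * math.log(lam) - lam - math.lgamma(x + 1)
    n_states = len(means)
    # delta[s] = best log-probability of any state path ending in state s
    delta = [math.log(init[s]) + log_pois(obs[0], means[s]) for s in range(n_states)]
    back = []
    for x in obs[1:]:
        ptr, new_delta = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda r: delta[r] + math.log(trans[r][s]))
            ptr.append(best)
            new_delta.append(delta[best] + math.log(trans[best][s]) + log_pois(x, means[s]))
        back.append(ptr)
        delta = new_delta
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

means = [1.4, 6.6, 20.2]                       # 'Low', 'Moderate', 'High'
trans = [[0.80, 0.15, 0.05],                   # invented transition matrix
         [0.15, 0.70, 0.15],
         [0.05, 0.25, 0.70]]
states = viterbi_poisson([1, 2, 7, 6, 22, 19, 1], means, trans, [1/3, 1/3, 1/3])
```

For well-separated means the decoded path essentially tracks which Poisson component best explains each count, so low counts map to state 0 and counts near 20 to state 2.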

  2. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
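    The core claim, that an irreversible linear chain becomes more reliable as states are added, can be checked numerically. For n equal-rate irreversible steps, the total transit time is Erlang-distributed with coefficient of variation 1/sqrt(n). This toy simulation (function name and parameters are illustrative, not from the paper) estimates that CV by Monte Carlo:

```python
import random

def transit_time_cv(n_states, n_trials=20000, seed=3):
    """Coefficient of variation of the time to traverse an irreversible
    linear chain of n equal-rate steps. The total time is Erlang(n), so
    CV = 1/sqrt(n): more stages give a less variable (more reliable) signal.
    Rates are scaled by n so the mean transit time stays 1."""
    rng = random.Random(seed)
    times = [sum(rng.expovariate(n_states) for _ in range(n_states))
             for _ in range(n_trials)]
    mean = sum(times) / n_trials
    var = sum((t - mean) ** 2 for t in times) / n_trials
    return var ** 0.5 / mean

cv_1, cv_9 = transit_time_cv(1), transit_time_cv(9)
```

A single exponential stage has CV near 1, while nine stages bring it down to about 1/3, consistent with the reliability-versus-number-of-states tradeoff the abstract describes.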

  3. Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach

    Science.gov (United States)

    Charng, T.

    1982-01-01

    Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. According to a joint effect of the probabilistic transition of states and the sequence of decision making, the optimal policy is sought such that a set of decisions optimizes the long-run expected average cost (or profit) per unit time. Provision of an alternative measure for the expected long-run total discounted costs is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.

  4. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    International Nuclear Information System (INIS)

    Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon

    2012-01-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrated our algorithm outperforms the VC algorithm and outperforms SG and RBV corrections when the segmented MRI is inconsistent (e.g. mis-segmentation, lesions, etc.) with the PET image.

  5. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    Science.gov (United States)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two pole caustic and pair starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
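    The MCMC exploration of a model parameter space described above boils down to random-walk Metropolis sampling. This sketch uses a toy one-dimensional Gaussian "posterior" as a stand-in for the actual population-synthesis likelihood; the function name, step size and target are all illustrative assumptions.

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0, seed=7):
    """Random-walk Metropolis: a minimal MCMC chain over one parameter."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        # accept with probability min(1, target(prop) / target(x))
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return chain

# toy target: standard normal log-density over one model parameter
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=30000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

In a real population synthesis the log-target would compare simulated detections against the radio-survey and Fermi counts; the chain mechanics stay the same.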

  6. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    Directory of Open Access Journals (Sweden)

    A. Valor

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled after the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
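    The linear pure-birth (Yule) process underlying the first model can be simulated directly: with n "individuals" (here, occupied depth states) the total jump rate is n times the per-state rate, and the expected population at time t is e^(rate*t). This is a generic illustration of the process class, not the paper's calibrated corrosion model.

```python
import random

def yule_population(rate, t_end, n_runs=4000, seed=5):
    """Simulate a linear pure-birth (Yule) process: with n individuals the
    next jump is exponential with total rate n*rate and increments n by 1.
    Returns the mean population at t_end (theory: exp(rate * t_end))."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        t, n = 0.0, 1
        while True:
            t += rng.expovariate(n * rate)  # waiting time to the next birth
            if t > t_end:
                break
            n += 1
        total += n
    return total / n_runs

mean_pop = yule_population(rate=1.0, t_end=1.0)
```

With rate 1 and horizon 1 the theoretical mean is e ≈ 2.718; the closed-form transition probabilities from Kolmogorov's forward equations are geometric in n, which is what the pit-depth model exploits.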

  7. Prediction of inspection intervals using the Markov analysis

    International Nuclear Information System (INIS)

    Rea, R.; Arellano, J.

    2005-01-01

    To cope with the unmanageable number of Markov states in systems that have a great number of components, a modification of the Markov method, called truncated Markov analysis, is proposed, in which the dependence among component faults is assumed to be negligible. With it, the number of states grows linearly (not exponentially) with the number of components of the system, greatly simplifying the analysis. As an example, the proposed method was applied to the HPCS system of the CLV, considering its 18 main components. Each component is considered to take one of three states: operational, with a hidden fault, or with a revealed fault. Additionally, the configuration of the HPCS system is taken into account by means of a dependability block diagram to estimate its unavailability at the system level. The results of the model proposed here are compared with other methods and approaches used to simplify Markov analysis. A modification of the inspection intervals of three components of the HPCS system is also proposed, based on the developed Markov model and on the maximum time allowed by the ASME code (NUREG-1482) for inspecting components of systems that are on standby in nuclear power plants. (Author)

  8. Benchmarking of a Markov multizone model of contaminant transport.

    Science.gov (United States)

    Jones, Rachael M; Nicas, Mark

    2014-10-01

    A Markov chain model previously applied to the simulation of advection and diffusion process of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with grid length aspect when turbulence is incorporated by way of the turbulent diffusion coefficient, and advection is present. However, the magnitude of prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  9. Automated generation of partial Markov chain from high level descriptions

    International Nuclear Information System (INIS)

    Brameret, P.-A.; Rauzy, A.; Roussel, J.-M.

    2015-01-01

    We propose an algorithm to generate partial Markov chains from high level implicit descriptions, namely AltaRica models. This algorithm relies on two components. First, a variation on Dijkstra's algorithm to compute shortest paths in a graph. Second, the definition of a notion of distance to select which states must be kept and which can be safely discarded. The proposed method solves two problems at once. First, it avoids a manual construction of Markov chains, which is both tedious and error prone. Second, at the price of acceptable approximations, it makes it possible to push back dramatically the exponential blow-up of the size of the resulting chains. We report experimental results that show the efficiency of the proposed approach. - Highlights: • We generate Markov chains from a higher level safety modeling language (AltaRica). • We use a variation on Dijkstra's algorithm to generate partial Markov chains. • Hence we solve two problems: the first problem is the tedious manual construction of Markov chains. • The second problem is the blow-up of the size of the chains, at the cost of decent approximations. • The experimental results highlight the efficiency of the method
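    The pruning idea, keep only states reachable within a bounded "distance" of the nominal state, can be rendered with Dijkstra's algorithm where each transition costs -log(p), so improbable states lie far away. This is a toy rendering of the idea only; the states, probabilities and distance cutoff below are invented, and the actual AltaRica-based algorithm defines its distance differently.

```python
import heapq
import math

def reachable_states(transitions, source, max_dist):
    """Dijkstra over the state graph with edge cost -log(p): states whose
    most probable path from the source is too improbable (distance above
    max_dist) are discarded from the partial chain."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist.get(s, math.inf):
            continue  # stale heap entry
        for target, p in transitions.get(s, []):
            nd = d - math.log(p)
            if nd <= max_dist and nd < dist.get(target, math.inf):
                dist[target] = nd
                heapq.heappush(heap, (nd, target))
    return set(dist)

# toy degradation model: failure transitions are rare, so failure states
# lie "far" from the nominal state and are pruned first
transitions = {
    "ok":       [("degraded", 0.01), ("ok", 0.99)],
    "degraded": [("failed", 0.10), ("ok", 0.50)],
}
kept = reachable_states(transitions, "ok", max_dist=5.0)
```

Here -log(0.01) ≈ 4.6 keeps "degraded" inside the cutoff, while "failed" (≈ 6.9) is discarded, which is exactly the kind of truncation that tames the state-space blow-up.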

  10. Distinguishing patterns in the dynamics of long-term medication use by Markov analysis: beyond persistence

    Directory of Open Access Journals (Sweden)

    Lammers Jan-Willem J

    2007-07-01

    Abstract Background In order to accurately distinguish gaps of varying length in drug treatment for chronic conditions from discontinuation without resuming therapy, short-term observation does not suffice. Thus, the use of inhalation corticosteroids (ICS) in the long term, during a ten-year period, is investigated. To describe medication use as a continuum, taking into account the timeliness and consistency of refilling, a Markov model is proposed. Methods Patients that filled at least one prescription in 1993 were selected from the PHARMO medical record linkage system (RLS) containing >95% of prescription dispensings per patient, originating from community pharmacy records of 6 medium-sized cities in the Netherlands. The probabilities of continuous use (the refilling of at least one ICS prescription in each year of follow-up) and of medication-free periods were assessed by Markov analysis. A stratified analysis according to new use was performed. Results The transition probabilities of refilling at least one ICS prescription in the subsequent year of follow-up were assessed for each year of follow-up and for the total study period. The change of transition probabilities over time was evaluated; e.g. the probability of continuing ICS use for starters in the first two years of follow-up (51%) increased to more than 70% in the following years. The probabilities of different patterns of medication use during ten-year follow-up were assessed for new users: continuous use (7.7%), cumulative medication gaps of 1–8 years (69.1%) and discontinuing (23.2%). New users had a lower probability of continuous use (7.7%) and more variability in ICS refill patterns than previous users (56%). Conclusion In addition to well-established methods in epidemiology to ascertain compliance and persistence, a Markov model could be useful to further specify the variety of possible patterns of medication use within the continuum of adherence. This Markov model describes variation in

  11. A two-states Markov-switching model of inflation in France and the USA: credible target VS inflation spiral

    OpenAIRE

    B. HEITZ

    2005-01-01

    This paper seeks to apply the general framework of Markov-switching models to inflation in France and in the USA. We propose a model where inflation can, alternatively, follow two regimes: the first one, where inflation is stationary, is interpreted as a situation where there exists a credible inflation target, even if it is not explicit; the second one where inflation is integrated. Moreover, observing that the two oil shocks were followed by accelerating inflation periods, we allow dependen...

  12. The Impact of Short-Sale Constraints on Asset Allocation Strategies via the Backward Markov Chain Approximation Method

    OpenAIRE

    Carl Chiarella; Chih-Ying Hsiao

    2005-01-01

    This paper considers an asset allocation strategy over a finite period under investment uncertainty and short-sale constraints as a continuous time stochastic control problem. Investment uncertainty is characterised by a stochastic interest rate and inflation risk. If there are no short-sale constraints, the optimal asset allocation strategy can be solved analytically. We consider several kinds of short-sale constraints and employ the backward Markov chain approximation method to explore the ...

  13. Markov chain modelling of pitting corrosion in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 La Habana (Cuba); Hallen, J.M. [Departamento de Ingeniería Metalúrgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)

    2009-09-15

    A continuous-time, non-homogenous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.

  14. Influence of credit scoring on the dynamics of Markov chain

    Science.gov (United States)

    Galina, Timofeeva

    2015-11-01

    Markov processes are widely used to model the dynamics of a credit portfolio and to forecast portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares be described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control acting on the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
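    The core of such a portfolio model is propagating group shares through a transition matrix. The sketch below uses three hypothetical groups (current / overdue / default) with invented transition rates, just to show the mechanics; the controls discussed in the record would act by reshaping this matrix.

```python
def evolve_shares(shares, trans, n_steps):
    """Propagate loan-portfolio group shares through a Markov transition
    matrix: shares_{t+1}[j] = sum_i shares_t[i] * trans[i][j]."""
    for _ in range(n_steps):
        shares = [sum(shares[i] * trans[i][j] for i in range(len(shares)))
                  for j in range(len(trans[0]))]
    return shares

# hypothetical groups: current / overdue / default (rates invented);
# default is modeled as absorbing
trans = [[0.90, 0.08, 0.02],
         [0.40, 0.40, 0.20],
         [0.00, 0.00, 1.00]]
shares = evolve_shares([1.0, 0.0, 0.0], trans, 12)
```

After 12 periods a sizeable share has been absorbed into default; tightening the approval procedure (credit scoring) would correspond to shrinking the off-diagonal "deterioration" probabilities in the first row.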

  15. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    A continuous-time, non-homogenous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.

  16. Markov chain solution of photon multiple scattering through turbid slabs.

    Science.gov (United States)

    Lin, Ying; Northrop, William F; Li, Xuesong

    2016-11-14

    This work introduces a Markov Chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov Chain model agrees with the commonly used Monte Carlo simulation for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov Chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions such as medical diagnosis, spray analysis, and atmospheric science.
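    The matrix-form idea can be illustrated with a much cruder model than Mie scattering: treat the photon's depth in the slab as a 1D Markov chain over layers, with "reflected" (exits the front face) and "transmitted" (exits the back face) as absorbing states, and obtain the fates by repeated matrix-vector products. The isotropic step probabilities and layer count are illustrative stand-ins for real phase functions.

```python
def slab_fates(n_layers, p_forward, n_iters=200):
    """Photon depth as a Markov chain over slab layers, with absorbing
    'reflected' (state 0) and 'transmitted' (last state) boundaries.
    Fates follow from iterating the row-stochastic transition matrix."""
    n = n_layers + 2
    probs = [[0.0] * n for _ in range(n)]
    probs[0][0] = probs[n - 1][n - 1] = 1.0        # absorbing boundaries
    for i in range(1, n - 1):
        probs[i][i + 1] = p_forward                 # scatter deeper
        probs[i][i - 1] = 1.0 - p_forward           # scatter back
    # start just inside the front face and iterate the chain
    state = [0.0] * n
    state[1] = 1.0
    for _ in range(n_iters):
        state = [sum(state[i] * probs[i][j] for i in range(n)) for j in range(n)]
    return state[0], state[n - 1]                   # (reflected, transmitted)

reflected, transmitted = slab_fates(n_layers=5, p_forward=0.5)
```

For a symmetric walk this reduces to the gambler's-ruin problem, so starting one layer in from the front of a 5-layer slab gives transmission 1/6 and reflection 5/6, values the matrix iteration reproduces without any random sampling.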

  17. The spectral method and ergodic theorems for general Markov chains

    International Nuclear Information System (INIS)

    Nagaev, S V

    2015-01-01

    We study the ergodic properties of Markov chains with an arbitrary state space and prove a geometric ergodic theorem. The method of the proof is new: it may be described as an operator method. Our main result is an ergodic theorem for Harris-Markov chains in the case when the return time to some fixed set has finite expectation. Our conditions for the transition function are more general than those used by Athreya-Ney and Nummelin. Unlike them, we impose restrictions not on the original transition function but on the transition function of an embedded Markov chain constructed from the return times to the fixed set mentioned above. The proof uses the spectral theory of linear operators on a Banach space
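    Geometric ergodicity, the content of the theorem above, is easy to observe numerically on a small chain: the total-variation distance between the chain's distribution and its stationary one shrinks by a roughly constant factor per step. The 3-state transition matrix below is invented for illustration.

```python
def tv_distance_decay(trans, init, n_steps):
    """Track the total-variation distance between the chain's distribution
    and its stationary distribution; for an ergodic finite chain this
    decays geometrically (rate = second-largest eigenvalue modulus)."""
    n = len(trans)
    # stationary distribution by power iteration from the uniform vector
    pi = [1.0 / n] * n
    for _ in range(500):
        pi = [sum(pi[i] * trans[i][j] for i in range(n)) for j in range(n)]
    dist, tvs = init[:], []
    for _ in range(n_steps):
        dist = [sum(dist[i] * trans[i][j] for i in range(n)) for j in range(n)]
        tvs.append(0.5 * sum(abs(d - p) for d, p in zip(dist, pi)))
    return tvs

# toy birth-death chain; stationary distribution is (0.25, 0.5, 0.25)
trans = [[0.50, 0.50, 0.00],
         [0.25, 0.50, 0.25],
         [0.00, 0.50, 0.50]]
tvs = tv_distance_decay(trans, [1.0, 0.0, 0.0], 20)
```

For this chain the distance halves each step, a concrete finite-state instance of the geometric rates that the spectral (operator) method establishes for general state spaces.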

  18. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining the grey model and the Markov model, the prediction of the corrosion rate of nuclear power plant pipelines was studied. The grey model was improved, and an optimized unbiased grey model was obtained. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
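    The grey half of a grey-Markov model is typically a GM(1,1) fit, which can be sketched with plain least squares; the Markov step, classifying and correcting the residuals, is omitted here. The function name, the test series and the use of the standard (not the paper's optimized unbiased) GM(1,1) formulation are all assumptions for illustration.

```python
import math

def gm11(x0):
    """Fit a standard GM(1,1) grey model, x0[k] + a*z[k] = b, by least
    squares on the background values z (means of consecutive cumulative
    sums), and return a predictor for any 0-based index of the series."""
    x1 = [sum(x0[:i + 1]) for i in range(len(x0))]         # cumulative sums
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, len(x0))]
    y = x0[1:]
    n = len(y)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = n * szz - sz * sz                                # normal equations
    a = -(n * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def predict(k):
        if k == 0:
            return x0[0]
        return (x0[0] - b / a) * (1.0 - math.exp(a)) * math.exp(-a * k)

    return predict

# near-exponential data is where GM(1,1) works best (illustrative series)
series = [10.0 * 1.1 ** k for k in range(6)]
predict = gm11(series)
next_value = predict(6)    # one-step-ahead forecast
```

On exactly exponential data the fitted development coefficient reproduces the series closely, and a grey-Markov scheme would then model the small remaining residuals with a Markov chain over error states.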

  19. Prediction of inspection intervals using the Markov analysis; Prediccion de intervalos de inspeccion utilizando analisis de Markov

    Energy Technology Data Exchange (ETDEWEB)

    Rea, R.; Arellano, J. [IIE, Calle Reforma 113, Col. Palmira, Cuernavaca, Morelos (Mexico)]. e-mail: rrea@iie.org.mx

    2005-07-01

    To cope with the unmanageable number of Markov states in systems with a large number of components, a modification of the Markov method, called truncated Markov analysis, is proposed, in which the dependence among component faults is assumed to be negligible. With it, the number of states grows linearly (not exponentially) with the number of components of the system, greatly simplifying the analysis. As an example, the proposed method was applied to the HPCS system of the CLV plant, considering its 18 main components. Each component is assumed to take three states: operational, with a hidden fault, and with a revealed fault. Additionally, the configuration of the HPCS system is taken into account by means of a dependability block diagram to estimate its unavailability at the system level. The results of the model proposed here are compared with other methods and approaches used to simplify Markov analysis. A modification of the inspection intervals of three components of the HPCS system is also proposed. This is based on the developed Markov model and on the maximum time allowed by the ASME code (NUREG-1482) for inspecting components of systems that are on standby in nuclear power plants. (Author)

  20. Analyzing the profit-loss sharing contracts with Markov model

    Directory of Open Access Journals (Sweden)

    Imam Wahyudi

    2016-12-01

    Full Text Available The purpose of this paper is to examine how to use a first-order Markov chain to build a reliable monitoring system for profit-loss sharing based contracts (PLS) as the mode of financing in Islamic banks, with censored continuous-time observations. The paper adopts longitudinal analysis within a first-order Markov chain framework. A Laplace transform was used, under a homogeneous continuous-time assumption, to generate the transition matrix from the discretized generator matrix. Various metrics, i.e. eigenvalues and eigenvectors, were used to test the first-order Markov chain assumption. A Cox semi-parametric model was also used to analyze the momentum and waiting-time effects as non-Markov behavior. The results show that the first-order Markov chain is powerful as a monitoring tool for Islamic banks. We find that waiting time significantly and negatively affected the present rating downgrade (upgrade). Likewise, the momentum covariate showed a negative effect. Finally, the results confirm that different origin ratings have different movement behavior. The paper explores the potential of the Markov chain framework as a risk management tool for Islamic banks. It provides valuable insight and an integrative model for banks to manage their borrower accounts. This model can be developed into a powerful early warning system to identify which borrowers need to be monitored intensively. Ultimately, it could increase the efficiency, productivity and competitiveness of Islamic banks in Indonesia. The analysis used only rating data. Further study should give additional information about the determinant factors of borrowers' rating movement by incorporating contract-related, bank-related, borrower-related and macroeconomic factors.
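
    The step of generating a transition matrix from a generator matrix under the homogeneous continuous-time assumption can be sketched with a truncated matrix-exponential series; the paper derives it via Laplace transforms, which is equivalent for a homogeneous chain. The three-rating generator below is hypothetical:

```python
# Sketch: transition matrix P(t) = exp(Q*t) from a continuous-time
# generator Q, via a truncated Taylor series. The 3-rating generator
# below is hypothetical, not the paper's estimated matrix.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, terms=30):
    """exp(Q*t) by truncated Taylor series (fine for small, well-scaled Q)."""
    n = len(Q)
    Qt = [[q * t for q in row] for row in Q]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, Qt)                    # (Qt)^k ...
        term = [[x / k for x in row] for row in term]  # ... / k!
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Hypothetical generator over ratings (good, watch, default); rows sum to 0.
Q = [
    [-0.20,  0.15,  0.05],
    [ 0.10, -0.40,  0.30],
    [ 0.00,  0.00,  0.00],   # default is absorbing
]

P = expm(Q, t=1.0)
print([[round(x, 4) for x in row] for row in P])
```

    Because rows of Q sum to zero, every row of P(t) sums to one, and the absorbing default state stays absorbing.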

  1. Application of Hidden Markov Models in Biomolecular Simulations.

    Science.gov (United States)

    Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar

    2017-01-01

    Hidden Markov models (HMMs) provide a framework to analyze large trajectories from biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert among each other with certain rates. HMMs simplify long-timescale trajectories for human comprehension and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
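
    As a toy illustration of decoding hidden states from observations, here is a minimal Viterbi sketch for a two-state discrete HMM; the states, probabilities, and observations are hypothetical, not the Met-enkephalin model from the chapter:

```python
# Minimal Viterbi sketch for a two-state discrete HMM, in the spirit of
# assigning trajectory frames to hidden conformational states. All
# states, probabilities, and observations below are hypothetical.

states = ("folded", "unfolded")
start = {"folded": 0.6, "unfolded": 0.4}
trans = {"folded": {"folded": 0.9, "unfolded": 0.1},
         "unfolded": {"folded": 0.2, "unfolded": 0.8}}
# Observations: coarse-grained RMSD bins "low"/"high".
emit = {"folded": {"low": 0.8, "high": 0.2},
        "unfolded": {"low": 0.3, "high": 0.7}}

def viterbi(obs):
    """Most probable hidden-state path for an observation sequence."""
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda r: V[-1][r] * trans[r][s])
            ptr[s] = prev
            col[s] = V[-1][prev] * trans[prev][s] * emit[s][o]
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["low", "low", "high", "high", "low"]))
```
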

  2. Hidden Markov processes theory and applications to biology

    CERN Document Server

    Vidyasagar, M

    2014-01-01

    This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. The book starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are t

  3. Transportation and concentration inequalities for bifurcating Markov chains

    DEFF Research Database (Denmark)

    Penda, S. Valère Bitseki; Escobar-Bach, Mikael; Guillin, Arnaud

    2017-01-01

    We investigate the transportation inequality for bifurcating Markov chains, which are a class of processes indexed by a regular binary tree. Fitting well models like cell growth, where each individual gives birth to exactly two offspring, we use transportation inequalities to provide useful...... concentration inequalities. We also study deviation inequalities for the empirical means under relaxed assumptions on the Wasserstein contraction of the Markov kernels. Applications to bifurcating nonlinear autoregressive processes are considered for point-wise estimates of the non-linear autoregressive...

  4. Dynamic modeling of presence of occupants using inhomogeneous Markov chains

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff; Iversen, Anne; Madsen, Henrik

    2014-01-01

    on time of day, and by use of a filter of the observations it is able to capture per-employee sequence dynamics. Simulations using this method are compared with simulations using homogeneous Markov chains and show a far better ability to reproduce key properties of the data. The method is based...... on inhomogeneous Markov chains where the transition probabilities are estimated using generalized linear models with polynomials, B-splines, and a filter of past observations as inputs. For treating the dispersion of the data series, a hierarchical model structure is used where one model is for low presence...

  5. Markov chain analysis of single spin flip Ising simulations

    International Nuclear Information System (INIS)

    Hennecke, M.

    1997-01-01

    The Markov processes defined by random and loop-based schemes for single spin flip attempts in Monte Carlo simulations of the 2D Ising model are investigated, by explicitly constructing their transition matrices. Their analysis reveals that loops over all lattice sites using a Metropolis-type single spin flip probability often do not define ergodic Markov chains, and have distorted dynamical properties even if they are ergodic. The transition matrices also enable a comparison of the dynamics of random versus loop spin selection and Glauber versus Metropolis probabilities
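
    The approach of explicitly constructing the transition matrix can be sketched on a tiny system: random-site single-spin-flip Metropolis dynamics on a 3-spin Ising ring, with ergodicity checked from powers of the matrix. The lattice size and inverse temperature are illustrative, far smaller than the 2D lattices analyzed in the paper:

```python
import itertools
import math

# Sketch: build the transition matrix of random-site single-spin-flip
# Metropolis dynamics for a 3-spin Ising ring, then check ergodicity
# (irreducibility) from powers of the matrix. Parameters illustrative.

N, beta = 3, 0.5
configs = list(itertools.product([-1, 1], repeat=N))
idx = {c: i for i, c in enumerate(configs)}

def energy(c):
    # Nearest-neighbour Ising energy on a ring.
    return -sum(c[i] * c[(i + 1) % N] for i in range(N))

M = len(configs)
P = [[0.0] * M for _ in range(M)]
for c in configs:
    i = idx[c]
    for site in range(N):              # each site proposed with prob 1/N
        f = list(c)
        f[site] = -f[site]             # flip one spin
        f = tuple(f)
        acc = min(1.0, math.exp(-beta * (energy(f) - energy(c))))
        P[i][idx[f]] += acc / N
    P[i][i] = 1.0 - sum(P[i])          # rejection mass stays put

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Irreducibility: all states communicate iff some power of (P + I)/2
# has strictly positive entries.
R = [[(P[i][j] + (i == j)) / 2.0 for j in range(M)] for i in range(M)]
power = R
for _ in range(M):
    power = mat_mul(power, R)
ergodic = all(x > 0 for row in power for x in row)
print("ergodic:", ergodic)
```
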

  6. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  7. Quantum Markov processes and applications in many-body systems

    International Nuclear Information System (INIS)

    Temme, P. K.

    2010-01-01

    This thesis is concerned with the investigation of quantum as well as classical Markov processes and their application in the field of strongly correlated many-body systems. A Markov process is a special kind of stochastic process, which is determined by an evolution that is independent of its history and only depends on the current state of the system. The application of Markov processes has a long history in the field of statistical mechanics and classical many-body theory. Not only are Markov processes used to describe the dynamics of stochastic systems, but they predominantly also serve as a practical method that allows for the computation of fundamental properties of complex many-body systems by means of probabilistic algorithms. The aim of this thesis is to investigate the properties of quantum Markov processes, i.e. Markov processes taking place in a quantum mechanical state space, and to gain a better insight into complex many-body systems by means thereof. Moreover, we formulate a novel quantum algorithm which allows for the computation of the thermal and ground states of quantum many-body systems. After a brief introduction to quantum Markov processes we turn to an investigation of their convergence properties. We find bounds on the convergence rate of the quantum process by generalizing geometric bounds found for classical processes. We generalize a distance measure that serves as the basis for our investigations, the chi-square divergence, to non-commuting probability spaces. This divergence allows for a convenient generalization of the detailed balance condition to quantum processes. We then devise the quantum algorithm that can be seen as the natural generalization of the ubiquitous Metropolis algorithm to simulate quantum many-body Hamiltonians. 
By this we intend to provide further evidence that a quantum computer can serve as a fully-fledged quantum simulator, which is not only capable of describing the dynamical evolution of quantum systems, but

  8. Bisimulation on Markov Processes over Arbitrary Measurable Spaces

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2014-01-01

    We introduce a notion of bisimulation on labelled Markov Processes over generic measurable spaces in terms of arbitrary binary relations. Our notion of bisimulation is proven to coincide with the coalgebraic definition of Aczel and Mendler in terms of the Giry functor, which associates with a mea...

  9. Detecting Faults By Use Of Hidden Markov Models

    Science.gov (United States)

    Smyth, Padhraic J.

    1995-01-01

    Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of time and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).

  10. On a Markov chain roulette-type game

    International Nuclear Information System (INIS)

    El-Shehawey, M A; El-Shreef, Gh A

    2009-01-01

    A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p_{01} = ρ, p_{Nj} = δ_{Nj}, p_{i,i+W} = q, p_{i,i-1} = p = 1 - q, with 1 ≤ W < N, 0 ≤ ρ ≤ 1, N - W < j ≤ N and i = 1, 2, ..., N - W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models from tumor growth and war with bargaining

  11. DEVELOPMENT OF REACTION-DRIVEN IONIC TRANSPORT MEMBRANES (ITMs) TECHNOLOGY: PHASE IV/BUDGET PERIOD 6 “Development of ITM Oxygen Technology for Integration in IGCC and Other Advanced Power Generation Systems”

    Energy Technology Data Exchange (ETDEWEB)

    David, Studer

    2012-03-01

    Air Products and Chemicals, along with development participants and in association with the U.S. Department of Energy, has made substantial progress in developing a novel air separation technology. Unlike conventional cryogenic processes, this method uses high-temperature ceramic membranes to produce high-purity oxygen. The membranes selectively transport oxygen ions with high flux and infinite theoretical selectivity. Reaction-driven ceramic membranes are fabricated from non-porous, multi-component metallic oxides, operate at temperatures typically over 700°C, and have exceptionally high oxygen flux and selectivity. Oxygen from low-pressure air permeates as oxygen ions through the ceramic membrane and is consumed through chemical reactions, thus creating a chemical driving force that pulls oxygen ions across the membrane at high rates. The oxygen reacts with a hydrocarbon fuel in a partial oxidation process to produce a hydrogen and carbon monoxide mixture – synthesis gas. This project expands the partial-oxidation scope of ITM technology beyond natural gas feed and investigates the potential for ITM reaction-driven technology to be used in conjunction with gasification and pyrolysis technologies to provide more economical routes for producing hydrogen and synthesis gas. This report presents an overview of the ITM reaction-driven development effort, including ceramic materials development, fabrication and testing of small-scale ceramic modules, ceramic modeling, and the investigation of gasifier integration schemes

  12. Using Markov Chains to predict the natural progression of diabetic retinopathy.

    Science.gov (United States)

    Srikanth, Priyanka

    2015-01-01

    To study the natural progression of diabetic retinopathy in patients with type 2 diabetes. This was an observational study of 153 cases with type 2 diabetes from 2010 to 2013. The state of each patient was noted at the end of each year, and transition matrices were developed to model movement between years. Patients who progressed to severe non-proliferative diabetic retinopathy (NPDR) were treated. Markov chains and the Chi-square test were used for statistical analysis. We modelled the transition of 153 patients from NPDR to blindness on an annual basis. At the end of year 3, we compared results from the Markov model versus actual data. The results from the Chi-square test confirmed that there was no statistically significant difference (P=0.70), which provided assurance that the model was robust for estimating mean sojourn times. The key finding was that a patient entering the system in the mild NPDR state is expected to stay in that state for 5y, followed by 1.07y in moderate NPDR, and to be in the severe NPDR state for 1.33y before moving into PDR for roughly 8y. It is therefore expected that such a patient entering the model in a state of mild NPDR will enter blindness after 15.29y. Patients stay for long periods in mild NPDR before transitioning into moderate NPDR. However, they move rapidly from moderate NPDR to proliferative diabetic retinopathy (PDR) and stay in that state for long periods before transitioning into blindness.
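
    The mean sojourn times quoted above follow from the annual self-transition probabilities via E[years in state i] = 1/(1 - p_ii). A sketch with hypothetical probabilities chosen to roughly reproduce the reported sojourn times:

```python
# Sketch: mean sojourn time in a Markov-chain state from its annual
# self-transition probability: E[years in state i] = 1 / (1 - p_ii).
# The probabilities below are hypothetical, chosen so the sojourn
# times are close to those reported in the abstract.

stay_prob = {
    "mild NPDR":     0.800,   # -> sojourn 5.00 y
    "moderate NPDR": 0.065,   # -> sojourn ~1.07 y
    "severe NPDR":   0.250,   # -> sojourn ~1.33 y
    "PDR":           0.875,   # -> sojourn 8.00 y
}

sojourn = {s: 1.0 / (1.0 - p) for s, p in stay_prob.items()}
for s, t in sojourn.items():
    print(f"{s}: {t:.2f} years")

# Expected years from entering mild NPDR until blindness (close to the
# abstract's 15.29 y; exact agreement would need their fitted matrix).
total = sum(sojourn.values())
print(f"total: {total:.2f} years")
```
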

  13. The use of Markov chains in forecasting wind speed: Matlab source code and applied case study

    Directory of Open Access Journals (Sweden)

    Ionuţ Alexandru Petre

    2017-01-01

    Full Text Available The ability to predict wind speed has an important role for the renewable energy industry, which relies on wind speed forecasts in order to calculate the power a wind farm can produce in an area. There are several well-known methods to predict wind speed, but in this paper we focus on short-term wind forecasting using Markov chains. Gaps can often be found in the time series of wind speed measurements, and repeating the measurements is usually not a valid option. This study shows that using Markov chains these gaps in the time series can be filled (generated) in an efficient way, but only when the missing data span a short period of time. The Matlab programs used in the case study are also included in the paper, presented and commented on by the authors. In the case study, data from a wind farm in Italy are used. The available data are average wind speeds at 10-minute intervals over the period 11/23/2005 - 4/27/2006.
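
    A minimal sketch of the gap-filling idea — fit a first-order Markov chain on binned wind speeds, then sample missing values from the transition distribution of the last observed bin. The series and bin width are hypothetical, not the Italian wind-farm data (the paper's programs are in Matlab; Python is used here):

```python
import random

# Sketch: fill short gaps in a wind-speed series by sampling from a
# first-order Markov chain fitted on binned speeds. Series and bin
# width below are hypothetical.

random.seed(7)
BIN = 1.0  # bin width in m/s

def to_bin(v):
    return int(v // BIN)

series = [3.2, 3.8, 4.1, None, None, 4.4, 5.0, 4.7, 4.2, None, 3.9, 3.5]

# Fit transition counts from consecutive observed values
# (a simplification: pairs spanning gaps are included).
trans = {}
observed = [v for v in series if v is not None]
for a, b in zip(observed, observed[1:]):
    trans.setdefault(to_bin(a), []).append(to_bin(b))

# Fill each gap by sampling the next bin given the last known bin,
# then taking the bin centre as the reconstructed speed.
filled = []
for v in series:
    if v is not None:
        filled.append(v)
    else:
        last_bin = to_bin(filled[-1])
        choices = trans.get(last_bin) or [last_bin]
        nxt = random.choice(choices)
        filled.append((nxt + 0.5) * BIN)

print([round(v, 2) for v in filled])
```
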

  14. Study on the Evolution of Weights on the Market of Competitive Products using Markov Chains

    Directory of Open Access Journals (Sweden)

    Daniel Mihai Amariei

    2016-10-01

    Full Text Available This paper aims to apply a Markov chain, via the Markov Process module of the software product WinQSB, to model the evolution of the market shares of five brands of athletic shoes.

  15. The explicit form of the rate function for semi-Markov processes and its contractions

    Science.gov (United States)

    Sughiyama, Yuki; Kobayashi, Testuya J.

    2018-03-01

    We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.

  16. Berman-Konsowa principle for reversible Markov jump processes

    NARCIS (Netherlands)

    Hollander, den W.Th.F.; Jansen, S.

    2013-01-01

    In this paper we prove a version of the Berman-Konsowa principle for reversible Markov jump processes on Polish spaces. The Berman-Konsowa principle provides a variational formula for the capacity of a pair of disjoint measurable sets. There are two versions, one involving a class of probability

  17. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    ) model [borges99data]. These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...

  18. Testing the Adequacy of a Semi-Markov Process

    Science.gov (United States)

    2015-09-17

    classical Brownian motion are two common examples of martingales. Submartingales and supermartingales are two extended classes of martingales. They... movements using Semi-Markov processes,” Tourism Management, Vol. 32, No. 4, 2011, pp. 844–851. [4] Titman, A. C. and Sharples, L. D., “Model

  19. Portfolio Optimization in a Semi-Markov Modulated Market

    International Nuclear Information System (INIS)

    Ghosh, Mrinal K.; Goswami, Anindya; Kumar, Suresh K.

    2009-01-01

    We address a portfolio optimization problem in a semi-Markov modulated market. We study both the terminal expected utility optimization on finite time horizon and the risk-sensitive portfolio optimization on finite and infinite time horizon. We obtain optimal portfolios in relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for finite horizon problem

  20. Harmonic spectral components in time sequences of Markov correlated events

    Science.gov (United States)

    Mazzetti, Piero; Carbone, Anna

    2017-07-01

    The paper concerns the analysis of the conditions allowing time sequences of Markov correlated events to give rise to a line power spectrum of relevant physical interest. It is found that by specializing the Markov matrix to represent closed-loop sequences of events with arbitrary distribution, generated in a steady physical condition, a large set of line spectra, covering all possible frequency values, is obtained. The amplitude of the spectral lines is given by a matrix equation based on a generalized Markov matrix involving the Fourier transform of the distribution functions representing the time intervals between successive events of the sequence. The paper is a complement to a previous work where a general expression for the continuous power spectrum was given. In that case the Markov matrix was left in a more general form, thus preventing the possibility of finding line spectra of physical interest. The present extension is also suggested by the interest in explaining the emergence of a broad set of waves found in electro- and magneto-encephalograms, whose frequency ranges from 0.5 to about 40 Hz, in terms of the effects produced by chains of firing neurons within the complex neural network of the brain. An original model based on synchronized closed-loop sequences of firing neurons is proposed, and a few numerical simulations are reported as an application of the above-cited equation.

  1. Elements of the theory of Markov processes and their applications

    CERN Document Server

    Bharucha-Reid, A T

    2010-01-01

    This graduate-level text and reference in probability, with numerous applications to several fields of science, presents a nonmeasure-theoretic introduction to the theory of Markov processes. The work also covers mathematical models based on the theory, employed in various applied fields. Prerequisites are a knowledge of elementary probability theory, mathematical statistics, and analysis. Appendixes. Bibliographies. 1960 edition.

  2. A Parallel Solver for Large-Scale Markov Chains

    Czech Academy of Sciences Publication Activity Database

    Benzi, M.; Tůma, Miroslav

    2002-01-01

    Roč. 41, - (2002), s. 135-153 ISSN 0168-9274 R&D Projects: GA AV ČR IAA2030801; GA ČR GA101/00/1035 Keywords : parallel preconditioning * iterative methods * discrete Markov chains * generalized inverses * singular matrices * graph partitioning * AINV * Bi-CGSTAB Subject RIV: BA - General Mathematics Impact factor: 0.504, year: 2002

  3. Markov chain model for demersal fish catch analysis in Indonesia

    Science.gov (United States)

    Firdaniza; Gusriani, N.

    2018-03-01

    As an archipelagic country, Indonesia has considerable potential fishery resources. One fish resource with high economic value is demersal fish, which live in habitats on muddy seabeds and are scattered throughout the Indonesian seas. Demersal fish production in each of Indonesia's Fisheries Management Areas (FMAs) varies each year. In this paper we discuss a Markov chain model for analyzing demersal fish yields across all of Indonesia's Fisheries Management Areas. Data on demersal fish catches in every FMA in 2005-2014 were obtained from the Directorate of Capture Fisheries. From these data a transition probability matrix is determined by counting transitions between catches below and above the median. The Markov chain model of the demersal fish catch data is ergodic, so its limiting probabilities can be determined. Predicted demersal fishing yields were obtained by combining the limiting probabilities with the average catches below and above the median. The results show that for 2018, and in the long term, demersal fishing yields in most FMAs will be below the median value.
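
    The prediction scheme described above — limiting probabilities of an ergodic two-state (below/above median) chain combined with the average catch in each state — can be sketched with hypothetical numbers:

```python
# Sketch: limiting (stationary) distribution of a two-state ergodic
# chain (catch below/above the median) by power iteration, and a point
# prediction as the probability-weighted mix of the below- and
# above-median average catches. All numbers are hypothetical.

P = [[0.7, 0.3],   # below-median -> (below, above)
     [0.4, 0.6]]   # above-median -> (below, above)

pi = [0.5, 0.5]
for _ in range(200):                 # power iteration converges quickly
    pi = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in (0, 1)]

avg_below, avg_above = 120.0, 310.0  # hypothetical tonnes
prediction = pi[0] * avg_below + pi[1] * avg_above
print([round(x, 4) for x in pi], round(prediction, 1))
```

    For this hypothetical matrix the limiting distribution is (4/7, 3/7), i.e. the chain spends most of its time below the median, mirroring the abstract's long-term finding.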

  4. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...

  5. On a saddlepoint approximation to the Markov binomial distribution

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    A nonstandard saddlepoint approximation to the distribution of a sum of Markov dependent trials is introduced. The relative error of the approximation is studied, not only for the number of summands tending to infinity, but also for the parameter approaching the boundary of its definition range...

  6. Envelopes of Sets of Measures, Tightness, and Markov Control Processes

    International Nuclear Information System (INIS)

    Gonzalez-Hernandez, J.; Hernandez-Lerma, O.

    1999-01-01

    We introduce upper and lower envelopes for sets of measures on an arbitrary topological space, which are then used to give a tightness criterion. These concepts are applied to show the existence of optimal policies for a class of Markov control processes

  7. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...

  8. The How and Why of Interactive Markov Chains

    NARCIS (Netherlands)

    Hermanns, H.; Katoen, Joost P.; de Boer, F.S; Bonsangue, S.H.; Leuschel, M

    2010-01-01

    This paper reviews the model of interactive Markov chains (IMCs, for short), an extension of labelled transition systems with exponentially delayed transitions. We show that IMCs are closed under parallel composition and hiding, and show how IMCs can be compositionally aggregated prior to analysis

  9. Performance criteria for graph clustering and Markov cluster experiments

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    In [1] a cluster algorithm for graphs was introduced called the Markov cluster algorithm or MCL algorithm. The algorithm is based on simulation of (stochastic) flow in graphs by means of alternation of two operators, expansion and inflation. The results in [2] establish an intrinsic

  10. Estimation of the workload correlation in a Markov fluid queue

    NARCIS (Netherlands)

    Kaynar, B.; Mandjes, M.R.H.

    2013-01-01

    This paper considers a Markov fluid queue, focusing on the correlation function of the stationary workload process. A simulation-based computation technique is proposed, which relies on a coupling idea. Then an upper bound on the variance of the resulting estimator is given, which reveals how the

  11. Cascade probabilistic function and the Markov's processes. Chapter 1

    International Nuclear Information System (INIS)

    2002-01-01

    In the Chapter 1 the physical and mathematical descriptions of radiation processes are carried out. The relation of the cascade probabilistic functions (CPF) for electrons, protons, alpha-particles and ions with Markov's chain is shown. The algorithms for CPF calculation with accounting energy losses are given

  12. Indefinite metric, quantum axiomatics, and the Markov property

    International Nuclear Information System (INIS)

    Brownell, F.H.

    1978-01-01

    In answer to a remark of Jauch, a set of axioms for an 'indefinite metric' formulation of quantum electro-dynamics is presented, and the connection with orthocomplementation noted. Here a strict version of the Markov property apparently fails, leading to a novel interpretation. (Auth.)

  13. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. Journal of Earth System Science, Volume 126, Issue 1, February 2017 ...

  14. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  15. Social security as Markov equilibrium in OLG models: A note

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin

    2011-01-01

    I refine and extend the Markov perfect equilibrium of the social security policy game in Forni (2005) for the special case of logarithmic utility. Under the restriction that the policy function be continuous, instead of differentiable, the equilibrium is globally well defined and its dynamics...

  16. Efficient Approximation of Optimal Control for Markov Games

    DEFF Research Database (Denmark)

    Fearnley, John; Rabe, Markus; Schewe, Sven

    2011-01-01

    We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately...

  17. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, P.; van Doorn, E.A.

    2001-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\Pi$ is the matrix $D \equiv \int_0^{\infty} (P(t)-\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a

  18. The deviation matrix of a continuous-time Markov chain

    NARCIS (Netherlands)

    Coolen-Schrijner, Pauline; van Doorn, Erik A.

    2002-01-01

    The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\Pi$ is the matrix $D \equiv \int_0^{\infty} (P(t)-\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a

  19. Performability assessment by model checking of Markov reward models

    NARCIS (Netherlands)

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility

  20. A Markov Model for Common-Cause Failures

    DEFF Research Database (Denmark)

    Platz, Ole

    1984-01-01

    A continuous time four-state Markov chain is shown to cover several of the models that have been used for describing dependencies between failures of components in redundant systems. Among these are the models derived by Marshall and Olkin and by Freund and models for one-out-of-three and two...

  1. Exact goodness-of-fit tests for Markov chains.

    Science.gov (United States)

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
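A simplified illustration of the Monte Carlo idea: the sketch below is a parametric-bootstrap analogue, not the exact conditional test of the paper (which conditions on the sufficient statistics). The chain, sequence length and number of replicates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_counts(seq, k):
    C = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C

def chi2_stat(seq, P, k):
    """Pearson-type discrepancy between observed and expected transition counts."""
    C = transition_counts(seq, k)
    E = C.sum(axis=1, keepdims=True) * P
    mask = E > 0
    return ((C - E) ** 2 / np.where(mask, E, 1))[mask].sum()

def simulate(P, n, start, rng):
    seq = [start]
    for _ in range(n - 1):
        seq.append(rng.choice(len(P), p=P[seq[-1]]))
    return seq

# Hypothetical first-order chain and an observed sequence drawn from it.
P = np.array([[0.8, 0.2], [0.4, 0.6]])
obs = simulate(P, 150, 0, rng)

# Monte Carlo p-value: how often does a chain simulated under P look
# at least as discrepant as the observed one?
t_obs = chi2_stat(obs, P, 2)
t_sim = [chi2_stat(simulate(P, 150, 0, rng), P, 2) for _ in range(300)]
p_value = float(np.mean([t >= t_obs for t in t_sim]))
print(p_value)
```

Because the observed sequence really is drawn from P, the p-value should not be extreme; the same machinery works with any free choice of test statistic, as the paper emphasizes.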

  2. Some remarks about the thermodynamics of discrete finite Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Siboni, S. [Trento Univ. (Italy). Facoltà di Ingegneria, Dip. di Ingegneria dei Materiali]

    1998-08-01

    The author proposes a simple way to define a Hamiltonian for aperiodic Markov chains and to apply these chains in a thermodynamic context. The basic thermodynamic functions are calculated accordingly. A quite intriguing and nontrivial application to stochastic automata is also pointed out.

  3. Complete Axiomatization for the Bisimilarity Distance on Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2016-01-01

    In this paper we propose a complete axiomatization of the bisimilarity distance of Desharnais et al. for the class of finite labelled Markov chains. Our axiomatization is given in the style of a quantitative extension of equational logic recently proposed by Mardare, Panangaden, and Plotkin (LICS...

  4. Adiabatic condition and the quantum hitting time of Markov chains

    International Nuclear Information System (INIS)

    Krovi, Hari; Ozols, Maris; Roland, Jeremie

    2010-01-01

    We present an adiabatic quantum algorithm for the abstract problem of searching marked vertices in a graph, or spatial search. Given a random walk (or Markov chain) P on a graph with a set of unknown marked vertices, one can define a related absorbing walk P ' where outgoing transitions from marked vertices are replaced by self-loops. We build a Hamiltonian H(s) from the interpolated Markov chain P(s)=(1-s)P+sP ' and use it in an adiabatic quantum algorithm to drive an initial superposition over all vertices to a superposition over marked vertices. The adiabatic condition implies that, for any reversible Markov chain and any set of marked vertices, the running time of the adiabatic algorithm is given by the square root of the classical hitting time. This algorithm therefore demonstrates a novel connection between the adiabatic condition and the classical notion of hitting time of a random walk. It also significantly extends the scope of previous quantum algorithms for this problem, which could only obtain a full quadratic speedup for state-transitive reversible Markov chains with a unique marked vertex.
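The construction described above, the absorbing walk P' and the interpolated chain P(s) = (1-s)P + sP', together with the classical hitting time it is compared against, can be sketched for a toy graph (the 4-cycle and marked vertex are illustrative choices, not from the paper):

```python
import numpy as np

# Symmetric random walk on a 4-cycle (an illustrative choice).
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
marked = {0}

# Absorbing walk P': outgoing transitions from marked vertices become self-loops.
Pp = P.copy()
for m in marked:
    Pp[m] = 0.0
    Pp[m, m] = 1.0

def interpolated(s):
    """P(s) = (1 - s) P + s P' -- the chain driven in the adiabatic algorithm."""
    return (1 - s) * P + s * Pp

# Classical hitting time of the marked set: solve h = 1 + P h on unmarked states.
un = [i for i in range(4) if i not in marked]
A = np.eye(len(un)) - P[np.ix_(un, un)]
h = np.linalg.solve(A, np.ones(len(un)))
print(h)   # expected steps to reach vertex 0 from vertices 1, 2, 3
```

For this cycle the hitting times are (3, 4, 3); the adiabatic running time discussed in the abstract scales as the square root of such classical hitting times.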

  5. Exploring Mass Perception with Markov Chain Monte Carlo

    Science.gov (United States)

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  6. Model checking conditional CSL for continuous-time Markov chains

    DEFF Research Database (Denmark)

    Gao, Yang; Xu, Ming; Zhan, Naijun

    2013-01-01

    In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To this end, we extend Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional...

  7. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...

  8. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
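A minimal sketch of the kind of calculation involved: steady-state availability of a plant modelled as a continuous-time Markov chain with a small discrete state space. The three states and all rates below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

# Illustrative 3-state plant model (rates per day are assumptions):
# 0 = full power, 1 = derated operation, 2 = forced outage.
Q = np.array([[-0.05,  0.03,  0.02],
              [ 0.20, -0.30,  0.10],
              [ 0.50,  0.00, -0.50]])

# Stationary distribution: pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # plant delivers power in states 0 and 1
print(pi, availability)
```

The stationary probabilities also quantify the importance of individual states, mirroring the importance measures mentioned in the abstract.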

  9. On the Metric-based Approximate Minimization of Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand

    2018-01-01

    In this paper we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal distance to the original. The metric considered...

  10. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear...

  11. Students' Progress throughout Examination Process as a Markov Chain

    Science.gov (United States)

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical methods in economics at the Czech university of life sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…

  12. Markov chains with quasitoeplitz transition matrix: first zero hitting

    Directory of Open Access Journals (Sweden)

    Alexander M. Dukhovny

    1989-01-01

    Full Text Available This paper continues the investigation of Markov Chains with a quasitoeplitz transition matrix. Generating functions of first zero hitting probabilities and mean times are found by the solution of special Riemann boundary value problems on the unit circle. Duality is discussed.

  13. On the Total Variation Distance of Semi-Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand

    2015-01-01

    Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...

  14. On the Metric-Based Approximate Minimization of Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand

    2017-01-01

    We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais at al...

  15. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
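The simplest instance of the unavailability estimate described above, making the failed state absorbing (no repair) and asking whether the component is absorbed before the mission ends, can be sketched as follows. The failure rate and mission time are illustrative assumptions; for a single exponential component the answer is also available analytically, which gives a check.

```python
import math
import random

random.seed(1)

# Forward Monte Carlo for unavailability with an absorbing failed state:
# repair is not permitted, so each trial asks whether the component fails
# before the mission time T.
LAM = 0.01    # failure rate per hour (illustrative assumption)
T = 100.0     # mission time in hours

def trial():
    return 1.0 if random.expovariate(LAM) <= T else 0.0

n = 20000
unavailability = sum(trial() for _ in range(n)) / n
exact = 1.0 - math.exp(-LAM * T)   # analytic check: 1 - e^{-lambda T}
print(unavailability, exact)
```

The adjoint weighting discussed in the abstract would reduce the variance of exactly this kind of estimator.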

  16. Book Review: "Hidden Markov Models for Time Series: An ...

    African Journals Online (AJOL)

    Hidden Markov Models for Time Series: An Introduction using R. by Walter Zucchini and Iain L. MacDonald. Chapman & Hall (CRC Press), 2009. Full Text: EMAIL FULL TEXT EMAIL FULL TEXT · DOWNLOAD FULL TEXT DOWNLOAD FULL TEXT · http://dx.doi.org/10.4314/saaj.v10i1.61717 · AJOL African Journals Online.

  17. K­MEANS CLUSTERING FOR HIDDEN MARKOV MODEL

    NARCIS (Netherlands)

    Perrone, M.P.; Connell, S.D.

    2004-01-01

    An unsupervised k­means clustering algorithm for hidden Markov models is described and applied to the task of generating subclass models for individual handwritten character classes. The algorithm is compared to a related clustering method and shown to give a relative change in the error rate of as

  18. The Candy model revisited: Markov properties and inference

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); R.S. Stoica

    2001-01-01

    textabstractThis paper studies the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present some

  19. Embedding a State Space Model Into a Markov Decision Process

    DEFF Research Database (Denmark)

    Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren

    2011-01-01

    In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models...
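For a finite state and action space as described above, an optimal policy can be found by standard value iteration; a minimal sketch with randomly generated transition probabilities and rewards (all numbers are illustrative, in the paper's setting states would be trait levels and transitions would come from biological models):

```python
import numpy as np

# Small finite MDP: 3 states, 2 actions, discount factor gamma.
n_s, n_a, gamma = 3, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(n_s, n_a))         # expected reward R[s, a]

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(n_s)
for _ in range(500):
    Q = R + gamma * P @ V          # state-action values: r + gamma * E[V(s')]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=1)          # greedy policy at convergence
print(V, policy)
```

With rewards in [0, 1], the optimal values are bounded by 1/(1 - gamma), which provides a quick sanity check.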

  20. Markov queue game with virtual reality strategies | Nwobi-Okoye ...

    African Journals Online (AJOL)

    A non-cooperative Markov game with several unique characteristics was introduced. Some of these characteristics include: the existence of a single-phase multi-server queuing model and a Markovian transition matrix (or matrices) for each game, and the introduction of virtual situations (virtual reality) or dummies to improve the chances ...

  1. Critical Age-Dependent Branching Markov Processes and their ...

    Indian Academy of Sciences (India)

    This paper studies: (i) the long-time behaviour of the empirical distribution of age and normalized position of an age-dependent critical branching Markov process conditioned on non-extinction; and (ii) the super-process limit of a sequence of age-dependent critical branching Brownian motions.

  2. On Characterisation of Markov Processes Via Martingale Problems

    Indian Academy of Sciences (India)

    This extension is used to improve on a criterion for a probability measure to be invariant for the semigroup associated with the Markov process. We also give examples of martingale problems that are well-posed in the class of solutions which are continuous in probability but for which no r.c.l.l. solution exists.

  3. Markov Stochastic Technique to Determine Galactic Cosmic Ray ...

    Indian Academy of Sciences (India)

    A new numerical model of particle propagation in the Galaxy has been developed, which allows the study of cosmic-ray production and propagation in 2D. The model has been used to solve cosmic ray diffusive transport equation with a complete network of nuclear interactions using the time backward Markov stochastic ...

  4. A theoretical Markov chain model for evaluating correctional ...

    African Journals Online (AJOL)

    In this paper a stochastic method is applied in the study of the long time effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is Markov chain, which uses past history to predict the state of a system in the future. A model is developed for comparing the ...

  5. Optimal Number of States in Hidden Markov Models and its ...

    African Journals Online (AJOL)

    In this paper, Hidden Markov Model is applied to model human movements as to facilitate an automatic detection of the same. A number of activities were simulated with the help of two persons. The four movements considered are walking, sitting down-getting up, fall while walking and fall while standing. The data is ...

  6. When are two Markov chains the same? | Cowen | Quaestiones ...

    African Journals Online (AJOL)

    Given two one-sided Markov chains, the authors illustrate a procedure for ascertaining whether they are essentially the same. Precisely, they show how one can determine whether they are block-isomorphic. An application to hydrology is investigated with an example. Quaestiones Mathematicae 23(2000), 507–513 ...

  7. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  8. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined or at least strongly influenced by decisions made during the design and development phases of the project. As a result OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
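The core mechanic of such a model, accumulating expected cost as the vehicle moves between processing states with given probabilities, can be sketched in a few lines. The states, transition probabilities and per-visit costs below are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical per-flight processing states for a reusable vehicle:
# 0 = routine turnaround, 1 = minor repair, 2 = major overhaul.
P = np.array([[0.7, 0.2, 0.1],
              [0.5, 0.4, 0.1],
              [0.8, 0.1, 0.1]])
cost = np.array([1.0, 3.0, 10.0])   # cost units per state visit (assumed)

# Expected cumulative O&S cost over N flights, starting in the routine state.
N = 100
dist = np.zeros(3)
dist[0] = 1.0
total = 0.0
for _ in range(N):
    total += dist @ cost   # expected cost this flight
    dist = dist @ P        # propagate the state distribution
print(total)
```

The per-flight expected cost converges to the stationary average, so the cumulative total grows roughly linearly in the number of flights.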

  9. Markov chain Monte Carlo methods in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Hugtenburg, R.P.

    2001-01-01

    The Markov chain method can be used to incorporate measured data in Monte Carlo based radiotherapy treatment planning. This paper shows that convergence to the measured data, within the target precision, is achievable. Relative output factors for blocked fields and oblique beams are shown to compare well with independent measurements according to the same criterion. (orig.)

  10. Markov LIMID processes for representing and solving renewal problems

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Kristensen, Anders Ringgaard; Nilsson, Dennis

    2014-01-01

    to model a Markov Limid Process, where each TemLimid represents a macro action. Algorithms are presented to find optimal plans for a sequence of such macro actions. Use of algorithms is illustrated based on an extended version of an example from pig production originally used to introduce the Limid concept...

  11. An integral equation approach to the interval reliability of systems modelled by finite semi-Markov processes

    International Nuclear Information System (INIS)

    Csenki, A.

    1995-01-01

    The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The system of implementation is the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation

  12. Local and global statistical dynamical properties of chaotic Markov analytic maps and repellers: A coarse grained and spectral perspective

    International Nuclear Information System (INIS)

    MacKernan, Donal; Basios, Vasileios

    2009-01-01

    The statistical properties of chaotic Markov analytic maps and equivalent repellers are investigated through matrix representations of the Frobenius-Perron operator (U). The associated basis sets are constructed using Chebyshev functions and Markov partitions which can be tailored to examine statistical dynamical properties associated with observables having support over local regions or, for example, about periodic orbits. The decay properties of the corresponding time correlation functions are given by an analytic expression for the spectra of U which is expected to be valid for a much larger class of systems than that studied here. An explicit and general expression is also derived for the correction factor to the dynamical zeta functions occurring when analytic function spaces are not invariant under U.

  13. The Role of Consumer Confidence as a Leading Indicator on Stock Returns: A Markov Switching Approach

    Directory of Open Access Journals (Sweden)

    Koy AYBEN

    2017-04-01

    Full Text Available Investors' psychological and emotional factors lead to irrationality in financial decision making and to anomalies in prices. Investor sentiment and psychology help to elucidate phenomena in financial markets that cannot be explained by traditional theory. The aim of this study is two-fold: it investigates whether mutual regime-switching behavior exists between the consumer indices and the equity index, and it examines their dynamics in response to each other in different regimes. This study applies the Markov regime-switching model to monthly data from the BIST100 Return Index, Bloomberg Confidence Index, TUIK Confidence Index and Real Sector Confidence Index for the period between 2007:01 and 2016:06. The results indicate that even when the consumer indices give negative signals, the capital market still gains during normal periods of the economy; only in a recession or an expansion regime do the indices move in the same direction.

  14. Hidden Markov models applied to a subsequence of the Xylella fastidiosa genome

    Directory of Open Access Journals (Sweden)

    Silva Cibele Q. da

    2003-01-01

    Full Text Available Dependencies in DNA sequences are frequently modeled using Markov models. However, Markov chains cannot account for heterogeneity that may be present in different regions of the same DNA sequence. Hidden Markov models are more realistic than Markov models since they allow for the identification of heterogeneous regions of a DNA sequence. In this study we present an application of hidden Markov models to a subsequence of the Xylella fastidiosa DNA data. We found that a three-state model provides a good description for the data considered.
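The likelihood computation underlying such HMM analyses is the forward algorithm; a tiny sketch for a two-state model over the DNA alphabet (all parameter values are illustrative assumptions, not parameters fitted to the Xylella fastidiosa data):

```python
import numpy as np

# Two hidden states modelling heterogeneous regions of a DNA sequence.
A = np.array([[0.9, 0.1],     # state transition matrix
              [0.2, 0.8]])
# Emission probabilities over A, C, G, T for each hidden state:
B = np.array([[0.4, 0.1, 0.1, 0.4],    # AT-rich region (assumed)
              [0.1, 0.4, 0.4, 0.1]])   # GC-rich region (assumed)
pi = np.array([0.5, 0.5])

seq = "ATGCGCATAT"
idx = {"A": 0, "C": 1, "G": 2, "T": 3}
obs = [idx[c] for c in seq]

# Forward recursion: alpha_t(i) = P(o_1..o_t, state_t = i).
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
likelihood = alpha.sum()
print(likelihood)
```

Comparing such likelihoods across models with different numbers of states is how one would conclude, as the study does, that a three-state model describes the data well.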

  15. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    Science.gov (United States)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method which is physically consistent with the hydrologic processes that can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: In step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising limb increments randomly sampled from a Gamma distribution and the falling limb modelled as exponential recession, and in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
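Step 1 of the method, a two-state Markov chain switching between Gamma-distributed rising-limb increments and exponential recession, can be sketched as follows. The transition probabilities, recession constant and Gamma parameters are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-state (rise/recede) daily flow generator, step 1 of the method.
P = {"rise":   {"rise": 0.4, "recede": 0.6},
     "recede": {"rise": 0.3, "recede": 0.7}}   # assumed transition probabilities
K = 0.9                    # exponential recession constant (assumed)
SHAPE, SCALE = 2.0, 5.0    # Gamma parameters for rising-limb increments (assumed)

def generate(n_days, q0=10.0):
    q, state, flows = q0, "recede", []
    for _ in range(n_days):
        if state == "rise":
            q += rng.gamma(SHAPE, SCALE)   # random increment on the rising limb
        else:
            q *= K                          # exponential recession on the falling limb
        flows.append(q)
        state = "rise" if rng.random() < P[state]["rise"] else "recede"
    return np.array(flows)

flows = generate(365)
print(flows.min(), flows.max())
```

In the full method, sequences like this would then be fed to the KNN bootstrap resampler of step 2.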

  16. Statistical identification with hidden Markov models of large order splitting strategies in an equity market

    Science.gov (United States)

    Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.

    2010-07-01

    Large trades in a financial market are usually split into smaller parts and traded incrementally over extended periods of time. We address these large trades as hidden orders. In order to identify and characterize hidden orders, we fit hidden Markov models to the time series of the sign of the tick-by-tick inventory variation of market members of the Spanish Stock Exchange. Our methodology probabilistically detects trading sequences, which are characterized by a significant majority of buy or sell transactions. We interpret these patches of sequential buying or selling transactions as proxies of the traded hidden orders. We find that the time, volume and number of transaction size distributions of these patches are fat tailed. Long patches are characterized by a large fraction of market orders and a low participation rate, while short patches have a large fraction of limit orders and a high participation rate. We observe the existence of a buy-sell asymmetry in the number, average length, average fraction of market orders and average participation rate of the detected patches. The detected asymmetry is clearly dependent on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica et al (2008 Phys. Rev. E 77 036110), and we conclude that the former ones can be interpreted as a partition of the latter ones.

  17. A Hidden Markov Model for Analysis of Frontline Veterinary Data for Emerging Zoonotic Disease Surveillance

    Science.gov (United States)

    Robertson, Colin; Sawford, Kate; Gunawardana, Walimunige S. N.; Nelson, Trisalyn A.; Nathoo, Farouk; Stephen, Craig

    2011-01-01

    Surveillance systems tracking health patterns in animals have potential for early warning of infectious disease in humans, yet there are many challenges that remain before this can be realized. Specifically, there remains the challenge of detecting early warning signals for diseases that are not known or are not part of routine surveillance for named diseases. This paper reports on the development of a hidden Markov model for analysis of frontline veterinary sentinel surveillance data from Sri Lanka. Field veterinarians collected data on syndromes and diagnoses using mobile phones. A model for submission patterns accounts for both sentinel-related and disease-related variability. Models for commonly reported cattle diagnoses were estimated separately. Region-specific weekly average prevalence was estimated for each diagnosis and partitioned into normal and abnormal periods. Visualization of state probabilities was used to indicate areas and times of unusual disease prevalence. The analysis suggests that hidden Markov modelling is a useful approach for surveillance datasets from novel populations and/or with little historical baseline data. PMID:21949763

  18. Markov Chain-Based Acute Effect Estimation of Air Pollution on Elder Asthma Hospitalization

    Directory of Open Access Journals (Sweden)

    Li Luo

    2017-01-01

    Full Text Available Background. Asthma causes a substantial economic and health care burden and is susceptible to air pollution; for elderly asthma patients (older than 65) the phenomenon is more significant. The aim of this study is to investigate the Markov-based acute effects of air pollution on elderly asthma hospitalizations, in the form of transition probabilities. Methods. A retrospective, population-based study design was used to assess temporal patterns in hospitalizations for asthma in a region of Sichuan province, China. Approximately 12 million residents were covered during this period. Relative risk analysis and a Markov chain model were employed for daily hospitalization state estimation. Results. Among PM2.5, PM10, NO2 and SO2, only SO2 was significant. When air pollution is severe, the transition probability from a low-admission state (previous day) to a high-admission state (next day) is 35.46%, while it is 20.08% when air pollution is mild. In particular, for the female-cold subgroup (female admissions in the cold season), the counterparts are 30.06% and 0.01%, respectively. Conclusions. SO2 was a significant risk factor for elderly asthma hospitalization. When air pollution worsened, the transition probabilities from each state to high-admission states increased dramatically, most evidently in the female-cold subgroup. Based on our work, admission amount forecasting, asthma intervention, and corresponding healthcare allocation can be carried out.
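The maximum-likelihood estimate of such transition probabilities is simply the normalized matrix of observed day-to-day moves; a minimal sketch on a hypothetical sequence of daily admission states (the state labels and data are invented for illustration):

```python
import numpy as np

# Hypothetical daily admission states: 0 = low admissions, 1 = high admissions.
days = [0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1]

# Count observed day-to-day transitions, then normalize each row.
C = np.zeros((2, 2))
for a, b in zip(days[:-1], days[1:]):
    C[a, b] += 1
P = C / C.sum(axis=1, keepdims=True)
print(P)   # P[0, 1] is the estimated low -> high transition probability
```

Estimating one such matrix per pollution regime (severe vs. mild) is what produces the contrasted transition probabilities reported in the abstract.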

  19. Crop Growing Periods and Irrigation Needs in Bahia State North ...

    African Journals Online (AJOL)

    Markov chain probabilities of days with dry and wet soil were computed for each decade of the year. Soil moisture averages and probabilities were used to determine the optimum crop growing periods at the stations. The amounts of supplementary irrigation necessary to maintain the soil moisture above selected levels ...

  20. The semi-Markov process. Generalizations and calculation rules for application in the analysis of systems

    International Nuclear Information System (INIS)

    Hirschmann, H.

    1983-06-01

    The consequences of the basic assumptions of the semi-Markov process, as defined from a homogeneous renewal process with a stationary Markov condition, are reviewed. The notion of the semi-Markov process is generalized by redefining it from a nonstationary Markov renewal process. For both the non-generalized and the generalized case a representation of the first-order conditional state probabilities is derived in terms of the transition probabilities of the Markov renewal process. Some useful calculation rules (regeneration rules) are derived for the conditional state probabilities of the semi-Markov process. Compared to the semi-Markov process in its usual definition, the generalized process allows the analysis of a larger class of systems. For instance, systems with arbitrarily distributed lifetimes of their components can be described. It is also possible to describe systems which are modified over time by external forces or manipulations. (Auth.)
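The defining feature of a semi-Markov process, sojourn times with arbitrary (non-exponential) distributions, is easy to exercise by simulation. A minimal two-state sketch estimating interval availability; the Weibull and lognormal sojourn distributions and their parameters are illustrative assumptions:

```python
import random

random.seed(7)

# Two-state semi-Markov sketch: 0 = working, 1 = under repair.
# Sojourn times need not be exponential: here working times are Weibull
# and repair times lognormal (both choices are illustrative assumptions).
def sojourn(state):
    if state == 0:
        return random.weibullvariate(100.0, 1.5)   # working time, hours
    return random.lognormvariate(1.0, 0.5)         # repair time, hours

def simulate(horizon):
    t, state, up_time = 0.0, 0, 0.0
    while t < horizon:
        d = min(sojourn(state), horizon - t)   # truncate the last sojourn
        if state == 0:
            up_time += d
        t += d
        state = 1 - state                      # alternate working <-> repair
    return up_time / horizon                   # interval availability estimate

print(simulate(10_000.0))
```

With mean working times near 90 hours and mean repairs near 3 hours, the estimated availability comes out a little below 1.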

  1. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Vol. 46 (2010), No. 3, pp. 558-570. ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113. Institutional research plan: CEZ:AV0Z10750506. Keywords: finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.461, year: 2010. http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  2. Active Learning of Markov Decision Processes for System Verification

    DEFF Research Database (Denmark)

    Chen, Yingke; Nielsen, Thomas Dyhre

    2012-01-01

    Constructing a system model for verification is a demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently we seek to minimize the amount of data required. In this paper we propose an algorithm for learning deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and it is demonstrated that the proposed active learning procedure can significantly reduce the amount of data required.

  3. Recursive utility in a Markov environment with stochastic growth.

    Science.gov (United States)

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.

  4. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    Science.gov (United States)

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  5. Variable context Markov chains for HIV protease cleavage site prediction.

    Science.gov (United States)

    Oğul, Hasan

    2009-06-01

    Deciphering the knowledge of HIV protease specificity and developing computational tools for detecting its cleavage sites in protein polypeptide chains are very desirable for designing efficient and specific chemical inhibitors to prevent acquired immunodeficiency syndrome. In this study, we developed a generative model based on a generalization of variable-order Markov chains (VOMC) for peptide sequences and adapted the model for prediction of their cleavability by certain proteases. The new method, called variable context Markov chains (VCMC), attempts to identify context equivalence based on the evolutionary similarities between individual amino acids. It was applied to the HIV-1 protease cleavage site prediction problem and shown to outperform existing methods in terms of prediction accuracy on a common dataset. In general, the method is a promising tool for the prediction of cleavage sites of all proteases, and its use is encouraged for any kind of peptide classification problem as well.

  6. A Markov chain model for CANDU feeder pipe degradation

    International Nuclear Information System (INIS)

    Datla, S.; Dinnie, K.; Usmani, A.; Yuan, X.-X.

    2008-01-01

    There is a need for a risk-based approach to manage feeder pipe degradation and ensure safe operation by minimizing nuclear safety risk. The current lack of understanding of some fundamental degradation mechanisms results in uncertainty in predicting the rupture frequency. There are still concerns caused by uncertainties in the inspection techniques and engineering evaluations, which should be addressed in the current procedures. A probabilistic approach is therefore useful in quantifying the risk, and it also provides a tool for risk-based decision making. This paper discusses the application of a Markov chain model for feeder pipes in order to predict and manage the risks associated with existing and future aging-related feeder degradation mechanisms. The major challenge in the approach is the lack of service data for characterizing the transition probabilities of the Markov model. The paper also discusses various approaches to estimating plant-specific degradation rates. (author)

  7. Geometric allocation approaches in Markov chain Monte Carlo

    International Nuclear Information System (INIS)

    Todo, S; Suwa, H

    2013-01-01

    The Markov chain Monte Carlo method is a versatile tool in statistical physics to evaluate multi-dimensional integrals numerically. For the method to work effectively, we must consider the following key issues: the choice of ensemble, the selection of candidate states, the optimization of the transition kernel, and the algorithm for choosing a configuration according to the transition probabilities. We show that unconventional approaches based on the geometric allocation of probabilities or weights can improve the dynamics and scaling of the Monte Carlo simulation in several respects. In particular, the approach using an irreversible kernel can reduce or sometimes completely eliminate the rejection of trial moves in the Markov chain. We also discuss how the space-time interchange technique, together with Walker's method of aliases, can reduce the computational time, especially in the case where the number of candidates is large, such as in models with long-range interactions.
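Walker's method of aliases, cited above for drawing among many candidates in O(1) time per sample after O(n) setup, can be sketched as follows (toy weights, not the paper's models):

```python
import random

def build_alias(weights):
    """Preprocess normalized weights into (prob, alias) tables: O(n) setup."""
    n = len(weights)
    scaled = [w * n for w in weights]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [1.0] * n, list(range(n))
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # column s is filled up by l
        scaled[l] -= 1.0 - scaled[s]           # l donates the remainder
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias

def draw(prob, alias, rng):
    """O(1) sample: pick a column, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

rng = random.Random(42)
prob, alias = build_alias([0.5, 0.3, 0.2])
counts = [0, 0, 0]
for _ in range(100000):
    counts[draw(prob, alias, rng)] += 1
freqs = [c / 100000 for c in counts]
```

The empirical frequencies converge to the input weights; the per-draw cost stays constant no matter how many candidates there are, which is the point of using the method for long-range models.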

  8. Pattern statistics on Markov chains and sensitivity to parameter estimation

    Directory of Open Access Journals (Sweden)

    Nuel Grégory

    2006-10-01

    Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant common words in a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.

  9. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints, and we present constraint solving techniques for efficient inference. Constraints such as all_different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.

  10. Sentiment classification technology based on Markov logic networks

    Science.gov (United States)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which feature a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain, multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of the sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  11. HMMEditor: a visual editing tool for profile hidden Markov model

    Directory of Open Access Journals (Sweden)

    Cheng Jianlin

    2008-03-01

    Full Text Available Abstract Background Profile Hidden Markov Model (HMM is a powerful statistical model to represent a family of DNA, RNA, and protein sequences. Profile HMM has been widely used in bioinformatics research such as sequence alignment, gene structure prediction, motif identification, protein structure prediction, and biological database search. However, few comprehensive, visual editing tools for profile HMM are publicly available. Results We develop a visual editor for profile Hidden Markov Models (HMMEditor. HMMEditor can visualize the profile HMM architecture, transition probabilities, and emission probabilities. Moreover, it provides functions to edit and save HMM and parameters. Furthermore, HMMEditor allows users to align a sequence against the profile HMM and to visualize the corresponding Viterbi path. Conclusion HMMEditor provides a set of unique functions to visualize and edit a profile HMM. It is a useful tool for biological sequence analysis and modeling. Both HMMEditor software and web service are freely available.
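The Viterbi path that HMMEditor visualizes can be sketched on a toy two-state HMM; all states and probabilities below are invented for illustration and do not reflect HMMEditor's own profile format.

```python
import math

# Toy HMM (invented): two states with made-up transition/emission tables.
states = ("match", "insert")
start = {"match": 0.7, "insert": 0.3}
trans = {"match": {"match": 0.8, "insert": 0.2},
         "insert": {"match": 0.4, "insert": 0.6}}
emit = {"match": {"A": 0.6, "C": 0.4},
        "insert": {"A": 0.3, "C": 0.7}}

def viterbi(seq):
    """Most probable state path for an observed sequence (log-space DP)."""
    V = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for x in seq[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][x])
            ptr[s] = best
        V.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):   # trace back pointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

path = viterbi("AACC")
```

Working in log-space avoids numerical underflow on long sequences, which is standard practice for Viterbi decoding.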

  12. Current driven wiggler

    Science.gov (United States)

    Tournes, C.; Aucouturier, J.; Arnaud, B.; Brasile, J. P.; Convert, G.; Simon, M.

    1992-07-01

    A current-driven wiggler is the cornerstone of an innovative, compact, high-efficiency, transportable tunable free-electron laser (FEL), the feasibility of which is currently being evaluated by Thomson-CSF. The salient advantages are: compactness of the FEL, along with the possibility to accelerate the beam through several successive passes through the accelerating section (the number of passes being defined by the final wavelength of the radiation; i.e. visible, MWIR, LWIR); the wiggler can be turned off and be transparent to the beam until the last pass. Wiggler periodicities as small as 5 mm can be achieved, hence contributing to FEL compactness. To achieve overall efficiencies in the range of 10% at visible wavelengths, not only the wiggler periodicity must be variable, but the strength of the magnetic field of each period can be adjusted separately and fine-tuned versus time during the macropulse, so as to take into account the growing contribution of the wave energy in the cavity to the total ponderomotive force. The salient theoretical point of this design is the optimization of the parameters defining each period of the wiggler for each micropacket of the macropulse. The salient technology point is the mechanical and thermal design of the wiggler which allows the required high currents to achieve magnetic fields up to 2T.

  13. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  14. Hidden Markov models: the best models for forager movements?

    Science.gov (United States)

    Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan

    2013-01-01

    One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  15. Markov traces and II1 factors in conformal field theory

    International Nuclear Information System (INIS)

    Boer, J. de; Goeree, J.

    1991-01-01

    Using the duality equations of Moore and Seiberg we define for every primary field in a Rational Conformal Field Theory a proper Markov trace and hence a knot invariant. Next we define two nested algebras and show, using results of Ocneanu, how the position of the smaller algebra in the larger one reproduces part of the duality data. A new method for constructing Rational Conformal Field Theories is proposed. (orig.)

  16. A Partially Observed Markov Decision Process for Dynamic Pricing

    OpenAIRE

    Yossi Aviv; Amit Pazgal

    2005-01-01

    In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items during a finite sales season. The objective of the retailer is to dynamically price the product in a way that maximizes expected revenues. Our model brings together various types of uncertainties about the demand, some of which are resolvable through sales ob...

  17. The computation of stationary distributions of Markov chains through perturbations

    Directory of Open Access Journals (Sweden)

    Jeffery J. Hunter

    1991-01-01

    Full Text Available An algorithmic procedure for the determination of the stationary distribution of a finite, m-state, irreducible Markov chain, that does not require the use of methods for solving systems of linear equations, is presented. The technique is based upon a succession of m, rank one, perturbations of the trivial doubly stochastic matrix whose known steady state vector is updated at each stage to yield the required stationary probability vector.
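The rank-one perturbation procedure itself is not reproduced here, but the quantity it computes, the stationary vector of an irreducible chain, can be sketched by plain power iteration, which likewise avoids solving a linear system (example matrix is illustrative):

```python
# Stationary distribution of a finite irreducible Markov chain by power
# iteration: pi <- pi P until convergence. The 3-state chain is illustrative.
P = [[0.90, 0.075, 0.025],
     [0.15, 0.80,  0.05],
     [0.25, 0.25,  0.50]]

pi = [1.0 / 3] * 3              # any starting distribution works if ergodic
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
# pi now approximates the stationary probability vector
```

Convergence is geometric at the rate of the chain's second-largest eigenvalue modulus, which is why perturbation-based direct procedures like the one above can be attractive for ill-conditioned chains.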

  18. R Package clickstream: Analyzing Clickstream Data with Markov Chains

    Directory of Open Access Journals (Sweden)

    Michael Scholz

    2016-10-01

    Full Text Available Clickstream analysis is a useful tool for investigating consumer behavior, market research and software testing. I present the clickstream package which provides functionality for reading, clustering, analyzing and writing clickstreams in R. The package allows for a modeling of lists of clickstreams as zero-, first- and higher-order Markov chains. I illustrate the application of clickstream for a list of representative clickstreams from an online store.
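The kind of first-order Markov chain the package fits can be sketched with hypothetical sessions; the clickstream R API itself is not mirrored here, only the underlying transition-count estimate.

```python
from collections import defaultdict

# Hypothetical clickstream sessions (invented page names).
sessions = [
    ["home", "product", "cart", "checkout"],
    ["home", "product", "home", "product", "cart"],
    ["home", "search", "product", "checkout"],
]

# Count first-order transitions page -> next page across all sessions.
counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

# Row-normalize counts into transition probabilities.
P = {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
     for a, nxt in counts.items()}
```

Higher-order chains follow the same pattern with tuples of the last k pages as states, at the cost of exponentially more states.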

  19. Markov Chain Models for Stochastic Behavior in Resonance Overlap Regions

    Science.gov (United States)

    McCarthy, Morgan; Quillen, Alice

    2018-01-01

    We aim to predict lifetimes of particles in chaotic zones where resonances overlap. A continuous-time Markov chain model is constructed using mean motion resonance libration timescales to estimate transition times between resonances. The model is applied to diffusion in the co-rotation region of a planet. For particles begun at low eccentricity, the model is effective for early diffusion, but not at later times when particles experience close encounters with the planet.

  20. Parallel algorithms for simulating continuous time Markov chains

    Science.gov (United States)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
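The uniformization technique named above can be sketched for transient analysis of a small continuous-time Markov chain (generator and horizon are illustrative; the parallel synchronization scheme of the paper is not reproduced): the CTMC with generator Q is rewritten as a discrete chain P = I + Q/Λ subordinated to a Poisson process of rate Λ.

```python
import math

# Illustrative 2-state CTMC generator (rows sum to zero).
Q = [[-1.0,  1.0],
     [ 2.0, -2.0]]
Lam = 2.0   # uniformization rate, must satisfy Lam >= max_i |Q[i][i]|
n = len(Q)

# Uniformized DTMC: P = I + Q / Lam.
P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
     for i in range(n)]

def transient(p0, t, terms=60):
    """p(t) = sum_k Poisson(k; Lam*t) * p0 P^k, truncated after `terms` terms."""
    pk = p0[:]                       # p0 P^k, starting at k = 0
    out = [0.0] * n
    for k in range(terms):
        w = math.exp(-Lam * t) * (Lam * t) ** k / math.factorial(k)
        for i in range(n):
            out[i] += w * pk[i]
        pk = [sum(pk[i] * P[i][j] for i in range(n)) for j in range(n)]
    return out

p = transient([1.0, 0.0], 10.0)
```

For this birth-death chain the transient distribution approaches the stationary vector (2/3, 1/3) as t grows, so the truncated sum can be checked against that limit.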

  1. Hidden Markov models: the best models for forager movements?

    Directory of Open Access Journals (Sweden)

    Rocio Joo

    Full Text Available One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  2. Markov Switching Modeling with Time-Varying Transition Probability

    OpenAIRE

    Savitri, Anggita Puri; Warsito, Budi; Rahmawati, Rita

    2016-01-01

    Exchange rate or currency is an economic variable which reflects a country's state of economy. It fluctuates over time because of its ability to switch between conditions or regimes, driven by economic and political factors. The changes in the exchange rate are depreciation and appreciation. Therefore, it can be modeled using Markov Switching with Time-Varying Transition Probability, which observes the conditional changes and uses an information variable. From this model, time-varying transition probabilities...

  3. Geometry and Dynamics for Markov Chain Monte Carlo

    OpenAIRE

    Barp, Alessandro; Briol, Francois-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2017-01-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the in...

  4. The Perron-Frobenius Theorem for Markov Semigroups

    OpenAIRE

    Hijab, Omar

    2014-01-01

    Let $P^V_t$, $t\ge0$, be the Schrödinger semigroup associated to a potential $V$ and a Markov semigroup $P_t$, $t\ge0$, on $C(X)$. Existence is established of a left eigenvector and a right eigenvector corresponding to the spectral radius $e^{\lambda_0 t}$ of $P^V_t$, simultaneously for all $t\ge0$. This is derived with no compactness assumption on the semigroup operators.

  5. ON THE ISSUE OF "MEMORY" MARKOV MODEL OF DAMAGE ACCUMULATION

    Directory of Open Access Journals (Sweden)

    A. I. Lantuh-Lyaschenko

    2010-04-01

    Full Text Available This paper presents the application of a probabilistic approach to modeling the service life of highway bridge elements. The focus of this paper is on Markov stochastic deterioration models. These models can be used as an effective tool for technical state assessment and for prediction of the residual resource of a structure. For bridge maintenance purposes, these models can provide quantitative criteria for reliability level and risk, as well as prediction algorithms for the residual resource.

  6. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  7. MARKOV CHAIN MODELING OF PERFORMANCE DEGRADATION OF PHOTOVOLTAIC SYSTEM

    OpenAIRE

    E. Suresh Kumar; Asis Sarkar; Dhiren kumar Behera

    2012-01-01

    Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, in a sequence of chance experiments, all of the past outcomes could influence the predictions for the next experiment. In a Markov chain type of chance process, the outcome of a given experiment can affect the outcome of the next experiment. The system state changes with time, and the state X and time t are two random variables. Each of these variab...

  8. On the entropy of a hidden Markov process.

    Science.gov (United States)

    Jacquet, Philippe; Seroussi, Gadiel; Szpankowski, Wojciech

    2008-05-01

    We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long-standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to the top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order.

  9. Extracting Markov Models of Peptide Conformational Dynamics from Simulation Data.

    Science.gov (United States)

    Schultheis, Verena; Hirschberger, Thomas; Carstens, Heiko; Tavan, Paul

    2005-07-01

    A high-dimensional time series obtained by simulating a complex and stochastic dynamical system (like a peptide in solution) may code an underlying multiple-state Markov process. We present a computational approach to most plausibly identify and reconstruct this process from the simulated trajectory. Using a mixture of normal distributions we first construct a maximum likelihood estimate of the point density associated with this time series and thus obtain a density-oriented partition of the data space. This discretization allows us to estimate the transfer operator as a matrix of moderate dimension with sufficient statistics. A nonlinear dynamics involving that matrix and, alternatively, a deterministic coarse-graining procedure are employed to construct respective hierarchies of Markov models, from which the model most plausibly mapping the generating stochastic process is selected by consideration of certain observables. Within both procedures the data are classified in terms of prototypical points, the conformations, marking the various Markov states. As a typical example, the approach is applied to analyze the conformational dynamics of a tripeptide in solution. The corresponding high-dimensional time series has been obtained from an extended molecular dynamics simulation.

  10. Inferring animal densities from tracking data using Markov chains.

    Directory of Open Access Journals (Sweden)

    Hal Whitehead

    Full Text Available The distributions and relative densities of species are keys to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogenous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
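The Markov-chain density idea above can be sketched with toy tracks over three habitat cells (data invented for illustration): count transitions along the tracks, row-normalize into a transition matrix, and take its stationary distribution as the start-point-independent relative density.

```python
# Toy animal tracks over habitat cells A, B, C (invented data).
tracks = [
    ["A", "A", "B", "A", "A"],
    ["B", "A", "A", "C", "A"],
    ["A", "B", "A", "A", "A"],
]

cells = ["A", "B", "C"]
idx = {c: i for i, c in enumerate(cells)}

# Empirical transition counts between consecutive cells along each track.
counts = [[0] * 3 for _ in range(3)]
for tr in tracks:
    for a, b in zip(tr, tr[1:]):
        counts[idx[a]][idx[b]] += 1

# Row-normalize into a transition matrix.
P = [[c / sum(row) for c in row] for row in counts]

# Stationary distribution by iterating pi <- pi P: the relative densities.
pi = [1.0 / 3] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Because the stationary vector of the transition matrix does not depend on where tracks begin, the start-location bias the abstract describes drops out of the density estimate.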

  11. Robust Dynamics and Control of a Partially Observed Markov Chain

    International Nuclear Information System (INIS)

    Elliott, R. J.; Malcolm, W. P.; Moore, J. P.

    2007-01-01

    In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and, in some settings, providing numerical benefits, see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established

  12. Finding metastabilities in reversible Markov chains based on incomplete sampling

    Directory of Open Access Journals (Sweden)

    Fackeldey Konstantin

    2017-01-01

    Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains one needs to provide the corresponding transition matrix P. In many applications such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from one state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it is different from matrix completion approaches, which try to approximate the transition matrix by using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we estimate the stationary distribution from a partially given matrix P. Second, we estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.

  13. ''adding'' algorithm for the Markov chain formalism for radiation transfer

    International Nuclear Information System (INIS)

    Esposito, L.W.

    1979-01-01

    The Markov chain radiative transfer method of Esposito and House has been shown to be both efficient and accurate for calculation of the diffuse reflection from a homogeneous scattering planetary atmosphere. The use of a new algorithm similar to the ''adding'' formula of Hansen and Travis extends the application of this formalism to an arbitrarily deep atmosphere. The basic idea of this algorithm is to consider a preceding calculation as a single state of a new Markov chain. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that for a doubling calculation for a homogeneous atmosphere, but for a non-homogeneous atmosphere the new method is considerably faster than the standard ''adding'' routine. As with the standard ''adding'' method, the information on the internal radiation field is lost during the calculation. This method retains the advantage of the earlier Markov chain method that the time required is relatively insensitive to the number of illumination angles or observation angles for which the diffuse reflection is calculated. A technical write-up giving fuller details of the algorithm and a sample code are available from the author.
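
    The flavor of the ''adding''/doubling idea can be shown with a scalar two-stream toy model (a drastic simplification of the Markov chain formalism; all numbers are invented): two slabs with known reflection and transmission are combined, with a geometric series 1/(1 - r1 r2) accounting for repeated inter-slab reflections, and a thin layer is doubled repeatedly to reach arbitrary optical depth without enlarging the system.

```python
# Toy scalar version of the "adding" idea: combine two slabs with known
# reflection r and transmission t; multiple reflections between the slabs
# sum to the geometric series 1 / (1 - r1*r2). Numbers are illustrative.

def add_layers(r1, t1, r2, t2):
    """Reflection/transmission of slab 1 placed on top of slab 2."""
    denom = 1.0 - r1 * r2             # geometric series of inter-slab bounces
    r12 = r1 + t1 * r2 * t1 / denom
    t12 = t1 * t2 / denom
    return r12, t12

# Start from one optically thin, weakly absorbing layer and double it
# repeatedly: each doubling multiplies the optical depth by two while the
# "linear system" stays the same size.
r, t = 0.01, 0.98                     # thin layer: 1% reflected, 1% absorbed
for _ in range(20):                   # optical depth grows by a factor 2**20
    r, t = add_layers(r, t, r, t)

print(r, t)   # r converges to the semi-infinite reflectivity; t -> 0
```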

  14. Optimization of Markov chains for a SUSY fitter: Fittino

    Energy Technology Data Exchange (ETDEWEB)

    Prudent, Xavier [IKTP, Technische Universitaet, Dresden (Germany); Bechtle, Philip [DESY, Hamburg (Germany); Desch, Klaus; Wienemann, Peter [Universitaet Bonn (Germany)

    2010-07-01

    A Markov chain is a ''random walk'' algorithm which allows an efficient scan of a given profile and the search for the absolute minimum, even when this profile suffers from the presence of many secondary minima. This property makes Markov chains particularly suited to the study of Supersymmetry (SUSY) models, where minima have to be found in an up-to-18-dimensional space for the general MSSM. Hence the SUSY fitter ''Fittino'' uses a Metropolis-Hastings Markov chain in a frequentist interpretation to study the impact of current low-energy measurements, as well as expected measurements from the LHC and ILC, on the SUSY parameter space. The expected properties of an optimal Markov chain are the independence of the final results from the starting point and a fast convergence. Both can be achieved by optimizing the width of the proposal distribution, that is, the ''average step length'' between two links in the chain. We developed an algorithm for the optimization of the proposal width, which iteratively modifies the width so that the rejection rate stays around fifty percent. This optimization leads to a chain that is independent of the starting point as well as to faster convergence.
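
    The width-tuning loop can be sketched as follows. The one-dimensional ''profile'' with secondary minima and all tuning constants are invented for illustration; in a production fitter the tuning would be restricted to a burn-in phase so that the final chain remains a valid Markov chain.

```python
import numpy as np

rng = np.random.default_rng(2)

def chi2_profile(x):
    """Invented 1-D profile: a global minimum with secondary minima around it."""
    return 0.5 * x**2 + 2.0 * np.sin(3.0 * x) ** 2

# Metropolis-Hastings scan with iterative tuning of the Gaussian proposal
# width so that the rejection rate stays near fifty percent.
x, width, accepted = 0.0, 1.0, 0
chain = []
for i in range(1, 20001):
    proposal = x + width * rng.normal()
    delta = chi2_profile(x) - chi2_profile(proposal)
    if delta >= 0 or rng.random() < np.exp(delta):
        x, accepted = proposal, accepted + 1
    chain.append(x)
    if i % 500 == 0:                       # retune every 500 links
        width *= 1.1 if accepted / 500 > 0.5 else 0.9
        accepted = 0

print(width, np.mean(chain[10000:]))
```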

  15. Inferring animal densities from tracking data using Markov chains.

    Science.gov (United States)

    Whitehead, Hal; Jonsen, Ian D

    2013-01-01

    The distributions and relative densities of species are key to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population with respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
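
    A toy version of the idea, with an invented one-dimensional habitat of five cells and tracks that all start at a biased tagging site: raw occupancy over-represents cells near the start, while the stationary distribution of the fitted transition matrix recovers the relative densities.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented movement kernel on 5 habitat cells (a birth-death chain whose
# stationary density peaks in cell 2).
P = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.5, 0.3, 0.2],
              [0.0, 0.0, 0.0, 0.5, 0.5]])

# All tracks start in cell 0, mimicking a non-random tagging location.
counts = np.zeros((5, 5))
occupancy = np.zeros(5)
for _ in range(200):                 # 200 tracks, 30 steps each
    x = 0
    for _ in range(30):
        nxt = rng.choice(5, p=P[x])
        counts[x, nxt] += 1
        occupancy[nxt] += 1
        x = nxt

# Markov-chain estimate: stationary distribution of the fitted matrix.
T = counts / counts.sum(axis=1, keepdims=True)
w, v = np.linalg.eig(T.T)
density = np.real(v[:, np.argmax(np.real(w))])
density /= density.sum()

print(occupancy / occupancy.sum())   # over-represents cells near the start
print(density)                       # unbiased relative density estimate
```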

  16. Quantitative risk stratification in Markov chains with limiting conditional distributions.

    Science.gov (United States)

    Chan, David C; Pollett, Philip K; Weinstein, Milton C

    2009-01-01

    Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
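
    For a concrete (invented) example: the limiting conditional distribution of a sub-stochastic transition matrix over the surviving states is its normalized left eigenvector for the largest eigenvalue.

```python
import numpy as np

# Invented progressive-disease chain (mild, moderate, severe) with an
# implicit absorbing death state: rows sum to less than one (the remainder
# is one-step mortality). The earliest state has the lowest combined risk
# of progression or death, so a positive limiting conditional distribution
# exists, matching the existence condition in the abstract.
S = np.array([[0.90, 0.05, 0.01],
              [0.00, 0.85, 0.10],
              [0.00, 0.00, 0.80]])

# Limiting conditional (quasi-stationary) distribution: normalized left
# eigenvector of S for its largest eigenvalue.
w, v = np.linalg.eig(S.T)
qsd = np.real(v[:, np.argmax(np.real(w))])
qsd /= qsd.sum()

print(qsd)   # equilibrium mix of surviving patients across disease states
```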

  17. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric, which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also provides a formal way of incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden.
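
    The Bayesian-updating step at the heart of the methodology is the discrete Bayes rule. A minimal sketch with invented lithology classes and probabilities (not BayMar itself):

```python
import numpy as np

# Invented prior over three lithology classes at one cell, and an invented
# likelihood for a new, imperfect borehole classification of that cell.
prior = np.array([0.5, 0.3, 0.2])         # e.g. granite, diorite, greenstone
likelihood = np.array([0.7, 0.2, 0.1])    # P(new observation | class)

# Posterior via Bayes' rule; it serves as the prior for the next update
# each time new information becomes available.
posterior = prior * likelihood
posterior /= posterior.sum()

print(posterior)
```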

  18. Thermodynamically accurate modeling of the catalytic cycle of photosynthetic oxygen evolution: a mathematical solution to asymmetric Markov chains.

    Science.gov (United States)

    Vinyard, David J; Zachary, Chase E; Ananyev, Gennady; Dismukes, G Charles

    2013-07-01

    Forty-three years ago, Kok and coworkers introduced a phenomenological model describing period-four oscillations in O2 flash yields during photosynthetic water oxidation (WOC), which were first reported by Joliot and coworkers. The original two-parameter Kok model was subsequently extended in its level of complexity to better simulate diverse data sets, including intact cells and isolated PSII-WOCs, but at the expense of introducing physically unrealistic assumptions necessary to enable numerical solutions. To date, analytical solutions have been found only for symmetric Kok models (inefficiencies are equally probable for all intermediates, called "S-states"). However, it is widely accepted that S-state reaction steps are not identical and some are not reversible (by thermodynamic constraints), thereby causing asymmetric cycles. We have developed a mathematically more rigorous foundation that eliminates unphysical assumptions known to be in conflict with experiments and adopts a new experimental constraint on solutions. This new algorithm, termed STEAMM for S-state Transition Eigenvalues of Asymmetric Markov Models, enables solutions to models having fewer adjustable parameters and uses automated fitting to experimental data sets, yielding higher accuracy and precision than the classic Kok or extended Kok models. This new tool provides a general mathematical framework for analyzing damped oscillations arising from any cycle period using any appropriate Markov model, regardless of symmetry. We illustrate applications of STEAMM that better describe the intrinsic inefficiencies for photon-to-charge conversion within PSII-WOCs that are responsible for damped period-four and period-two oscillations of flash O2 yields across diverse species, while using simpler Markov models free from unrealistic assumptions.
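
    For orientation, the damped period-four oscillation itself is easy to reproduce with the classic symmetric two-parameter Kok model (misses only). This is the simple model the paper generalizes, not STEAMM; the miss probability and initial S-state mix are illustrative.

```python
import numpy as np

# Classic symmetric Kok model with a single miss probability per flash:
# four S-states advance cyclically S0 -> S1 -> S2 -> S3 -> S0, and O2 is
# released on the S3 -> S0 step.
alpha = 0.1                                  # miss probability
advance = np.roll(np.eye(4), 1, axis=0)      # cyclic advance matrix
M = (1 - alpha) * advance + alpha * np.eye(4)

s = np.array([0.25, 0.75, 0.0, 0.0])         # dark-adapted: mostly S1
yields = []
for _ in range(16):
    yields.append((1 - alpha) * s[3])        # O2 yield of this flash
    s = M @ s                                # S-state populations after it

print(np.round(yields, 3))   # damped period-four pattern, maximum on flash 3
```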

  19. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    Science.gov (United States)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
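
    Generating a realization of a binary Markov process from its two defining parameters is straightforward; the sketch below illustrates the process class itself, not the paper's minimal hidden-state generators, and the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# A binary Markov process is specified by two parameters:
#   p = P(X_t = 1 | X_{t-1} = 0),  q = P(X_t = 0 | X_{t-1} = 1).
p, q, n = 0.3, 0.1, 100_000

x = 0
seq = np.empty(n, dtype=np.int8)
for t in range(n):
    if x == 0:
        x = 1 if rng.random() < p else 0
    else:
        x = 0 if rng.random() < q else 1
    seq[t] = x

print(seq.mean())   # approaches the stationary frequency p / (p + q) = 0.75
```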

  20. Water exchange traded funds: A study on idiosyncratic risk using Markov switching analysis

    Directory of Open Access Journals (Sweden)

    Gurudeo Anand Tularam

    2016-12-01

    Full Text Available We investigate the relationship between idiosyncratic risk and return among four water exchange traded funds (PowerShares Water Resources Portfolio, PowerShares Global Water, First Trust ISE Water Index Fund, and Guggenheim S&P Global Water Index ETF) using the Markov switching model for the period 2007–2015. The transition probabilities generated in this paper show that the probabilities of switching between Regimes 1 and 3 are high and low, respectively. Moreover, we find that the idiosyncratic risk for most of the exchange traded funds moves from low volatility (Regime 2) to very low volatility (Regimes 1 and 3). Our study also identifies that the beta coefficients are positive and all values are less than 1. Thus, it seems that water investment has a lower systematic risk and a positive effect on the water exchange traded funds' returns during different regimes.
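
    A minimal two-regime sketch of the Markov-switching idea (the paper fits a three-regime model to actual ETF returns; everything below is simulated with invented parameters): volatility is governed by a hidden Markov chain over persistent regimes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative two-regime Markov-switching model for daily returns:
# regime 0 = low volatility, regime 1 = high volatility.
T_mat = np.array([[0.98, 0.02],     # persistent regimes
                  [0.05, 0.95]])
sigma = np.array([0.005, 0.02])     # regime-dependent volatility

n, state = 2000, 0
returns = np.empty(n)
states = np.empty(n, dtype=int)
for t in range(n):
    state = rng.choice(2, p=T_mat[state])   # hidden regime evolves
    states[t] = state
    returns[t] = sigma[state] * rng.normal()

# Sample volatility is clearly higher inside the high-volatility regime.
print(returns[states == 0].std(), returns[states == 1].std())
```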