A Partially Observed Markov Decision Process for Dynamic Pricing
Yossi Aviv; Amit Pazgal
2005-01-01
In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items during a finite sales season. The objective of the retailer is to dynamically price the product in a way that maximizes expected revenues. Our model brings together various types of uncertainties about the demand, some of which are resolvable through sales ob...
Robust Dynamics and Control of a Partially Observed Markov Chain
International Nuclear Information System (INIS)
Elliott, R. J.; Malcolm, W. P.; Moore, J. P.
2007-01-01
In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits, see, for example, Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models: Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward-in-time dynamics for a simplified adjoint process. A stochastic minimum principle is established
Partially Hidden Markov Models
DEFF Research Database (Denmark)
Forchhammer, Søren Otto; Rissanen, Jorma
1996-01-01
Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...
A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes
Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong
2013-01-01
We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithm. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...
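The exponential growth that pruning methods such as incremental pruning fight can be seen in even a tiny exact value-iteration backup. Below is a minimal, self-contained Python sketch of the unpruned alpha-vector backup on an illustrative two-state "tiger"-style POMDP; all numeric values are assumptions chosen for the example, not taken from the paper.

```python
import itertools
import numpy as np

# Toy tiger-style POMDP: 2 states, 3 actions (listen, open-left,
# open-right), 2 observations. All numbers are illustrative only.
gamma = 0.95
S, A, O = 2, 3, 2
T = np.zeros((A, S, S))
T[0] = np.eye(2)                    # listen: state unchanged
T[1] = T[2] = np.full((2, 2), 0.5)  # opening a door resets the problem
Z = np.zeros((A, S, O))
Z[0] = [[0.85, 0.15], [0.15, 0.85]]  # listening is 85% accurate
Z[1] = Z[2] = 0.5                    # other actions are uninformative
R = np.array([[-1., -1.], [-100., 10.], [10., -100.]])  # R[a, s]

def backup(alphas):
    """One exact value-iteration step: enumerate all alpha-vectors."""
    new = []
    for a in range(A):
        # gao[o][i][s] = gamma * sum_s' T[a,s,s'] * Z[a,s',o] * alphas[i][s']
        gao = [[gamma * T[a] @ (Z[a][:, o] * al) for al in alphas]
               for o in range(O)]
        # one new vector per (action, observation -> old-vector) mapping
        for choice in itertools.product(range(len(alphas)), repeat=O):
            new.append(R[a] + sum(gao[o][i] for o, i in enumerate(choice)))
    return new

alphas = [np.zeros(S)]
for _ in range(3):       # a few horizon steps, no pruning
    alphas = backup(alphas)

b = np.array([0.5, 0.5])  # uniform belief
print(len(alphas), max(float(al @ b) for al in alphas))
```

After only three unpruned backups the set already holds 3 * 27^2 = 2187 vectors, which is exactly the blow-up that pruning-based algorithms are designed to tame.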
Rate estimation in partially observed Markov jump processes with measurement errors
Amrein, Michael; Kuensch, Hans R.
2010-01-01
We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...
Modeling treatment of ischemic heart disease with partially observable Markov decision processes.
Hauskrecht, M; Fraser, H
1998-01-01
Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.
Planning treatment of ischemic heart disease with partially observable Markov decision processes.
Hauskrecht, M; Fraser, H
2000-03-01
Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time. This is mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease (IHD), and demonstrate the modeling advantages of the framework over standard decision formalisms.
Directory of Open Access Journals (Sweden)
Rajesh P N Rao
2010-11-01
Full Text Available A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single optimal estimate of state but on the posterior distribution over states (the belief state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize...
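The belief state the model's neurons are posited to represent is simply the Bayesian posterior over hidden states. A minimal sketch of the standard POMDP belief update; the transition model and sensor accuracy below are hypothetical illustration values:

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Bayesian belief update: b'(s') is proportional to
    Z[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    pred = b @ T[a]               # predicted next-state distribution
    post = Z[a][:, o] * pred      # weight by observation likelihood
    return post / post.sum()      # renormalize

# Two hidden states, one 'observe' action with a noisy sensor.
T = np.array([[[0.9, 0.1], [0.1, 0.9]]])   # slow state switching
Z = np.array([[[0.8, 0.2], [0.2, 0.8]]])   # 80%-accurate observations
b = np.array([0.5, 0.5])
for o in (0, 0, 0):                        # three consistent readings
    b = belief_update(b, 0, o, T, Z)
print(b)  # belief concentrates on state 0
```

Repeated consistent observations sharpen the posterior, which is the information-gathering behavior the abstract describes before an overt action is triggered.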
Adaptive Partially Hidden Markov Models
DEFF Research Database (Denmark)
Forchhammer, Søren Otto; Rasmussen, Tage
1996-01-01
Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding....
Coding with partially hidden Markov models
DEFF Research Database (Denmark)
Forchhammer, Søren; Rissanen, J.
1995-01-01
Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...
Directory of Open Access Journals (Sweden)
Jian Jiao
2017-09-01
Full Text Available The Ka-band and higher Q/V-band channels can provide an appealing capacity for future deep-space communications and Space Information Networks (SINs), which are viewed as a primary solution to satisfy the increasing demand for high-data-rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distances and long propagation delays in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation in Ka-band channels is modeled as a two-state Gilbert-Elliott channel, to capture a channel capacity that ranges randomly between a good and a bad state. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDPs) is proposed, and the key thresholds for selecting the optimal transmission method in SIN communications are derived. Simulation results show that our proposed scheme can effectively improve the throughput.
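For readers unfamiliar with the Gilbert-Elliott abstraction used here, a two-state channel of this kind is straightforward to simulate. The sketch below uses hypothetical transition and loss probabilities, not parameters from the paper:

```python
import random

def simulate_gilbert_elliott(steps, p_gb, p_bg, loss_good, loss_bad, seed=0):
    """Simulate a two-state Gilbert-Elliott channel.
    p_gb: P(good -> bad); p_bg: P(bad -> good);
    loss_good / loss_bad: packet-loss probability within each state.
    Returns the empirical packet-loss rate."""
    rng = random.Random(seed)
    state, losses = "good", 0
    for _ in range(steps):
        loss_p = loss_good if state == "good" else loss_bad
        losses += rng.random() < loss_p
        # Markov state transition
        if state == "good":
            state = "bad" if rng.random() < p_gb else "good"
        else:
            state = "good" if rng.random() < p_bg else "bad"
    return losses / steps

# Mostly-good channel with rare, lossy bad (rain-fade) periods.
rate = simulate_gilbert_elliott(100_000, p_gb=0.01, p_bg=0.1,
                                loss_good=0.001, loss_bad=0.3)
print(rate)
```

With these illustrative rates the chain spends about 9% of the time in the bad state, so the long-run loss rate lands near 0.09 * 0.3, i.e. around 3%.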
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
Automated generation of partial Markov chain from high level descriptions
International Nuclear Information System (INIS)
Brameret, P.-A.; Rauzy, A.; Roussel, J.-M.
2015-01-01
We propose an algorithm to generate partial Markov chains from high level implicit descriptions, namely AltaRica models. This algorithm relies on two components. First, a variation on Dijkstra's algorithm to compute shortest paths in a graph. Second, the definition of a notion of distance to select which states must be kept and which can be safely discarded. The proposed method solves two problems at once. First, it avoids a manual construction of Markov chains, which is both tedious and error prone. Second, at the price of acceptable approximations, it makes it possible to push back dramatically the exponential blow-up of the size of the resulting chains. We report experimental results that show the efficiency of the proposed approach. - Highlights: • We generate Markov chains from a higher level safety modeling language (AltaRica). • We use a variation on Dijkstra's algorithm to generate partial Markov chains. • Hence we solve two problems: the first problem is the tedious manual construction of Markov chains. • The second problem is the blow-up of the size of the chains, at the cost of decent approximations. • The experimental results highlight the efficiency of the method
Monitoring as a partially observable decision problem
Paul L. Fackler; Robert G. Haight
2014-01-01
Monitoring is an important and costly activity in resource management problems such as containing invasive species, protecting endangered species, preventing soil erosion, and regulating contracts for environmental services. Recent studies have viewed optimal monitoring as a Partially Observable Markov Decision Process (POMDP), which provides a framework for...
Filtering of a Markov Jump Process with Counting Observations
International Nuclear Information System (INIS)
Ceci, C.; Gerardi, A.
2000-01-01
This paper concerns the filtering of an R^d-valued Markov pure jump process when only the total number of jumps is observed. Strong and weak uniqueness for the solutions of the filtering equations are discussed
Quantum tomography, phase-space observables and generalized Markov kernels
International Nuclear Information System (INIS)
Pellonpää, Juha-Pekka
2009-01-01
We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schrödinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also, we consider an example of a kernel state for which the generalized Markov kernel cannot be constructed.
Timed Testing under Partial Observability
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao
2009-01-01
observability of SUT using a set of predicates over the TGA state space, and specify the test purposes in Computation Tree Logic (CTL) formulas. A recently developed partially observable timed game solver is used to generate winning strategies, which are used as test cases. We propose a conformance testing...
Application of the Markov chain approximation to the sunspot observations
International Nuclear Information System (INIS)
Onal, M.
1988-01-01
The positions of the 13,588 sunspot groups observed during the cycle of 1950-1960 at the Istanbul University Observatory have been corrected for the effect of differential rotation. The probability of evolution of a sunspot group into another one in the same region has been determined. By using the Markov chain approximation, the types of these groups and their transition probabilities during the following activity cycle (1950-1960), and the concentration of active regions during 1950-1960, have been estimated. The transition probabilities from the observations of the activity cycle 1960-1970 have been compared with the predicted transition probabilities and a good correlation has been noted. 5 refs.; 2 tabs
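The Markov chain approximation in this study boils down to estimating a transition matrix from counted transitions between observed group types. A minimal sketch with a hypothetical label sequence (the A/B/C types below are illustrative, not the observatory's actual classification or data):

```python
from collections import Counter

def estimate_transition_matrix(sequence, states):
    """Maximum-likelihood transition probabilities from an observed
    state sequence: P(j | i) = count(i -> j) / count(i -> anything)."""
    pairs = Counter(zip(sequence, sequence[1:]))   # consecutive pairs
    totals = Counter(sequence[:-1])                # out-transitions per state
    return {i: {j: pairs[i, j] / totals[i] if totals[i] else 0.0
                for j in states}
            for i in states}

# Hypothetical sequence of sunspot-group types.
seq = list("AABBBCABBCCAABBBBCA")
P = estimate_transition_matrix(seq, "ABC")
print(P["B"])  # -> {'A': 0.0, 'B': 0.6667, 'C': 0.3333} (approximately)
```

Applying the estimated matrix to a later cycle's observed transition counts is then a direct comparison of predicted versus empirical probabilities, as done in the paper.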
Bayesian inference for Markov jump processes with informative observations.
Golightly, Andrew; Wilkinson, Darren J
2015-04-01
In this paper we consider the problem of parameter inference for Markov jump process (MJP) representations of stochastic kinetic models. Since transition probabilities are intractable for most processes of interest yet forward simulation is straightforward, Bayesian inference typically proceeds through computationally intensive methods such as (particle) MCMC. Such methods ostensibly require the ability to simulate trajectories from the conditioned jump process. When observations are highly informative, use of the forward simulator is likely to be inefficient and may even preclude an exact (simulation based) analysis. We therefore propose three methods for improving the efficiency of simulating conditioned jump processes. A conditioned hazard is derived based on an approximation to the jump process, and used to generate end-point conditioned trajectories for use inside an importance sampling algorithm. We also adapt a recently proposed sequential Monte Carlo scheme to our problem. Essentially, trajectories are reweighted at a set of intermediate time points, with more weight assigned to trajectories that are consistent with the next observation. We consider two implementations of this approach, based on two continuous approximations of the MJP. We compare these constructs for a simple tractable jump process before using them to perform inference for a Lotka-Volterra system. The best performing construct is used to infer the parameters governing a simple model of motility regulation in Bacillus subtilis.
Computing continuous-time Markov chains as transformers of unbounded observables
DEFF Research Database (Denmark)
Danos, Vincent; Heindel, Tobias; Garnier, Ilias
2017-01-01
The paper studies continuous-time Markov chains (CTMCs) as transformers of real-valued functions on their state space, considered as generalised predicates and called observables. Markov chains are assumed to take values in a countable state space S; observables f: S → ℝ may be unbounded...
Synchronizing Strategies under Partial Observability
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Laursen, Simon; Srba, Jiri
2014-01-01
Embedded devices usually share only partial information about their current configurations as the communication bandwidth can be restricted. Despite this, we may wish to bring a failed device into a given predetermined configuration. This problem, also known as resetting or synchronizing words, has been intensively studied for systems that do not provide any information about their configurations. In order to capture more general scenarios, we extend the existing theory of synchronizing words to synchronizing strategies, and study the synchronization, short-synchronization and subset...
Simulation based sequential Monte Carlo methods for discretely observed Markov processes
Neal, Peter
2014-01-01
Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
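The Gillespie algorithm referred to above simulates a Markov jump process exactly by drawing exponential waiting times and choosing which reaction fires. A minimal sketch for a linear birth-death process (the rates and initial population are illustrative assumptions, not from the paper):

```python
import random

def gillespie_birth_death(x0, birth, death, t_max, seed=1):
    """Exact (Gillespie) simulation of a birth-death process:
    X -> X+1 at rate birth*X, X -> X-1 at rate death*X."""
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_max and x > 0:
        total = (birth + death) * x
        t += rng.expovariate(total)           # time to the next jump
        if rng.random() < birth * x / total:  # pick which reaction fires
            x += 1
        else:
            x -= 1
        path.append((t, x))
    return path

path = gillespie_birth_death(x0=50, birth=0.9, death=1.0, t_max=5.0)
print(path[-1])
```

An SMC scheme like the one in the paper repeatedly runs forward simulations of this kind and weights the resulting trajectories by their consistency with the discrete observations.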
Spreading paths in partially observed social networks
Onnela, Jukka-Pekka; Christakis, Nicholas A.
2012-03-01
Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
Projected metastable Markov processes and their estimation with observable operator models
International Nuclear Information System (INIS)
Wu, Hao; Prinz, Jan-Hendrik; Noé, Frank
2015-01-01
The determination of kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem — both in simulation, where the optimal reaction coordinate(s) are generally unknown and are difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on the OOM learning
Observability of discretized partial differential equations
Cohn, Stephen E.; Dee, Dick P.
1988-01-01
It is shown that complete observability of the discrete model used to assimilate data from a linear partial differential equation (PDE) system is necessary and sufficient for asymptotic stability of the data assimilation process. The observability theory for discrete systems is reviewed and applied to obtain simple observability tests for discretized constant-coefficient PDEs. Examples are used to show how numerical dispersion can result in discrete dynamics with multiple eigenvalues, thereby detracting from observability.
Evidence Estimation for Bayesian Partially Observed MRFs
Chen, Y.; Welling, M.
2013-01-01
Bayesian estimation in Markov random fields is very hard due to the intractability of the partition function. The introduction of hidden units makes the situation even worse due to the presence of potentially very many modes in the posterior distribution. For the first time we propose a
Optimal State Estimation for Discrete-Time Markov Jump Systems with Missing Observations
Directory of Open Access Journals (Sweden)
Qing Sun
2014-01-01
Full Text Available This paper is concerned with the optimal linear estimation for a class of discrete-time Markov jump systems with missing observations. An observer-based approach to fault detection and isolation (FDI) is investigated as a mechanism for detecting fault cases. For systems with known information, a conditional prediction of observations is applied and fault observations are replaced and isolated; then, an FDI linear minimum mean square error (LMMSE) estimator can be developed by comprehensively utilizing the correct information offered by the system. A recursive filtering equation based on geometric arguments is obtained. Meanwhile, stability of the state estimator is guaranteed under appropriate assumptions.
Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET
International Nuclear Information System (INIS)
Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon
2012-01-01
In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and outperforms the SG and RBV corrections when the segmented MRI is inconsistent (e.g. mis-segmentation, lesions, etc.) with the PET image. (paper)
On some Filtration Procedure for Jump Markov Process Observed in White Gaussian Noise
Khas'minskii, Rafail Z.; Lazareva, Betty V.
1992-01-01
The importance of the optimal filtration problem for a Markov chain with two states observed in Gaussian white noise (GWN) is well known for many concrete technical problems. The equation for the posterior probability π(t) of one of the states was obtained many years ago. The aim of this paper is to study a simple filtration method. It is shown that this simplified filtration is asymptotically efficient in some sense as the diffusion constant of the GWN goes to 0. Some advantages of this proc...
Partial observation control in an anticipating environment
International Nuclear Information System (INIS)
Oeksendal, B; Sulem, A
2004-01-01
A study is made of a controlled stochastic system whose state X(t) at time t is described by a stochastic differential equation driven by Lévy processes with filtration {F_t}, t ∈ [0,T]. The system is assumed to be anticipating, in the sense that the coefficients are assumed to be adapted to a filtration {G_t}, t ≥ 0, with F_t ⊆ G_t for all t ∈ [0,T]. The corresponding anticipating stochastic differential equation is interpreted in the sense of forward integrals, which naturally generalize semimartingale integrals. The admissible controls are assumed to be adapted to a filtration {E_t}, t ∈ [0,T], such that E_t ⊆ F_t for all t ∈ [0,T]. The general problem is to maximize a given performance functional of this system over all admissible controls. This is a partial observation stochastic control problem in an anticipating environment. Examples of applications include stochastic volatility models in finance, insider-influenced financial markets, and stochastic control of systems with delayed noise effects. Some particular cases in finance, involving optimal portfolios with logarithmic utility, are solved explicitly
Chen, Baojiang; Zhou, Xiao-Hua
2011-05-01
Identifying risk factors for transition rates among normal cognition, mild cognitive impairment, dementia and death in an Alzheimer's disease study is very important. It is known that transition rates among these states are strongly time dependent. While Markov process models are often used to describe these disease progressions, the literature mainly focuses on time homogeneous processes, and limited tools are available for dealing with non-homogeneity. Further, patients may choose when they want to visit the clinics, which creates informative observations. In this paper, we develop methods to deal with non-homogeneous Markov processes through time scale transformation when observation times are pre-planned with some observations missing. Maximum likelihood estimation via the EM algorithm is derived for parameter estimation. Simulation studies demonstrate that the proposed method works well under a variety of situations. An application to the Alzheimer's disease study identifies that there is a significant increase in transition rates as a function of time. Furthermore, our models suggest that the non-ignorable missingness mechanism is plausible. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gravity Effects Observed In Partially Premixed Flames
Puri, Ishwar K.; Aggarwal, Suresh K.; Lock, Andrew J.; Ganguly, Ranjan; Hegde, Uday
2003-01-01
Partially premixed flames (PPFs) contain a rich premixed fuel air mixture in a pocket or stream, and, for complete combustion to occur, they require the transport of oxidizer from an appropriately oxidizer-rich (or fuel-lean) mixture that is present in another pocket or stream. Partial oxidation reactions occur in fuel-rich portions of the mixture and any remaining unburned fuel and/or intermediate species are consumed in the oxidizer-rich portions. Partial premixing, therefore, represents that condition when the equivalence ratio (phi) in one portion of the flowfield is greater than unity, and in another section its value is less than unity. In general, for combustion to occur efficiently, the global equivalence ratio is in the range fuel-lean to stoichiometric. These flames can be established by design by placing a fuel-rich mixture in contact with a fuel-lean mixture, but they also occur otherwise in many practical systems, which include nonpremixed lifted flames, turbulent nonpremixed combustion, spray flames, and unwanted fires. Other practical applications of PPFs are reported elsewhere. Although extensive experimental studies have been conducted on premixed and nonpremixed flames under microgravity, there is an absence of previous experimental work on burner-stabilized PPFs in this regard. Previous numerical studies by our group employing a detailed numerical model showed gravity effects to be significant on the PPF structure. We report on the results of microgravity experiments conducted on two-dimensional flames (established on a Wolfhard-Parker slot burner) and axisymmetric flames (on a coannular burner) that were investigated in a self-contained multipurpose rig. Thermocouple and radiometer data were also used to characterize the thermal transport in the flame.
Reasoning about Strategies under Partial Observability and Fairness Constraints
Directory of Open Access Journals (Sweden)
Simon Busard
2013-03-01
A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model checking algorithm for it.
QRS complex detection based on continuous density hidden Markov models using univariate observations
Sotelo, S.; Arenas, W.; Altuve, M.
2018-04-01
In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous-density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and the F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with two-state HMMs trained on QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We conclude that these univariate sequences provide enough information to characterize the QRS complex dynamics with an HMM. Future work is directed at the use of multivariate observations to increase the detection performance.
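The log-likelihood comparison described above can be sketched as follows. This is a minimal illustration with invented parameters (a two-state Gaussian-emission HMM and a hand-picked threshold), not the paper's trained detector:

```python
import math

def log_gauss(x, mu, sigma):
    # log density of N(mu, sigma^2) at x
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def hmm_loglik(obs, pi, A, means, stds):
    """Log-likelihood of a univariate window under a Gaussian-emission HMM,
    via the forward algorithm in log space (log-sum-exp for stability)."""
    n = len(pi)
    alpha = [math.log(pi[i]) + log_gauss(obs[0], means[i], stds[i]) for i in range(n)]
    for x in obs[1:]:
        new_alpha = []
        for j in range(n):
            terms = [alpha[i] + math.log(A[i][j]) for i in range(n)]
            m = max(terms)
            new_alpha.append(m + math.log(sum(math.exp(v - m) for v in terms))
                             + log_gauss(x, means[j], stds[j]))
        alpha = new_alpha
    m = max(alpha)
    return m + math.log(sum(math.exp(v - m) for v in alpha))

# Illustrative two-state model; a sliding window is flagged QRS-like when its
# log-likelihood exceeds a threshold that would be tuned offline by ROC analysis.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
means, stds = [0.0, 1.0], [0.3, 0.3]
window = [0.1, 0.9, 1.1, 0.8, 0.0]
score = hmm_loglik(window, pi, A, means, stds)
```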
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
Observable consequences of partially degenerate leptogenesis
Ellis, Jonathan Richard; Yanagida, T; Ellis, John; Raidal, Martti
2002-01-01
In the context of the seesaw mechanism, it is natural that the large solar and atmospheric neutrino mixing angles originate separately from large 2 by 2 mixings in the neutrino and charged-lepton sectors, respectively, and large mixing in the neutrino couplings is in turn more plausible if two of the heavy singlet neutrinos are nearly degenerate. We study the phenomenology of this scenario, calculating leptogenesis by solving numerically the set of coupled Boltzmann equations for out-of-equilibrium heavy singlet neutrino decays in the minimal supersymmetric seesaw model. The near-degenerate neutrinos may weigh < 10^8 GeV, avoiding the cosmological gravitino problem. This scenario predicts that Br(mu to e gamma) should be strongly suppressed, because of the small singlet neutrino masses, whilst Br(tau to mu gamma) may be large enough to be observable in B-factory or LHC experiments. If the light neutrino masses are hierarchical, we predict that the neutrinoless double-beta decay parameter m_{ee} is approxim...
Wang, Yuting; Xu, Lixin
2010-01-01
In this paper, the holographic dark energy model with a new infrared (IR) cut-off, in both the flat and the non-flat case, is confronted with the combined constraints of current cosmological observations: Type Ia Supernovae, Baryon Acoustic Oscillations, the current Cosmic Microwave Background, and the observational Hubble data. By utilizing the Markov Chain Monte Carlo (MCMC) method, we obtain the best-fit values of the parameters with $1\sigma, 2\sigma$ errors in the flat model: $\Omega_{b}h...
Estimation with Right-Censored Observations Under A Semi-Markov Model.
Zhao, Lihui; Hu, X Joan
2013-06-01
The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study.
Hidden Semi Markov Models for Multiple Observation Sequences: The mhsmm Package for R
DEFF Research Database (Denmark)
O'Connell, Jarad Michael; Højsgaard, Søren
2011-01-01
models only allow a geometrically distributed sojourn time in a given state, while hidden semi-Markov models extend this by allowing an arbitrary sojourn distribution. We demonstrate the software with simulation examples and an application involving the modelling of the ovarian cycle of dairy cows...
An Optimal Medium Access Control with Partial Observations for Sensor Networks
Directory of Open Access Journals (Sweden)
Servetto Sergio D
2005-01-01
We consider medium access control (MAC) in multihop sensor networks, where only partial information about the shared medium is available to the transmitter. We model our setting as a queuing problem in which the service rate of a queue is a function of a partially observed Markov chain representing the available bandwidth, and in which the arrivals are controlled based on the partial observations so as to keep the system in a desirable mildly unstable regime. The optimal controller for this problem satisfies a separation property: we first compute a probability measure on the state space of the chain, namely the information state, then use this measure as the new state on which the control decisions are based. We give a formal description of the system considered and of its dynamics, we formalize and solve an optimal control problem, and we show numerical simulations to illustrate with concrete examples properties of the optimal control law. We show how the ergodic behavior of our queuing model is characterized by an invariant measure over all possible information states, and we construct that measure. Our results can be specifically applied for designing efficient and stable algorithms for medium access control in multiple-accessed systems, in particular for sensor networks.
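The separation property above rests on computing the information state, i.e., a Bayes-filter update of a probability measure over the hidden chain. A minimal sketch, with invented transition and observation-likelihood values:

```python
def belief_update(belief, A, obs_lik):
    """One step of the information-state recursion: push the current belief
    through the transition matrix A, reweight by the likelihood of the new
    observation in each hidden state, and normalize."""
    n = len(belief)
    predicted = [sum(belief[i] * A[i][j] for i in range(n)) for j in range(n)]
    unnorm = [predicted[j] * obs_lik[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hidden bandwidth states (0 = low, 1 = high); all numbers are illustrative.
A = [[0.8, 0.2], [0.3, 0.7]]
lik_grew = [0.9, 0.2]   # likelihood of observing "queue grew" in each state
belief = belief_update([0.5, 0.5], A, lik_grew)
# An admission controller would now base its decision on `belief`.
```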
Advances in the control of markov jump linear systems with no mode observation
Vargas, Alessandro N; do Val, João B R
2016-01-01
This brief broadens readers’ understanding of stochastic control by highlighting recent advances in the design of optimal control for Markov jump linear systems (MJLS). It also presents an algorithm that attempts to solve this open stochastic control problem, and provides a real-time application for controlling the speed of direct current motors, illustrating the practical usefulness of MJLS. Particularly, it offers novel insights into the control of systems when the controller does not have access to the Markovian mode.
Nash Equilibria in Symmetric Graph Games with Partial Observation
DEFF Research Database (Denmark)
Bouyer, Patricia; Markey, Nicolas; Vester, Steen
2017-01-01
We investigate a model for representing large multiplayer games, which satisfy strong symmetry properties. This model is made of multiple copies of an arena; each player plays in his own arena, and can partially observe what the other players do. Therefore, this game has partial information...... and symmetry constraints, which make the computation of Nash equilibria difficult. We show several undecidability results, and for bounded-memory strategies, we precisely characterize the complexity of computing pure Nash equilibria for qualitative objectives in this game model....
Nash Equilibria in Symmetric Games with Partial Observation
DEFF Research Database (Denmark)
Bouyer, Patricia; Markey, Nicolas; Vester, Steen
2014-01-01
We investigate a model for representing large multiplayer games, which satisfy strong symmetry properties. This model is made of multiple copies of an arena; each player plays in his own arena, and can partially observe what the other players do. Therefore, this game has partial information...... and symmetry constraints, which make the computation of Nash equilibria difficult. We show several undecidability results, and for bounded-memory strategies, we precisely characterize the complexity of computing pure Nash equilibria (for qualitative objectives) in this game model....
Markov decision processes in artificial intelligence
Sigaud, Olivier
2013-01-01
Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr
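Planning in a fully observed MDP, the starting point of such a treatment, reduces to solving the Bellman optimality equations. A minimal value-iteration sketch on an invented two-state, two-action problem:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-9):
    """Repeated Bellman backups until the value function stops changing.
    P[a][s][t]: transition probability; R[a][s]: expected immediate reward."""
    n = len(P[0])
    V = [0.0] * n
    while True:
        newV = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                    for a in range(len(P)))
                for s in range(n)]
        if max(abs(newV[s] - V[s]) for s in range(n)) < tol:
            return newV
        V = newV

# Invented problem: action 0 tends to keep the current state, action 1 to move.
P = [[[0.9, 0.1], [0.1, 0.9]],
     [[0.2, 0.8], [0.8, 0.2]]]
R = [[1.0, 0.0],   # rewards of action 0 in states 0, 1
     [0.0, 2.0]]   # rewards of action 1 in states 0, 1
V = value_iteration(P, R)
```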
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Lewis, F L; Vamvoudakis, Kyriakos G
2011-02-01
Approximate dynamic programming (ADP) is a class of reinforcement learning methods that have shown their importance in a variety of applications, including feedback control of dynamical systems. ADP generally requires full information about the system internal states, which is usually not available in practical situations. In this paper, we show how to implement ADP methods using only measured input/output data from the system. Linear dynamical systems with deterministic behavior are considered herein, which are systems of great interest in the control system community. In control system theory, these types of methods are referred to as output feedback (OPFB). The stochastic equivalent of the systems dealt with in this paper is a class of partially observable Markov decision processes. We develop both policy iteration and value iteration algorithms that converge to an optimal controller that requires only OPFB. It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control. Only the order of the system, as well as an upper bound on its "observability index," must be known. The learned OPFB controller is in the form of a polynomial autoregressive moving-average controller that has equivalent performance with the optimal state variable feedback gain.
Kirkwood, James R
2015-01-01
Review of Probability: Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises. Discrete-Time, Finite-State Markov Chains: Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler's Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises. Discrete-Time, Infinite-State Markov Chains: Renewal Processes; Delayed Renewal Processes; Equilibrium State f...
Directory of Open Access Journals (Sweden)
Pablo J. Villacorta
2016-07-01
Markov chains are well-established probabilistic models of a wide variety of real systems that evolve along time. Countless examples of applications of Markov chains that successfully capture the probabilistic nature of real problems include areas as diverse as biology, medicine, social science, and engineering. One interesting feature which characterizes certain kinds of Markov chains is their stationary distribution, which stands for the global fraction of time the system spends in each state. The computation of the stationary distribution requires precise knowledge of the transition probabilities. When the only information available is a sequence of observations drawn from the system, such probabilities have to be estimated. Here we review an existing method to estimate fuzzy transition probabilities from observations and, with them, obtain the fuzzy stationary distribution of the resulting fuzzy Markov chain. The method also works when the user directly provides fuzzy transition probabilities. We provide an implementation in the R environment that is the first available to the community and serves as a proof of concept. We demonstrate the usefulness of our proposal with computational experiments on a toy problem, namely a time-homogeneous Markov chain that guides the randomized movement of an autonomous robot that patrols a small area.
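Stripped of the fuzzy layer, the crisp core of the pipeline (estimate transition probabilities from an observed state sequence, then compute the stationary distribution) can be sketched as follows. The data and the use of power iteration are assumptions for the sketch, not the paper's R implementation:

```python
def estimate_transitions(seq, n_states):
    """Row-normalized transition counts from an observed state sequence."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

def stationary(P, iters=1000):
    """Stationary distribution by power iteration (assumes an ergodic chain)."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

seq = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1]   # toy observation sequence
P = estimate_transitions(seq, 2)
dist = stationary(P)    # fraction of time the chain spends in each state
```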
NonMarkov Ito Processes with 1-state memory
McCauley, Joseph L.
2010-08-01
A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito process superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
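The Chapman-Kolmogorov equation itself is easy to verify numerically for a concrete transition density. The sketch below checks it for the memoryless Gaussian (Wiener) density p(x, t | y, s) = N(y, t - s) by discretizing the intermediate integral; this illustrates the equation only, not the paper's 1-state-memory construction:

```python
import math

def p(x, t, y, s):
    """Wiener transition density: Gaussian with mean y and variance t - s."""
    var = t - s
    return math.exp(-(x - y) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def ck_rhs(x, t, y, s, u, grid):
    """Discretized Chapman-Kolmogorov integral over the intermediate state z
    at intermediate time u: integral of p(x,t|z,u) * p(z,u|y,s) dz."""
    dz = grid[1] - grid[0]
    return sum(p(x, t, z, u) * p(z, u, y, s) for z in grid) * dz

grid = [-10 + 0.01 * k for k in range(2001)]   # z-grid covering [-10, 10]
lhs = p(0.7, 2.0, 0.0, 0.0)
rhs = ck_rhs(0.7, 2.0, 0.0, 0.0, 1.0, grid)
# lhs and rhs agree up to discretization error
```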
Markov Chains and Markov Processes
Ogunbayo, Segun
2016-01-01
A Markov chain, named after Andrey Markov, is a mathematical system that moves from one state to another. Many real-world systems contain uncertainty. This study helps us to understand the basic idea of a Markov chain and how it is useful in our daily lives. Predictions of the future have long been uncertain, and different games involve different expectations or outcomes. That is why we need Markov chains to predict o...
The energy efficiency paradox revisited through a partial observability approach
International Nuclear Information System (INIS)
Kounetas, Kostas; Tsekouras, Kostas
2008-01-01
The present paper examines the energy efficiency paradox demonstrated in Greek manufacturing firms through a partial observability approach. The data set used has resulted from a survey carried out among 161 energy-saving technology firm adopters. Maximum likelihood estimates that arise from an incidental truncation model reveal that the adoption of the energy-saving technologies is indeed strongly correlated to the returns of assets that are required in order to undertake the corresponding investments. The source of the energy efficiency paradox lies within a wide range of factors. Policy schemes that aim to increase the adoption rate of energy-saving technologies within the field of manufacturing are significantly affected by differences in the size of firms. Finally, mixed policies seem to be more effective than policies that are only capital subsidy or regulation oriented
Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model
Von Davier, Matthias; Yamamoto, Kentaro
2004-01-01
The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…
Real-time characterization of partially observed epidemics using surrogate models.
Energy Technology Data Exchange (ETDEWEB)
Safta, Cosmin; Ray, Jaideep; Lefantzi, Sophia; Crary, David (Applied Research Associates, Arlington, VA); Sargsyan, Khachik; Cheng, Karen (Applied Research Associates, Arlington, VA)
2011-09-01
We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes, without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model on a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10^2) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models, prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as
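The surrogate idea can be illustrated with ordinary least squares standing in for the polynomial-chaos projection. Everything below (the toy "expensive" model, the quadratic basis, the training designs) is an assumption for the sketch, not the paper's construction:

```python
def expensive_model(theta):
    """Stand-in for a costly epidemic simulation; quadratic, so the fit is exact."""
    return 2.0 + 3.0 * theta + 0.5 * theta ** 2

def fit_quadratic(xs, ys):
    """Least squares for y ~ c0 + c1*x + c2*x^2 via the 3x3 normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for k in range(col, 3):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, 3))) / A[r][r]
    return c

xs = [0.0, 0.5, 1.0, 1.5, 2.0]                       # training designs
coef = fit_quadratic(xs, [expensive_model(x) for x in xs])

def surrogate(theta):
    # cheap polynomial evaluated inside the (omitted) MCMC loop
    return coef[0] + coef[1] * theta + coef[2] * theta ** 2
```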
Benoit, Julia S; Chan, Wenyaw; Luo, Sheng; Yeh, Hung-Wen; Doody, Rachelle
2016-04-30
Understanding the dynamic disease process is vital in early detection, diagnosis, and measuring progression. Continuous-time Markov chain (CTMC) methods have been used to estimate state-change intensities but challenges arise when stages are potentially misclassified. We present an analytical likelihood approach where the hidden state is modeled as a three-state CTMC model allowing for some observed states to be possibly misclassified. Covariate effects of the hidden process and misclassification probabilities of the hidden state are estimated without information from a 'gold standard' as comparison. Parameter estimates are obtained using a modified expectation-maximization (EM) algorithm, and identifiability of CTMC estimation is addressed. Simulation studies and an application studying Alzheimer's disease caregiver stress-levels are presented. The method was highly sensitive to detecting true misclassification and did not falsely identify error in the absence of misclassification. In conclusion, we have developed a robust longitudinal method for analyzing categorical outcome data when classification of disease severity stage is uncertain and the purpose is to study the process' transition behavior without a gold standard. Copyright © 2016 John Wiley & Sons, Ltd.
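The setup above, a hidden CTMC observed through a misclassification matrix, can be simulated in a few lines. The rates and error probabilities below are invented for the sketch and are loosely inspired by a three-state disease-progression model with an absorbing death state:

```python
import random

def simulate_ctmc(Q, state, t_end, rng):
    """Gillespie-style simulation of a CTMC; Q is the generator matrix
    (off-diagonal entries are rates, each row sums to zero)."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state][state]
        if rate <= 0.0:          # absorbing state (e.g. death)
            return path
        t += rng.expovariate(rate)
        if t >= t_end:
            return path
        r, acc = rng.random() * rate, 0.0
        for j, q in enumerate(Q[state]):   # next state ~ off-diagonal rates
            if j == state:
                continue
            acc += q
            if r <= acc:
                state = j
                break
        path.append((t, state))

def misclassify(state, E, rng):
    """Report the true state through the error matrix E[true][observed]."""
    r, acc = rng.random(), 0.0
    for obs, prob in enumerate(E[state]):
        acc += prob
        if r <= acc:
            return obs
    return len(E[state]) - 1

Q = [[-0.5, 0.4, 0.1],   # invented transition rates; state 2 is absorbing
     [0.2, -0.6, 0.4],
     [0.0, 0.0, 0.0]]
E = [[0.9, 0.1, 0.0],    # invented misclassification probabilities
     [0.1, 0.85, 0.05],
     [0.0, 0.0, 1.0]]
rng = random.Random(42)
path = simulate_ctmc(Q, 0, t_end=10.0, rng=rng)
observed = [(t, misclassify(s, E, rng)) for t, s in path]
```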
Partial Linearization of Mechanical Systems with Application to Observer Design
Sarras, Ioannis; Venkatraman, Aneesh; Ortega, Romeo; Schaft, Arjan van der
2008-01-01
We consider general mechanical systems and establish a necessary and sufficient condition for the existence of a suitable change in the generalized momentum coordinates such that the new dynamics become linear in the transformed momenta. The class of systems which can be (partially) linearized by
Learning classifier systems with memory condition to solve non-Markov problems
Zang, Zhaoxiang; Li, Dehua; Wang, Junying
2012-01-01
In the family of Learning Classifier Systems, the classifier system XCS has been successfully used for many applications. However, the standard XCS has no memory mechanism and can only learn an optimal policy in Markov environments, where the optimal action is determined solely by the current sensory input. In practice, most environments are partially observable with respect to the agent's sensations; these are also known as non-Markov environments. Within these environments, XCS either fail...
El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar
2014-11-01
Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
An observation of a partially albinistic zenaida macroura (Mourning Dove)
Berdeen, James; Otis, D.L.
2011-01-01
Three of the 4 forms of albinism that occur in avifauna have been detected in Zenaida macroura (Mourning Dove). Albinism is rare in this species, and the incidence rate of each age and sex cohort is not well known. Consequently, we examined the pigmentation of Mourning Doves encountered in the Coastal Plain of South Carolina, and classified the age and sex of all individuals. One adult male Mourning Dove had unusually light coloration of some feathers and the upper mandible. This pigmentation is consistent with partial albinism. This was the only individual out of 10,749 examined that appeared to be albinistic. This low incidence rate of albinism supports the conclusion that this condition is relatively rare in Mourning Doves (Mirarchi 1993).
Distinguishing Hidden Markov Chains
Kiefer, Stefan; Sistla, A. Prasad
2015-01-01
Hidden Markov Chains (HMCs) are commonly used mathematical models of probabilistic systems. They are employed in various fields such as speech recognition, signal processing, and biological sequence analysis. We consider the problem of distinguishing two given HMCs based on an observation sequence that one of the HMCs generates. More precisely, given two HMCs and an observation sequence, a distinguishing algorithm is expected to identify the HMC that generates the observation sequence. Two HM...
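In its simplest form, the distinguishing task reduces to comparing the forward-algorithm likelihood of the observation sequence under the two candidate models and picking the larger. A toy sketch with invented parameters (the paper's algorithmic and complexity results are not reproduced here):

```python
def likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete-emission hidden Markov chain;
    pi: initial distribution, A: transitions, B[state][symbol]: emissions."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two invented HMCs sharing transition structure but with opposite emission
# preferences: model 1 favors symbol 0, model 2 favors symbol 1.
pi = [0.5, 0.5]
A = [[0.7, 0.3], [0.3, 0.7]]
B1 = [[0.9, 0.1], [0.8, 0.2]]
B2 = [[0.1, 0.9], [0.2, 0.8]]
obs = [0, 0, 1, 0, 0]
guess = 1 if likelihood(obs, pi, A, B1) >= likelihood(obs, pi, A, B2) else 2
```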
Polynomial Time Decidability of Weighted Synchronization under Partial Observability
DEFF Research Database (Denmark)
Kretínsky, Jan; Larsen, Kim Guldstrand; Laursen, Simon
2015-01-01
We consider weighted automata with both positive and negative integer weights on edges and study the problem of synchronization using adaptive strategies that may only observe whether the current weight-level is negative or nonnegative. We show that the synchronization problem is decidable...
Likelihood based inference for partially observed renewal processes
van Lieshout, Maria Nicolette Margaretha
2016-01-01
This paper is concerned with inference for renewal processes on the real line that are observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point process theory to propose a Monte Carlo maximum
Hand gesture recognition in confined spaces with partial observability and occultation constraints
Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2016-05-01
Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference to human group activity perception and understanding. To validate our approach, synthetic (graphical data from virtual environment) and real physical environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.
Markov Switching Autoregressive Modeling
Ariyani, Fiqria Devi; Warsito, Budi; Yasin, Hasbi
2014-01-01
The transition from depreciation to appreciation of an exchange rate is one example of regime switching that is ignored by classic time series models such as ARIMA, ARCH, or GARCH. Therefore, economic variables are modeled here by Markov Switching Autoregressive (MSAR) models, which account for regime switching. MLE is not directly applicable to parameter estimation because the regime is an unobservable variable, so filtering and smoothing are applied to obtain the regime probabilities of each observation. Using this model, tran...
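The filtering step mentioned in the abstract (computing regime probabilities when the regime itself is unobserved) is typically the Hamilton filter. A minimal sketch for a 2-regime switching AR(1) model follows; all parameter values and the data series are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hamilton filter for a 2-regime Markov switching AR(1) model.
# All parameter values below are hypothetical, chosen for illustration.
phi = np.array([0.2, 0.8])      # AR(1) coefficient in each regime
mu = np.array([0.0, 1.0])       # intercept in each regime
sigma = np.array([0.5, 1.5])    # innovation std in each regime
P = np.array([[0.95, 0.05],     # P[i, j] = Pr(s_t = j | s_{t-1} = i)
              [0.10, 0.90]])

def hamilton_filter(y):
    """Return filtered regime probabilities Pr(s_t | y_1, ..., y_t)."""
    T = len(y)
    probs = np.zeros((T, 2))
    # Initialize with the stationary distribution of the regime chain.
    A = np.vstack([(P.T - np.eye(2))[:1], np.ones((1, 2))])
    pi = np.linalg.solve(A, np.array([0.0, 1.0]))
    probs[0] = pi
    for t in range(1, T):
        pred = pi @ P                          # one-step-ahead regime prediction
        resid = y[t] - mu - phi * y[t - 1]     # regime-specific residuals
        lik = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        post = pred * lik                      # unnormalized filtered probabilities
        pi = post / post.sum()
        probs[t] = pi
    return probs

y = np.array([0.1, 0.0, 0.2, 2.5, 3.1, 2.8, 0.3, 0.1])
p = hamilton_filter(y)   # rows sum to one; regime 1 dominates around y = 2.5-3.1
```

The smoothing step would run a similar backward recursion over these filtered probabilities.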
Markov processes and controlled Markov chains
Filar, Jerzy; Chen, Anyue
2002-01-01
The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...
Vujačić, Ivan; Dattner, Itai
In this paper we use the sieve framework to prove consistency of the ‘direct integral estimator’ of parameters for partially observed systems of ordinary differential equations, which are commonly used for modeling dynamic processes.
Directory of Open Access Journals (Sweden)
Yanfeng Wang
2017-01-01
This paper investigates the observer-based controller design problem for a class of nonlinear networked control systems with random time-delays. The nonlinearity is assumed to satisfy a global Lipschitz condition, and two dependent Markov chains are employed to describe the time-delay from sensor to controller (S-C delay) and the time-delay from controller to actuator (C-A delay), respectively. The transition probabilities of the S-C delay and the C-A delay are both assumed to be partly inaccessible. Sufficient conditions for the stochastic stability of the closed-loop systems are obtained by constructing a proper Lyapunov functional. Methods for calculating the controller and observer gain matrices are also given. Two numerical examples are used to illustrate the effectiveness of the proposed method.
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within this envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
Xu, Z.; Mace, G. G.; Posselt, D. J.
2017-12-01
As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), the observing systems considered in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures, as well as visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
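The MCMC machinery described in this abstract can be illustrated with a generic random-walk Metropolis sampler on a toy one-dimensional posterior. This is a sketch of the general algorithm only, not the authors' retrieval suite; the target density, step size, and chain length are hypothetical:

```python
import numpy as np

# Random-walk Metropolis sampler targeting a toy 1-D Gaussian posterior.
# A generic MCMC sketch; the target N(2, 1) and step size are hypothetical.
rng = np.random.default_rng(0)

def log_post(x):
    return -0.5 * (x - 2.0) ** 2   # unnormalized log-density of N(2, 1)

def metropolis(n_steps, step=1.0, x0=0.0):
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()     # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis(20000)
post = chain[5000:]   # discard burn-in; sample mean ~ 2, sample std ~ 1
```

The retained samples approximate the full posterior, which is what allows the objective examination of measurement information content described above.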
Bayesian analysis of Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2006-01-01
Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....
Markov stochasticity coordinates
International Nuclear Information System (INIS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard
2007-01-01
We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...
Yu, Xiang
2011-01-01
We consider a model of optimal investment and consumption with both habit formation and partial observations in an incomplete Itô-process market. The investor chooses his consumption under the addictive-habits constraint while observing only the market stock prices but not the instantaneous rate of return. Applying the Kalman-Bucy filtering theorem and dynamic programming arguments, we solve the associated Hamilton-Jacobi-Bellman (HJB) equation explicitly for the path-dependent stochas...
Abbott, B. P.; Abbott, R.; Abbott, D.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allen, G; Allocca, A.; Almoubayyed, H.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bawaj, M.; Bazzan, M.; Becsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Etienne, Z. B.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, D J; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blari, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, J.G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderon; Callister, T. A.; Calloni, E.; Camp, J. 
B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y; Cheng, H. -P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S. S. Y.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, Laura; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Costa, C. F. Da Silva; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; De, S.; Debra, D.; Deelman, E; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.A.; Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M. Di; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Alvarez, M. Dovale; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Duncan, J.; Dwyer, S. E.; Edo, T. B.; Edwards, M. 
C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M; Fong, H.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gabel, M.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garufi, F.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.J.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, A.S.P.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.A.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. 
A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J. -M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kefelian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.E.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, W.; Kim, S.W.; Kim, Y.M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kramer, C.; Kringel, V.; Krishnan, B.; Krolak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang-Cheol, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, W. H.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Luck, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana Hernandez, I.; Magana-Sandoval, F.; Magana Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markakis, C.; Markosyan, A. 
S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mayani, R.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P.G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nelemans, G.; Nelson, T. J. N.; Gutierrez-Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Ng, K. K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. 
R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Castro-Perez, J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Purrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Ramirez, K. E.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosinska, D.; Ross, M. P.; Rowan, S.; Rudiger, A.; Ruggi, P.; Ryan, K.; Rynge, M.; Sachdev, Perminder S; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schonbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A.; Shahriar, M. S.; Shao, L.P.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. 
M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, António Dias da; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, R. J. E.; Smith, R. J. E.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tapai, M.; Taracchini, A.; Taylor, J. A.; Taylor, W.R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahi, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; Van Beuzekom, Martin; van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasuth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Vicere, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, M.; Wang, Y. -F.; Wang, Y. -F.; Ward, L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. 
F.; Whittle, C.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, G.W.K.; Worden, J.; Wright, J.L.; Wu, D.S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrozny, A.; Zanolin, M.; Zelenova, T.; Zendri, J. -P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y. -H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; Suvorova, S.; Moran, W.; Evans, J.R.
2017-01-01
Results are presented from a semicoherent search for continuous gravitational waves from the brightest low-mass X-ray binary, Scorpius X-1, using data collected during the first Advanced LIGO observing run. The search combines a frequency domain matched filter (Bessel-weighted F-statistic) with a
Ahmad, Hamzah; Namerikawa, Toru
2010-01-01
This paper presents H∞ Filter SLAM, also known as the minimax filter, to estimate the robot and landmark locations, with an analysis of partial observability. Some convergence conditions are also presented to aid the analysis. Because SLAM is a controllable but unobservable problem, it is difficult to estimate the positions of the robot and landmarks even though the control inputs to the system are known. As a result, Covariance Inflation, which is a method of adding a pseudo positive semidef...
Grabski
2014-01-01
Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and
International Nuclear Information System (INIS)
Gershgorin, B.; Majda, A.J.
2011-01-01
A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.
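As a point of reference for the filtering discussion in this abstract, the standard scalar Kalman filter (the linear baseline that the NEKF generalizes) can be sketched as follows. This is not the paper's NEKF or its tracer model; the AR(1) signal dynamics and noise levels are hypothetical:

```python
import numpy as np

# Scalar Kalman filter for an AR(1) signal observed in Gaussian noise:
#   x_t = a x_{t-1} + w_t, w ~ N(0, q);   y_t = x_t + v_t, v ~ N(0, r).
# A linear baseline sketch, not the paper's NEKF; a, q, r are hypothetical.
rng = np.random.default_rng(1)
a, q, r = 0.9, 0.5, 2.0
T = 500

x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

m, v = 0.0, 1.0                        # filter mean and variance
est = np.zeros(T)
for t in range(1, T):
    m_pred = a * m                     # predict
    v_pred = a * a * v + q
    k = v_pred / (v_pred + r)          # Kalman gain
    m = m_pred + k * (y[t] - m_pred)   # correct with the noisy observation
    v = (1 - k) * v_pred
    est[t] = m

mse_filter = np.mean((est - x) ** 2)   # filtered error
mse_raw = np.mean((y - x) ** 2)        # raw observation error (~ r)
```

The filtered estimate beats the raw observations because the model dynamics are exploited; the NEKF plays the analogous role when the exact nonlinear statistics replace the linear prediction step.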
Sarasola, Xabier; Brenner, Paul; Hahn, Michael; Pedersen, Thomas
2009-11-01
The Columbia Non-neutral Torus (CNT) is the first stellarator devoted to the study of pure electron, partially neutralized and positron-electron plasmas. To date, CNT usually operates with electron rich plasmas (with negligible ion density) [1], but a stellarator can also confine plasmas of arbitrary degree of neutralization. In CNT the accumulation of ions alters the equilibrium of electron plasmas and a global instability has been observed when the ion fraction exceeds 10 %. A characterization of this instability is presented in [2], analyzing its parameter dependence and spatial structure (non- resonant with rational surfaces). A new set of experiments is currently underway studying plasmas of arbitrary degree of neutralization, ranging from pure electron to quasineutral plasmas. Basic observations show that the plasma potential decouples from emitter bias when we increase the degree of the neutralization of our plasmas. Partially neutralized plasmas are also characterized by multiple mode behavior with dominant modes between 20 and 200 kHz. When the plasma becomes quasineutral, it reverts to single mode behavior. The first results on partially neutralized plasmas confined on magnetic surfaces will be presented. [1] J. Kremer, PRL 97, (2006) 095003 [2] Q. Marksteiner, PRL 100 (2008) 065002
Fitting Hidden Markov Models to Psychological Data
Directory of Open Access Journals (Sweden)
Ingmar Visser
2002-01-01
Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC, and related criteria, and introduce a prediction error measure for assessing goodness of fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data, we apply selection criteria, fit models with constraints, and assess goodness of fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
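The AIC and BIC criteria compared in this abstract are computed directly from a fitted model's maximized log-likelihood and parameter count. A minimal sketch follows; the two candidate fits and their log-likelihood values are hypothetical:

```python
import numpy as np

# AIC and BIC computed from a fitted model's maximized log-likelihood.
# The two candidate fits below are hypothetical, for illustration only.
def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * np.log(n)

n = 200  # number of observations
# (maximized log-likelihood, number of free parameters) per candidate model
candidates = {"2-state HMM": (-310.4, 7), "3-state HMM": (-305.9, 14)}

for name, (ll, k) in candidates.items():
    print(f"{name}: AIC={aic(ll, k):.1f}, BIC={bic(ll, k, n):.1f}")
# The lower-criterion model is preferred; here both criteria pick the 2-state HMM.
```

BIC's log(n) penalty punishes the extra parameters of the larger model more heavily than AIC's constant factor of 2, which is why the two criteria can disagree on real data.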
Markov Decision Process Measurement Model.
LaMar, Michelle M
2018-03-01
Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
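The Markov decision process underlying such a measurement model is solved by standard dynamic programming; value iteration is the canonical method. A minimal sketch on a randomly generated toy MDP follows; the states, actions, and rewards are hypothetical, not the paper's strategy game:

```python
import numpy as np

# Value iteration for a tiny random MDP. States, actions, and rewards are
# hypothetical; this is the generic solver, not the paper's estimation model.
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

V = np.zeros(n_states)
for _ in range(500):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] V[s']
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:   # converged (gamma-contraction)
        break
    V = V_new

policy = Q.argmax(axis=1)   # greedy policy for the converged values
```

In the measurement-model setting, the learner's within-task actions are assumed to follow (approximately) such a policy, which is what links observed actions to the latent traits.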
Nonlinear Inference in Partially Observed Physical Systems and Deep Neural Networks
Rozdeba, Paul J.
The problem of model state and parameter estimation is a significant challenge in nonlinear systems. Due to practical considerations of experimental design, it is often the case that physical systems are partially observed, meaning that data is only available for a subset of the degrees of freedom required to fully model the observed system's behaviors and, ultimately, predict future observations. Estimation in this context is highly complicated by the presence of chaos, stochasticity, and measurement noise in dynamical systems. One of the aims of this dissertation is to analyze state and parameter estimation simultaneously as a regularized inverse problem, where the introduction of a model makes it possible to reverse the forward problem of partial, noisy observation, and as a statistical inference problem using data assimilation to transfer information from measurements to the model states and parameters. Ultimately these two formulations achieve the same goal. Similar aspects that appear in both are highlighted as a means for better understanding the structure of the nonlinear inference problem. An alternative approach to data assimilation that uses model reduction is then examined as a way to eliminate unresolved nonlinear gating variables from neuron models. In this formulation, only measured variables enter into the model, and the resulting errors are themselves modeled by nonlinear stochastic processes with memory. Finally, variational annealing, a data assimilation method previously applied to dynamical systems, is introduced as a potentially useful tool for understanding deep neural network training in machine learning by exploiting similarities between the two problems.
DEFF Research Database (Denmark)
Justesen, Jørn
2005-01-01
A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.
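The explicit entropy calculation for such constructions builds on the entropy rate of the underlying Markov chain, H = -Σ_i π_i Σ_j P_ij log₂ P_ij, where π is the stationary distribution. A minimal sketch for a hypothetical two-state chain:

```python
import numpy as np

# Entropy rate (bits per symbol) of a stationary Markov chain:
#   H = -sum_i pi_i sum_j P_ij log2 P_ij,
# with pi the stationary distribution. The chain below is hypothetical.
def entropy_rate(P):
    n = P.shape[0]
    # Stationary distribution: solve pi (P - I) = 0 with sum(pi) = 1.
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones((1, n))])
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    # Guard zero entries so 0 * log2(0) contributes 0 rather than NaN.
    plogp = np.where(P > 0, P * np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return -(pi @ plogp.sum(axis=1))

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
H = entropy_rate(P)   # about 0.557 bits per symbol
```

For the 2-D construction in the paper the field entropy involves the same chain in both directions, so this 1-D entropy rate is the basic building block.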
Screening for a Chronic Disease: A Multiple Stage Duration Model with Partial Observability.
Mroz, Thomas A; Picone, Gabriel; Sloan, Frank; Yashkin, Arseniy P
2016-08-01
We estimate a dynamic multi-stage duration model to investigate how early detection of diabetes can delay the onset of lower extremity complications and death. We allow for partial observability of the disease stage, unmeasured heterogeneity, and endogenous timing of diabetes screening. Timely diagnosis appears important. We evaluate the effectiveness of two potential policies to reduce the monetary costs of frequent screening in terms of lost longevity. Compared to the status quo, the more restrictive policy yields an implicit value for an additional year of life of about $50,000, while the less restrictive policy implies a value of about $120,000.
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
A Bayesian model for binary Markov chains
Directory of Open Access Journals (Sweden)
Belkheir Essebbar
2004-02-01
This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is based on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
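For intuition, a non-MCMC special case: if the rows of a binary chain's transition matrix are given independent Jeffreys Beta(1/2, 1/2) priors (the paper's prior additionally allows correlation between rows, which is what necessitates MCMC), the posterior is Beta and the estimator is closed form. A sketch with hypothetical transition counts:

```python
# Conjugate sketch: row-wise counts n[i][j] of observed i -> j transitions
# combine with an independent Jeffreys Beta(1/2, 1/2) prior per row to give
# a Beta posterior (the correlated prior of the paper needs MCMC instead).
counts = {0: {0: 30, 1: 10},   # hypothetical transition counts
          1: {0: 5,  1: 55}}

post_mean = {}
for i, row in counts.items():
    a = row[1] + 0.5            # Beta shape for transitioning to state 1
    b = row[0] + 0.5            # Beta shape for transitioning to state 0
    post_mean[i] = a / (a + b)  # posterior mean of P(i -> 1)
```

Here post_mean[0] = 10.5/41 and post_mean[1] = 55.5/61, shrinking the raw frequencies slightly toward 1/2.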
MULTIWAVELENGTH OBSERVATIONS OF A PARTIALLY ERUPTIVE FILAMENT ON 2011 SEPTEMBER 8
Energy Technology Data Exchange (ETDEWEB)
Zhang, Q. M.; Ning, Z. J.; Zhou, T. H.; Ji, H. S.; Feng, L. [Key Laboratory for Dark Matter and Space Science, Purple Mountain Observatory, CAS, Nanjing 210008 (China); Guo, Y.; Cheng, X. [School of Astronomy and Space Science, Nanjing University, Nanjing 210093 (China); Wiegelmann, T., E-mail: zhangqm@pmo.ac.cn [Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg-3, D-37077 Göttingen (Germany)
2015-05-20
In this paper, we report our multiwavelength observations of a partial filament eruption event in NOAA active region (AR) 11283 on 2011 September 8. A magnetic null point and the corresponding spine and separatrix surface are found in the AR. Beneath the null point, a sheared arcade supports the filament along the highly complex and fragmented polarity inversion line. After being activated, the sigmoidal filament erupted and split into two parts. The major part rose at speeds of 90–150 km s⁻¹ before reaching a maximum apparent height of ∼115 Mm. Afterward, it returned to the solar surface in a bumpy way at speeds of 20–80 km s⁻¹. The rising and falling motions were clearly observed in the extreme-ultraviolet, UV, and Hα wavelengths. The failed eruption of the main part was associated with an M6.7 flare with a single hard X-ray source. The runaway part of the filament, however, separated from and rotated around the major part for ∼1 turn at the eastern leg before escaping from the corona, probably along large-scale open magnetic field lines. The ejection of the runaway part resulted in a very faint coronal mass ejection that propagated at an apparent speed of 214 km s⁻¹ in the outer corona. The filament eruption also triggered a transverse kink-mode oscillation of the adjacent coronal loops in the same AR, with an amplitude of 1.6 Mm and a period of 225 s. Our results are important for understanding the mechanisms of partial filament eruptions, and provide new constraints for theoretical models. The multiwavelength observations also shed light on space weather prediction.
Online POMDP Algorithms for Very Large Observation Spaces
2017-06-06
Partially Observable Markov Decision Process (POMDP) provides a mathematically elegant modeling tool for planning and control under uncertainty. Substantial progress ... The problems considered include a road network with some roads that may be blocked, as well as a reduction from the optimal decision tree (ODT) problem that is used to show that ...
janssen, Anja; Segers, Johan
2013-01-01
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In ...
Is partially automated driving a bad idea? Observations from an on-road study.
Banks, Victoria A; Eriksson, Alexander; O'Donoghue, Jim; Stanton, Neville A
2018-04-01
The automation of longitudinal and lateral control has enabled drivers to become "hands and feet free," but they are required to remain in an active monitoring state and to resume manual control if required. This represents the single largest allocation-of-system-function problem in vehicle automation, as the literature suggests that humans are notoriously inefficient at completing prolonged monitoring tasks. To further explore whether partially automated driving solutions can appropriately support the driver in completing their new monitoring role, video observations were collected as part of an on-road study using a Tesla Model S operated in Autopilot mode. A thematic analysis of the video data suggests that drivers are not being properly supported in adhering to their new monitoring responsibilities and instead demonstrate behaviour indicative of complacency and over-trust. These attributes may encourage drivers to take more risks whilst out on the road. Copyright © 2017 Elsevier Ltd. All rights reserved.
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experiment conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
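The core idea, columns of the output sensitivity matrix becoming linearly dependent when parameters are functionally interrelated, can be sketched on a hypothetical toy model where only the sum k1 + k2 is identifiable (an illustrative example, not one from the paper):

```python
import numpy as np

# Toy partially observed linear model where only k1 + k2 is identifiable:
#   y(t) = exp(-(k1 + k2) * t)
# Both sensitivities dy/dk1 and dy/dk2 equal -t * y, so the columns of the
# output sensitivity matrix are linearly dependent.
k1, k2 = 0.3, 0.7
t = np.linspace(0.1, 5.0, 50)
y = np.exp(-(k1 + k2) * t)

# Output sensitivity matrix with one column per parameter.
S = np.column_stack([-t * y, -t * y])

# Rank deficiency of S signals a correlated (non-identifiable) pair.
rank = np.linalg.matrix_rank(S, tol=1e-10)
```

The rank comes out as 1 rather than 2, flagging the pair (k1, k2) as structurally non-identifiable; only their sum can be estimated from y.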
Majeed, Muhammad Usman
2017-07-19
Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines including biomedical engineering, mechanics of materials and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy especially when the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for steady-state Laplace equation. It is well-known that source and boundary estimation problems for the elliptic PDEs are highly sensitive to noise in the data. For this, an optimal iterative observer algorithm, which is a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for Poisson equation for noise-free and noisy data cases respectively. Next, a divide and conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and the source data estimation problems for the steady-state Laplace and Poisson kind of systems respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.
Semi-Markov Arnason-Schwarz models.
King, Ruth; Langrock, Roland
2016-06-01
We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
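The geometric dwell-time property motivating the semi-Markov extension can be checked directly: for a time-homogeneous first-order chain with self-transition probability p, P(D = k) = p^(k−1)(1 − p), so the modal dwell time is always 1 and the mean is 1/(1 − p). A small sketch (illustrative p, not from the house finch data):

```python
# Dwell time in a state of a first-order chain with self-transition
# probability p is geometric: P(D = k) = p**(k-1) * (1 - p), mode = 1.
p = 0.8
ks = range(1, 60)
pmf = [p**(k - 1) * (1 - p) for k in ks]

mode = 1 + max(range(len(pmf)), key=lambda i: pmf[i])  # modal dwell time
mean = sum(k * pk for k, pk in zip(ks, pmf))           # ~ 1 / (1 - p) = 5
```

The semi-Markov Arnason-Schwarz model replaces this forced geometric law with, e.g., a shifted Poisson or negative binomial dwell-time distribution.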
A Markov Process Inspired Cellular Automata Model of Road Traffic
Wang, Fa; Li, Li; Hu, Jianming; Ji, Yan; Yao, Danya; Zhang, Yi; Jin, Xuexiang; Su, Yuelong; Wei, Zheng
2008-01-01
To provide a more accurate description of driving behaviors in vehicle queues, a model named the Markov-Gap cellular automaton model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed distribution of practical gaps. The multiformity of this Markov process gives the model enough flexibility to describe various driving behaviors. Two examples are given to show how to specialize i...
Hard X-Ray Emission from Partially Occulted Solar Flares: RHESSI Observations in Two Solar Cycles
Energy Technology Data Exchange (ETDEWEB)
Effenberger, Frederic; Costa, Fatima Rubio da; Petrosian, Vahé [Department of Physics and KIPAC, Stanford University, Stanford, CA 94305 (United States); Oka, Mitsuo; Saint-Hilaire, Pascal; Krucker, Säm [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States); Liu, Wei [Bay Area Environmental Research Institute, 625 2nd Street, Suite 209, Petaluma, CA 94952 (United States); Glesener, Lindsay, E-mail: feffen@stanford.edu, E-mail: frubio@stanford.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States)
2017-02-01
Flares close to the solar limb, where the footpoints are occulted, can reveal the spectrum and structure of the coronal looptop source in X-rays. We aim to study the properties of the corresponding energetic electrons near their acceleration site, without footpoint contamination. To this end, a statistical study of partially occulted flares observed with the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) is presented here, covering a large part of solar cycles 23 and 24. We perform detailed spectral, imaging, and light curve analyses for 116 flares and include contextual observations from SDO and STEREO when available, providing further insights into flare emission that were previously not accessible. We find that most spectra are fitted well with a thermal component plus a broken power-law, non-thermal component. A thin-target kappa distribution model gives satisfactory fits after the addition of a thermal component. X-ray imaging reveals small spatial separation between the thermal and non-thermal components, except for a few flares with a richer coronal source structure. A comprehensive light curve analysis shows a very good correlation between the derivative of the soft X-ray flux (from GOES) and the hard X-rays for a substantial number of flares, indicative of the Neupert effect. The results confirm that non-thermal particles are accelerated in the corona, and estimated timescales support the validity of a thin-target scenario with similar magnitudes of thermal and non-thermal energy fluxes.
A Bayesian method for inferring transmission chains in a partially observed epidemic.
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef M.; Ray, Jaideep
2008-10-01
We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.
Stochastic simulations of conditional states of partially observed systems, quantum and classical
International Nuclear Information System (INIS)
Gambetta, Jay; Wiseman, H M
2005-01-01
In a partially observed quantum or classical system, the information that we cannot access results in our description of the system becoming mixed, even if we have perfect initial knowledge. That is, if the system is quantum the conditional state will be given by a state matrix ρ_r(t), and if classical, the conditional state will be given by a probability distribution P_r(x,t), where r is the result of the measurement. Thus to determine the evolution of this conditional state, under continuous-in-time monitoring, requires a numerically expensive calculation. In this paper we demonstrate a numerical technique based on linear measurement theory that allows us to determine the conditional state using only pure states. That is, our technique reduces the problem size by a factor of N, the number of basis states for the system. Furthermore we show that our method can be applied to joint classical and quantum systems such as arise in modelling realistic (finite bandwidth, noisy) measurement
Rufo Campos, M; Carreño, M
2009-01-01
It is important to conduct studies on the utilization of new antiepileptic drugs in order to improve their use. Our objective is to describe the use patterns of carbamazepine and oxcarbazepine. This was an observational, cross-sectional, national study with 58 investigators that included 185 pediatric patients with partial epilepsy. We recorded prescription patterns, quality of life (QoL) using the QoL scale in childhood epilepsy (CAVE), and use of resources. A total of 134 patients were under treatment with oxcarbazepine (72.4%), with a mean dose of 22.3 mg/kg/day; standard deviation (SD): 8.04; 95% confidence interval (CI): 20.9 to 23.7, and 51 (27.6%) with carbamazepine, mean dose of 14 mg/kg/day; SD: 6.2; 95% CI: 12.3 to 15.8. A total of 19.4% and 21.6%, respectively, followed multiple drug treatment. The mean scores on functional dimensions of CAVE were (out of 5): school attendance: 4.5; SD: 0.7; social relationships: 4.1; SD: 0.9; and autonomy: 3.9; SD: 1.9. Patients receiving multiple drug therapy had worse quality-of-life results (p ...). Both drugs are used in lower doses than recommended and the dosing is not adjusted for weight. Underdosing may lead to regimens of multiple drug therapy that should be reviewed individually.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Hartfiel, Darald J
1998-01-01
In this study, extending classical Markov chain theory to handle fluctuating transition matrices, the author develops a theory of Markov set-chains and provides numerous examples showing how that theory can be applied. Chapters are concluded with a discussion of related research. Readers who can benefit from this monograph are those interested in, or involved with, systems whose data are imprecise or fluctuate with time. A background equivalent to a course in linear algebra and one in probability theory should be sufficient.
Confluence reduction for Markov automata
Timmer, Mark; Katoen, Joost P.; van de Pol, Jaco; Stoelinga, Mariëlle Ida Antoinette
2016-01-01
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. As expected, the state space explosion threatens the analysability of these models. We therefore introduce confluence reduction for Markov automata, a powerful reduction
Process Algebra and Markov Chains
Brinksma, Hendrik; Hermanns, H.; Katoen, Joost P.
2001-01-01
This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study
Indian Academy of Sciences (India)
be obtained as a limiting value of a sample path of a suitable ... makes a mathematical model of chance and deals with the problem by ... Is the Markov chain aperiodic? It is! Here is how you can see it. Suppose that after you do the cut, you hold the top half in your right hand, and the bottom half in your left. Then there ...
Composable Markov Building Blocks
Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.; Prade, H.; Subrahmanian, V.S.
2007-01-01
In situations where disjunct parts of the same process are described by their own first-order Markov models and only one model applies at a time (activity in one model coincides with non-activity in the other models), these models can be joined together into one. Under certain conditions, nearly all
Solan, Eilon; Vieille, Nicolas
2015-01-01
We study irreducible time-homogenous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
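A minimal numerical illustration of such sensitivity analysis (the chain and the perturbation are hypothetical, and the effect is computed directly rather than via the paper's graph-theoretic bounds):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible chain via the eigenproblem."""
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return v / v.sum()

# Illustrative 3-state birth-death chain.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
pi = stationary(P)

# Perturb one row slightly and compare the stationary distributions.
eps = 0.01
Q = P.copy()
Q[0] = [0.5 - eps, 0.5 + eps, 0.0]
delta = np.abs(stationary(Q) - pi).sum()  # small for a small perturbation
```

For this chain pi = (0.25, 0.5, 0.25), and the 1% perturbation moves the stationary distribution by well under 1% in total variation, consistent with the kind of stability the paper quantifies.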
Indian Academy of Sciences (India)
Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.
Recursive smoothers for hidden discrete-time Markov chains
Directory of Open Access Journals (Sweden)
Lakhdar Aggoun
2005-01-01
We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation maximization (EM) algorithm.
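For the simpler fully specified discrete HMM case, smoothed estimates can be computed with the classical forward-backward recursions; this is the non-adaptive core around which EM parameter updates are built. A sketch with illustrative parameters (not the recursions of the paper itself):

```python
import numpy as np

# Minimal forward-backward smoother for a 2-state hidden chain observed
# through a noisy channel (illustrative parameters).
A = np.array([[0.9, 0.1],      # hidden-state transition matrix
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],      # B[x, y] = P(observe y | hidden state x)
              [0.3, 0.7]])
pi0 = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1]

# Forward pass (filtering), with scaling for numerical stability.
alphas = []
a = pi0 * B[:, obs[0]]
alphas.append(a / a.sum())
for y in obs[1:]:
    a = (alphas[-1] @ A) * B[:, y]
    alphas.append(a / a.sum())

# Backward pass turns filtered estimates into smoothed ones.
beta = np.ones(2)
smoothed = [None] * len(obs)
smoothed[-1] = alphas[-1]
for t in range(len(obs) - 2, -1, -1):
    beta = A @ (B[:, obs[t + 1]] * beta)
    s = alphas[t] * beta
    smoothed[t] = s / s.sum()
```

With these parameters the smoothed posterior favors state 0 at the start (where 0s are observed) and state 1 at the end, as expected.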
Routing policies for a partially observable two-server queueing system
Ellens, W.; Kovács, P.; Núñez-Queija, R.; Berg, H. van den
2015-01-01
We consider a queueing system controlled by decisions based on partial state information. The motivation for this work stems from road traffic, in which drivers may, or may not, be subscribed to a smartphone application for dynamic route planning. Our model consists of two queues with independent
International Nuclear Information System (INIS)
Leij, Femke van der; Elkhuizen, Paula H.M.; Janssen, Tomas M.; Poortmans, Philip; Sangen, Maurice van der; Scholten, Astrid N.; Vliet-Vroegindeweij, Corine van; Boersma, Liesbeth J.
2014-01-01
The challenge of adequate target volume definition in external beam partial breast irradiation (PBI) could be overcome with preoperative irradiation, due to less inter-observer variation. We compared the target volume delineation for external beam PBI on preoperative versus postoperative CT scans of twenty-four breast cancer patients
Cobben, Marleen M P; van Noordwijk, Arie J
2017-10-01
Migration is a widespread phenomenon across the animal kingdom as a response to seasonality in environmental conditions. Partially migratory populations are populations that consist of both migratory and residential individuals. Such populations are very common, yet their stability has long been debated. The inheritance of migratory activity is currently best described by the threshold model of quantitative genetics. The inclusion of such a genetic threshold model for migratory behavior leads to a stable zone in time and space of partially migratory populations under a wide range of demographic parameter values, when assuming stable environmental conditions and unlimited genetic diversity. Migratory species are expected to be particularly sensitive to global warming, as arrival at the breeding grounds might be increasingly mistimed as a result of the uncoupling of long-used cues and actual environmental conditions, with decreasing reproduction as a consequence. Here, we investigate the consequences for migratory behavior and the stability of partially migratory populations under five climate change scenarios and the assumption of a genetic threshold value for migratory behavior in an individual-based model. The results show a spatially and temporally stable zone of partially migratory populations after different lengths of time in all scenarios. In the scenarios in which the species expands its range from a particular set of starting populations, the genetic diversity and location at initialization determine the species' colonization speed across the zone of partial migration and therefore across the entire landscape. Abruptly changing environmental conditions after model initialization never caused a qualitative change in phenotype distributions, or complete extinction. This suggests that climate change-induced shifts in species' ranges as well as changes in survival probabilities and reproductive success can be met with flexibility in migratory behavior at the
Hidden Markov models for labeled sequences
DEFF Research Database (Denmark)
Krogh, Anders Stærmose
1994-01-01
A hidden Markov model for labeled observations, called a class HMM, is introduced and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences it is trained to optimize recognition. It resembles MMI...
Generalized Markov branching models
Li, Junping
2005-01-01
In this thesis, we first considered a modified Markov branching process incorporating both state-independent immigration and resurrection. After establishing the criteria for regularity and uniqueness, explicit expressions for the extinction probability and mean extinction time are presented. The criteria for recurrence and ergodicity are also established. In addition, an explicit expression for the equilibrium distribution is presented. We then moved on to investigate the basic proper...
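As background for the extinction results, recall the classical Galton-Watson case without immigration or resurrection: the extinction probability is the smallest fixed point of the offspring probability generating function, reachable by iterating from 0. A sketch with an illustrative offspring law:

```python
# Background sketch: for a plain Galton-Watson branching process, the
# extinction probability is the smallest fixed point q = f(q) of the
# offspring p.g.f. f. Illustrative offspring law:
# P(0) = 0.25, P(1) = 0.25, P(2) = 0.5, so f(s) = 0.25 + 0.25 s + 0.5 s**2.
def f(s):
    return 0.25 + 0.25 * s + 0.5 * s * s

q = 0.0                  # iterating q <- f(q) from 0 converges to the
for _ in range(200):     # smallest fixed point
    q = f(q)
```

Here the fixed points are 0.5 and 1, and the iteration converges to the extinction probability q = 0.5 (the mean offspring number 1.25 exceeds 1, so extinction is not certain).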
Ragain, Stephen; Ugander, Johan
2016-01-01
As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...
Fannes, Mark; Wouters, Jeroen
2012-01-01
We study a quantum process that can be considered as a quantum analogue for the classical Markov process. We specifically construct a version of these processes for free Fermions. For such free Fermionic processes we calculate the entropy density. This can be done either directly using Szegő's theorem for asymptotic densities of functions of Toeplitz matrices, or through an extension of said theorem to rates of functions, which we present in this article.
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...
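The Golden-Thompson inequality discussed above, tr e^(A+B) ≤ tr(e^A e^B) for Hermitian A and B, is easy to check numerically for a hypothetical pair of non-commuting symmetric matrices:

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(w)) @ V.T

# Golden-Thompson: tr exp(A + B) <= tr(exp(A) exp(B)) for Hermitian A, B,
# with equality iff A and B commute. These two do not commute.
A = np.array([[1.0, 0.3], [0.3, -0.5]])
B = np.array([[0.2, -0.7], [-0.7, 0.8]])

lhs = np.trace(expm_sym(A + B))
rhs = np.trace(expm_sym(A) @ expm_sym(B))
```

Since A and B do not commute, the inequality is strict here; the book's extension to arbitrarily many matrices generalizes exactly this statement.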
A relation between non-Markov and Markov processes
International Nuclear Information System (INIS)
Hara, H.
1980-01-01
With the aid of a transformation technique, it is shown that some memory effects in non-Markov processes can be eliminated. In other words, some non-Markov processes can be rewritten in the form of a random walk process, i.e., a Markov process. To this end, two model processes which have some memory or correlation in the random walk process are introduced. An explanation of the memory in the processes is given. (orig.)
Dec-POMDPs as Non-Observable MDPs
Oliehoek, F.A.; Amato, C.
2014-01-01
A recent insight in the field of decentralized partially observable Markov decision processes (Dec-POMDPs) is that it is possible to convert a Dec-POMDP to a non-observable MDP, which is a special case of POMDP. This technical report provides an overview of this reduction and pointers to related
Mallak, Saed
1996-01-01
Ankara: Department of Mathematics and Institute of Engineering and Sciences of Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references, leaf 29. In this work, we studied the ergodicity of non-stationary Markov chains. We gave several examples with different cases. We proved that given a sequence of Markov chains such that the limit of this sequence is an ergodic Markov chain, then the limit of the combination ...
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.
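The keywords above list the standard MCMC ingredients: a proposal distribution, an acceptance probability, and a chain whose stationary distribution is the target. A minimal random-walk Metropolis-Hastings sketch (the target density, step size, and all names are illustrative assumptions, not taken from the article):

```python
import math
import random

# Random-walk Metropolis-Hastings sampler built from the ingredients listed
# above (proposal distribution, acceptance probability, stationary
# distribution). The target density and step size are illustrative.

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)         # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:      # accept w.p. min(1, alpha)
            x = proposal
        samples.append(x)                           # chain state (new or repeated)
    return samples

# Target: standard normal, known only up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

Because the proposal is symmetric, the Hastings correction cancels and only the target density ratio enters the acceptance probability.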
Volchenkov, Dima; Dawin, Jean René
A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.
DEFF Research Database (Denmark)
Kohlenbach, Ulrich Wilhelm
2002-01-01
We show that the so-called weak Markov's principle (WMP), which states that every pseudo-positive real number is positive, is underivable in E-HA + AC. Since this system allows one to formalize (at least large parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within the framework of Bishop-style mathematics (which has been open for about 20 years). The underivability even holds if the ineffective schema of full comprehension (in all types) for negated formulas (in particular for -free formulas) is added, which allows one to derive the law of excluded middle...
Directory of Open Access Journals (Sweden)
Rami Alazrai
2017-03-01
Full Text Available This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using the constructed HBR, we formulate the fall detection problem as a posterior-maximization problem in which the posterior probability for each observed video subsequence is estimated using a multi-class support vector machine (SVM) classifier. Then, we combine the computed posterior probabilities from all of the observed subsequences to obtain an overall class posterior probability of the entire partially-observed depth-map video sequence. To evaluate the performance of the proposed approach, we have utilized the Kinect sensor to record a dataset of depth-map video sequences that simulates four fall-related activities of elderly people, including: walking, sitting, falling from standing, and falling from sitting. Then, using the collected dataset, we have developed three evaluation scenarios based on the number of unobserved video subsequences in the testing videos: the fully-observed video sequence scenario, the single unobserved video subsequence of random length scenario, and the two unobserved video subsequences of random lengths scenario. Experimental results show that the proposed approach achieved average recognition accuracies of 93.6%, 77.6% and 65.1% in recognizing the activities during the first, second and third evaluation scenarios, respectively. These results demonstrate the feasibility of the proposed approach to detect falls from partially-observed videos.
Switching Markov chains for a holistic modeling of SIS unavailability
International Nuclear Information System (INIS)
Mechri, Walid; Simon, Christophe; BenOthman, Kamel
2015-01-01
This paper proposes a holistic approach to model the Safety Instrumented Systems (SIS). The model is based on Switching Markov Chain and integrates several parameters like Common Cause Failure, Imperfect Proof testing, partial proof testing, etc. The basic concepts of Switching Markov Chain applied to reliability analysis are introduced and a model to compute the unavailability for a case study is presented. The proposed Switching Markov Chain allows us to assess the effect of each parameter on the SIS performance. The proposed method ensures the relevance of the results. - Highlights: • A holistic approach to model the unavailability safety systems using Switching Markov chains. • The model integrates several parameters like probability of failure due to the test, the probability of not detecting a failure in a test. • The basic concepts of the Switching Markov Chains are introduced and applied to compute the unavailability for safety systems. • The proposed Switching Markov Chain allows assessing the effect of each parameter on the chemical reactor performance
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution
Analysis of partially pulsating fatigue process on carbon steel with microstructural observation
International Nuclear Information System (INIS)
Shimano, Hiroyuki; Faiz, M. Khairi; Hara, Asato; Yoshizumi, Kyoko; Yoshida, Makoto; Horibe, Susumu
2016-01-01
Pulsating low-cycle fatigue processes, up to the present, have been divided into three states: the transient state, steady state, and accelerating state of ratcheting. In our previous work, we suggested that fatigue behavior of pulsating fatigue process should be classified into five stages in which the plastic strain amplitude and the ratcheting strain rate are plotted on the X and Y axis, respectively. In this study, at the condition of R=−0.3 (partially pulsating fatigue), the change in the plastic strain amplitude and ratcheting strain rate for each cycle to failure was examined on AISI 1025 carbon steel. The dislocation substructure was examined using transmission electron microscopy (TEM) for each stage, except for stage I. It was also demonstrated that the fatigue process can be divided into five stages: stage I corresponds to the un-pinning of dislocations from the Cottrell atmosphere and propagation of the Luders band. Stage II corresponds to the restriction of dislocation movement by dislocation tangles. Stage III corresponds to the formation of dislocation cells. Stage IV corresponds to the promotion of the to-and-fro (back-and-forth) motion of dislocations by a re-arrangement of the dislocations in the cells. Stage V corresponds to the release of dislocation movement by the collapse of dislocation cells.
Suzaku Observation of the Dwarf Nova V893 Scorpii: The Discovery of a Partial X-Ray Eclipse
Mukai, Koji; Zietsman, E.; Still, M.
2008-01-01
V893 Sco is an eclipsing dwarf nova that had attracted little attention from X-ray astronomers until it was proposed as the identification of an RXTE all-sky slew survey (XSS) source. Here we report on the pointed X-ray observations of this object using Suzaku. We confirm V893 Sco to be X-ray bright, whose spectrum is highly absorbed for a dwarf nova. We have also discovered a partial X-ray eclipse in V893 Sco. This is the first time that a partial eclipse is seen in X-ray light curves of a dwarf nova. We have successfully modeled the gross features of the optical and X-ray eclipse light curves using a boundary layer geometry of the X-ray emission region. Future observations may lead to confirmation of this basic picture, and allow us to place tight constraints on the size of the X-ray emission region. The partial X-ray eclipse therefore should make V893 Sco a key object in understanding the physics of accretion in quiescent dwarf novae.
Majeed, Muhammad Usman
2017-01-01
the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time
International Nuclear Information System (INIS)
Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel
2016-01-01
Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, that we will call mass, which moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modelization of open systems whose dynamics has a Markov property.
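The evolution the abstract describes, mass moving along transition probabilities, fed by a source and accumulating in the absorbing states, can be sketched in a few lines; the chain and the source below are made-up illustrations, not the paper's model:

```python
# Mass flux through a finite Markov chain: mass moves along transition
# probabilities, a constant source feeds state 0, and the absorbing state
# accumulates everything. The chain and source are made-up illustrations.

def step(mass, P, source):
    n = len(P)
    new = [0.0] * n
    for i in range(n):
        for j in range(n):
            new[j] += mass[i] * P[i][j]    # mass redistributed by the chain
    for k, s in enumerate(source):
        new[k] += s                        # external supply
    return new

P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]                      # state 2 is absorbing
mass = [0.0, 0.0, 0.0]
for _ in range(200):
    mass = step(mass, P, source=[1.0, 0.0, 0.0])
# The transient states settle at a stationary mass distribution, while the
# absorbing state keeps growing linearly with the supplied mass.
```

For a constant source the transient mass converges to the fixed point of m = mQ + s, where Q is the transition matrix restricted to the transient states, matching the paper's analytical expression for this case.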
Autonomous Navigation in Partially Observable Environments Using Hierarchical Q-Learning
Zhou, Y.; van Kampen, E.; Chu, Q.
2016-01-01
Flapping-wing MAVs represent an attractive alternative to conventional designs with rotary wings, since they promise a much higher efficiency in forward flight. However, further insight into the flapping-wing aerodynamics is still needed to get closer to the flight performance observed in natural
A partial ensemble Kalman filtering approach to enable use of range limited observations
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Madsen, Henrik
2015-01-01
The ensemble Kalman filter (EnKF) relies on the assumption that an observed quantity can be regarded as a stochastic variable that is Gaussian distributed with mean and variance that equals the measurement and the measurement noise, respectively. When a gauge has a minimum and/or maximum detection...
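The Gaussian-observation assumption the abstract starts from is the core of the standard EnKF analysis step. A scalar sketch (ensemble size, numbers, and names are illustrative assumptions; the paper's range-limited extension is not reproduced here):

```python
import random

def enkf_update(ensemble, y_obs, obs_var, seed=0):
    """Scalar EnKF analysis step with perturbed observations.

    Each member assimilates a perturbed copy of the observation, weighted by
    the Kalman gain, consistent with treating the observed quantity as a
    Gaussian variable with the measurement as mean and the noise as variance.
    """
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)                 # Kalman gain
    return [x + gain * ((y_obs + rng.gauss(0.0, obs_var ** 0.5)) - x)
            for x in ensemble]

rng = random.Random(1)
prior = [rng.gauss(10.0, 2.0) for _ in range(200)]   # prior ensemble
posterior = enkf_update(prior, y_obs=12.0, obs_var=1.0)
```

The update pulls the ensemble toward the observation and shrinks its spread; a range-limited gauge would violate the Gaussian assumption at its detection bounds, which is the problem the paper addresses.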
Sweeting, M J; Farewell, V T; De Angelis, D
2010-05-20
In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform current disease state and to assume an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and a Markov model that assumes the examination process is non-ignorable. Copyright 2010 John Wiley & Sons, Ltd.
A note on identification in discrete choice models with partial observability
DEFF Research Database (Denmark)
Fosgerau, Mogens; Ranjan, Abhishek
2017-01-01
This note establishes a new identification result for additive random utility discrete choice models. A decision-maker associates a random utility Uj + mj to each alternative in a finite set j ∈ {1, …, J}, where U = {U1, …, UJ} is unobserved by the researcher and random with an unknown joint dis... for applications where choices are observed aggregated into groups while prices and attributes vary at the level of individual alternatives.
Shen, Chuan-an; Chai, Jia-ke; Tuo, Xiao-ye; Cai, Jian-hua; Li, Dong-jie; Zhang, Lin; Zhu, Hua; Cai, Jin-dong
2013-02-01
To observe the effect of negative pressure therapy in the treatment of superficial partial-thickness scald in children. Three hundred and seven children with superficial partial-thickness scald hospitalized from August 2009 to May 2012 were divided into negative pressure therapy group (NPT, n = 145) and control group (C, n = 162) according to the random number table. Patients in group NPT were treated with negative pressure from within post injury day (PID) 3 to PID 9 (with -16 kPa pressure), while traditional occlusive dressing method was used in group C. Changes in body temperature, wound healing condition, frequency of dressing change were compared between group NPT and group C. Bacterial culture results of wounds were compared before and after treatment in group NPT. Volume of drained transudate per one percent of wound area was recorded in group NPT on PID 1 to PID 3. Data were processed with t test or chi-square test. The incidence of high fever was significantly lower in group NPT (26.9%, 39/145) than in group C (63.6%, 103/162, χ(2) = 41.419, P partial-thickness scald.
Reliability estimation of semi-Markov systems: a case study
International Nuclear Information System (INIS)
Ouhbi, Brahim; Limnios, Nikolaos
1997-01-01
In this article, we are concerned with the estimation of the reliability and the availability of a turbo-generator rotor using a set of data observed in a real engineering situation provided by Electricite De France (EDF). The rotor is modeled by a semi-Markov process, which is used to estimate the rotor's reliability and availability. To do this, we present a method for estimating the semi-Markov kernel from a censored data
Burke, Christopher J; Baddeley, Michelle; Tobler, Philippe N; Schultz, Wolfram
2016-09-28
Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise
The missing link: Predicting connectomes from noisy and partially observed tract tracing data
DEFF Research Database (Denmark)
Hinne, Max; Meijers, Annet; Bakker, Rembrandt
2017-01-01
In this paper, we suggest that instead of probing all possible connections, hitherto unknown connections may be predicted from the data that is already available. Our approach uses a 'latent space model' that embeds the connectivity in an abstract physical space. Regions that are close in the latent space have a high chance of being connected, while regions far apart are most likely disconnected in the connectome. After learning the latent embedding from the connections that we did observe, the latent space allows us to predict connections that have not been probed previously. We apply the methodology to two connectivity data sets of the macaque, where we demonstrate that the latent space model is successful in predicting unobserved connectivity, outperforming two baselines and an alternative model in nearly all cases. Furthermore, we show how the latent spatial embedding may be used to integrate multimodal...
Regeneration and general Markov chains
Directory of Open Access Journals (Sweden)
Vladimir V. Kalashnikov
1994-01-01
Full Text Available Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics, such as, rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics, deviations between Markov chains, accuracy of approximations and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide sense regenerative process with the help of splitting construction.
Markov chains theory and applications
Sericola, Bruno
2013-01-01
Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest.The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the
Quadratic Variation by Markov Chains
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Horel, Guillaume
We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...
SLOW RISE AND PARTIAL ERUPTION OF A DOUBLE-DECKER FILAMENT. I. OBSERVATIONS AND INTERPRETATION
International Nuclear Information System (INIS)
Liu Rui; Kliem, Bernhard; Török, Tibor; Titov, Viacheslav S.; Lionello, Roberto; Linker, Jon A.; Liu Chang; Wang Haimin
2012-01-01
We study an active-region dextral filament that was composed of two branches separated in height by about 13 Mm, as inferred from three-dimensional reconstruction by combining SDO and STEREO-B observations. This 'double-decker' configuration sustained for days before the upper branch erupted with a GOES-class M1.0 flare on 2010 August 7. Analyzing this evolution, we obtain the following main results. (1) During the hours before the eruption, filament threads within the lower branch were observed to intermittently brighten up, lift upward, and then merge with the upper branch. The merging process contributed magnetic flux and current to the upper branch, resulting in its quasi-static ascent. (2) This transfer might serve as the key mechanism for the upper branch to lose equilibrium by reaching the limiting flux that can be stably held down by the overlying field or by reaching the threshold of the torus instability. (3) The erupting branch first straightened from a reverse S shape that followed the polarity inversion line and then writhed into a forward S shape. This shows a transfer of left-handed helicity in a sequence of writhe-twist-writhe. The fact that the initial writhe is converted into the twist of the flux rope excludes the helical kink instability as the trigger process of the eruption, but supports the occurrence of the instability in the main phase, which is indeed indicated by the very strong writhing motion. (4) A hard X-ray sigmoid, likely of coronal origin, formed in the gap between the two original filament branches in the impulsive phase of the associated flare. This supports a model of transient sigmoids forming in the vertical flare current sheet. (5) Left-handed magnetic helicity is inferred for both branches of the dextral filament. (6) Two types of force-free magnetic configurations are compatible with the data, a double flux rope equilibrium and a single flux rope situated above a loop arcade.
Litvak, Leonid M; Spahr, Anthony J; Emadi, Gulam
2007-08-01
Most cochlear implant strategies utilize monopolar stimulation, likely inducing relatively broad activation of the auditory neurons. The spread of activity may be narrowed with a tripolar stimulation scheme, wherein compensating current of opposite polarity is simultaneously delivered to two adjacent electrodes. In this study, a model and cochlear implant subjects were used to examine loudness growth for varying amounts of tripolar compensation, parameterized by a coefficient σ ranging from 0 (monopolar) to 1 (full tripolar). In both the model and the subjects, the current required for threshold activation could be approximated by I_thr(σ) = I_thr(0)(1 − σK), with fitted constants I_thr(0) and K. Three of the subjects had a "positioner," intended to place their electrode arrays closer to their neural tissue. The values of K were smaller for the positioner users and for a "close" electrode-to-tissue distance in the model. Above threshold, equal-loudness contours for some subjects deviated significantly from a linear scale-up of the threshold approximations. The patterns of deviation were similar to those observed in the model for conditions in which most of the neurons near the center electrode were excited.
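The reported threshold approximation is linear in the compensation coefficient; a short sketch with hypothetical fitted constants (the paper's actual values of I_thr(0) and K are not given here):

```python
# The reported approximation I_thr(sigma) = I_thr(0) * (1 - sigma * K).
# The constants below are hypothetical illustrations, not the paper's
# fitted values.

def threshold_current(sigma, I0, K):
    """Threshold current for tripolar compensation coefficient sigma."""
    return I0 * (1.0 - sigma * K)

I0, K = 100.0, 0.6               # hypothetical fitted constants (arbitrary units)
for sigma in (0.0, 0.5, 1.0):    # monopolar -> full tripolar
    print(sigma, threshold_current(sigma, I0, K))
```

A smaller K, as observed for the positioner users, means the threshold rises more slowly as compensation is increased.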
Energy Technology Data Exchange (ETDEWEB)
Wunderlich, Y.; Afzal, F.; Thiel, A.; Beck, R. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik, Bonn (Germany)
2017-05-15
This work presents a simple method to determine the significant partial wave contributions to experimentally determined observables in pseudoscalar meson photoproduction. First, fits to angular distributions are presented and the maximum orbital angular momentum L_max needed to achieve a good fit is determined. Then, recent polarization measurements for γp → π⁰p from ELSA, GRAAL, JLab and MAMI are investigated according to the proposed method. This method allows us to project high-spin partial wave contributions to any observable as long as the measurement has the necessary statistical accuracy. We show that high precision and large angular coverage in the polarization data are needed in order to be sensitive to high-spin resonance states and thereby also for the finding of small resonance contributions. This task can be achieved via interference of these resonances with the well-known states. For the channel γp → π⁰p, those are the N(1680)5/2⁺ and Δ(1950)7/2⁺, contributing to the F-waves. (orig.)
Valchev, Nikola; Zijdewind, Inge; Keysers, Christian; Gazzola, Valeria; Avenanti, Alessio; Maurits, Natasha M
2015-01-01
Seeing others performing an action induces the observers' motor cortex to "resonate" with the observed action. Transcranial magnetic stimulation (TMS) studies suggest that such motor resonance reflects the encoding of various motor features of the observed action, including the apparent motor effort. However, it is unclear whether such encoding requires direct observation or whether force requirements can be inferred when the moving body part is partially occluded. To address this issue, we presented participants with videos of a right hand lifting a box of three different weights and asked them to estimate its weight. During each trial we delivered one transcranial magnetic stimulation (TMS) pulse over the left primary motor cortex of the observer and recorded the motor evoked potentials (MEPs) from three muscles of the right hand (first dorsal interosseous, FDI, abductor digiti minimi, ADM, and brachioradialis, BR). Importantly, because the hand shown in the videos was hidden behind a screen, only the contractions in the actor's BR muscle under the bare skin were observable during the entire videos, while the contractions in the actor's FDI and ADM muscles were hidden during the grasp and actual lift. The amplitudes of the MEPs recorded from the BR (observable) and FDI (hidden) muscle increased with the weight of the box. These findings indicate that the modulation of motor excitability induced by action observation extends to the cortical representation of muscles with contractions that could not be observed. Thus, motor resonance appears to reflect force requirements of observed lifting actions even when the moving body part is occluded from view. Copyright © 2014 Elsevier Ltd. All rights reserved.
Exact solution of the hidden Markov processes
Saakian, David B.
2017-11-01
We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for two second-order matrices, our solution can be easily generalized to L values of the Markov process and M values of observables: we should be able to solve a system of L functional equations in the space of dimension M − 1.
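Such exact entropy expressions can be checked numerically by brute force, via the decay of empirical block entropies of a simulated observation sequence. The two-state chain and emission matrix below are illustrative assumptions, not the model solved in the paper:

```python
import math
import random

# Numerical entropy-rate estimate for a hidden Markov process: simulate a
# long observation sequence and use H(k) - H(k-1) -> entropy rate, where
# H(k) is the empirical entropy of length-k blocks. The matrices are
# illustrative, not the paper's model.

T = [[0.9, 0.1], [0.2, 0.8]]    # hidden-state transition probabilities
E = [[0.95, 0.05], [0.1, 0.9]]  # emission probabilities P(observation | state)

def simulate(n, seed=0):
    rng = random.Random(seed)
    s, out = 0, []
    for _ in range(n):
        s = 0 if rng.random() < T[s][0] else 1          # hidden transition
        out.append(0 if rng.random() < E[s][0] else 1)  # noisy emission
    return out

def block_entropy(seq, k):
    counts = {}
    for i in range(len(seq) - k + 1):
        block = tuple(seq[i:i + k])
        counts[block] = counts.get(block, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

seq = simulate(200000)
rate = block_entropy(seq, 8) - block_entropy(seq, 7)  # entropy-rate estimate
```

The block-entropy differences decrease toward the entropy rate from above, which is the quantity the paper's functional-equation solution yields in closed form.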
Mcclenny, Levi D; Imani, Mahdi; Braga-Neto, Ulisses M
2017-11-25
Gene regulatory networks govern the function of key cellular processes, such as control of the cell cycle, response to stress, DNA repair mechanisms, and more. Boolean networks have been used successfully in modeling gene regulatory networks. In the Boolean network model, the transcriptional state of each gene is represented by 0 (inactive) or 1 (active), and the relationship among genes is represented by logical gates updated at discrete time points. However, the Boolean gene states are never observed directly, but only indirectly and incompletely through noisy measurements based on expression technologies such as cDNA microarrays, RNA-Seq, and cell imaging-based assays. The Partially-Observed Boolean Dynamical System (POBDS) signal model is distinct from other deterministic and stochastic Boolean network models in removing the requirement of a directly observable Boolean state vector and allowing uncertainty in the measurement process, addressing the scenario encountered in practice in transcriptomic analysis. BoolFilter is an R package that implements the POBDS model and associated algorithms for state and parameter estimation. It allows the user to estimate the Boolean states, network topology, and measurement parameters from time series of transcriptomic data using exact and approximated (particle) filters, as well as simulate the transcriptomic data for a given Boolean network model. Some of its infrastructure, such as the network interface, is the same as in the previously published R package for Boolean Networks BoolNet, which enhances compatibility and user accessibility to the new package. We introduce the R package BoolFilter for Partially-Observed Boolean Dynamical Systems (POBDS). The BoolFilter package provides a useful toolbox for the bioinformatics community, with state-of-the-art algorithms for simulation of time series transcriptomic data as well as the inverse process of system identification from data obtained with various expression
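The POBDS setup the package implements, deterministic Boolean gate updates seen only through noisy measurements, can be sketched in a few lines; the 3-gene wiring and noise level are illustrative assumptions, not taken from BoolFilter:

```python
import random

# Minimal sketch of a partially-observed Boolean dynamical system (POBDS):
# a 3-gene Boolean network updated by logic gates, observed through a noisy
# channel that flips each measured gene with probability p. The wiring and
# noise level are illustrative, not taken from BoolFilter.

def network_step(state):
    a, b, c = state
    return (b and not c,   # gene A: activated by B, repressed by C
            a or c,        # gene B: activated by A or C
            not a)         # gene C: repressed by A

def observe(state, p, rng):
    """Noisy measurement: each gene's reading is flipped with probability p."""
    return tuple(s if rng.random() > p else (not s) for s in state)

rng = random.Random(0)
state = (True, False, True)
measurements = []
for t in range(5):
    state = network_step(state)                  # hidden Boolean dynamics
    measurements.append(observe(state, p=0.05, rng=rng))
# Filtering algorithms such as those in BoolFilter estimate the hidden state
# trajectory (and the network wiring) from `measurements` alone.
```

This separation of a deterministic hidden update from a stochastic measurement channel is exactly what distinguishes the POBDS model from Boolean network models with a directly observable state vector.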
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Systat Software Asia-Pacific Ltd. in Bangalore, where the technical work for the development of the statistical software Systat takes ... In Part 4, we discuss some applications of the Markov ... one can construct the joint probability distribution of.
Reviving Markov processes and applications
International Nuclear Information System (INIS)
Cai, H.
1988-01-01
In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process, and the first entrance decomposition (taboo probability).
Confluence reduction for Markov automata
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models
Confluence Reduction for Markov Automata
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Braberman, Victor; Fribourg, Laurent
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models
Modeling nonhomogeneous Markov processes via time transformation.
Hubbard, R A; Inoue, L Y T; Fann, J R
2008-09-01
Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
Directory of Open Access Journals (Sweden)
F. M. San Martini
2006-01-01
Full Text Available A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately predict the observed inorganic particle concentrations at all three sites. The agreement between the predicted and observed gas phase ammonia concentration is excellent. The NOz concentration calculated from the NOy, NO and NO2 observations is of limited use in constraining the gas phase nitric acid concentration given the large uncertainties in this measure of nitric acid and additional reactive nitrogen species. Focusing on the acidic period of 9–11 April identified by Salcedo et al. (2006), the model accurately predicts the particle phase observations during this period with the exception of the nitrate predictions after 10:00 a.m. (Central Daylight Time, CDT) on 9 April, where the model underpredicts the observations by, on average, 20%. This period had a low planetary boundary layer, very high particle concentrations, and higher than expected nitrogen dioxide concentrations. For periods when the particle chloride observations are consistently above the detection limit, the model is able to both accurately predict the particle chloride mass concentrations and provide well-constrained HCl(g) concentrations. The availability of gas-phase ammonia observations helps constrain the predicted HCl(g) concentrations. When the particles are aqueous, the most likely concentrations of HCl(g) are in the sub-ppbv range. The most likely predicted concentration of HCl(g) was found to reach concentrations of order 10 ppbv if the particles are dry. Finally, the
Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I
2018-01-01
Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month for an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High' with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
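As an illustration of the Viterbi decoding step mentioned in this abstract, the following is a minimal log-space sketch for a three-state Poisson-HMM. Only the state means (1.4, 6.6, 20.2) are taken from the abstract; the transition matrix, initial distribution, and count series are invented for the example.

```python
import math
import numpy as np

def viterbi_poisson(counts, log_pi, log_A, lambdas):
    """Most likely hidden-state sequence for a Poisson-HMM (log-space Viterbi)."""
    n, T = len(lambdas), len(counts)

    def log_emit(k, lam):
        # log Poisson pmf: k*log(lam) - lam - log(k!)
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    delta = np.empty((T, n))           # best log-probability ending in each state
    psi = np.zeros((T, n), dtype=int)  # backpointers
    delta[0] = [log_pi[s] + log_emit(counts[0], lambdas[s]) for s in range(n)]
    for t in range(1, T):
        for s in range(n):
            scores = delta[t - 1] + log_A[:, s]
            psi[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[psi[t, s]] + log_emit(counts[t], lambdas[s])
    path = [int(np.argmax(delta[-1]))]  # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

lambdas = [1.4, 6.6, 20.2]            # 'Low', 'Moderate', 'High' state means
A = np.array([[0.7, 0.2, 0.1],        # hypothetical transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
pi0 = np.array([0.6, 0.3, 0.1])       # hypothetical initial distribution
counts = [0, 2, 1, 7, 5, 8, 22, 19, 25, 6, 2]
states = viterbi_poisson(counts, np.log(pi0), np.log(A), lambdas)
print(states)
```

The decoded sequence tracks the count magnitudes: low counts map to state 0, intermediate counts to state 1, and the burst of counts near 20 to state 2.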
International Nuclear Information System (INIS)
Haupage, Samantha; Branski, Ryan C.; Kraus, Dennis; Peck, Kyung K.; Hsu, Meier; Holodny, Andrei
2010-01-01
The current study seeks to provide preliminary data regarding this central, adaptive response during tongue motor tasks utilizing functional magnetic resonance imaging (fMRI) before and after glossectomy. Six patients, with confirmed histological diagnoses of oral tongue cancer, underwent fMRI before and 6 months after partial glossectomy. These data were compared to nine healthy controls. All subjects performed three tongue motor tasks during fMRI: tongue tapping (TT), dry swallow (Dry), and wet swallow (Wet). Following surgery, increased activation was subjectively observed in the superior parietal lobule, supplementary motor area, and anterior cingulate. Region of interest (ROI) analysis of the precentral gyrus confirmed increased cortical activity following surgery. In addition, comparisons between pre-surgical scans and controls suggested the dry swallow task was sensitive to elicit tongue-related activation in the precentral gyrus (p ≤ 0.05). The adaptive changes in the cortex following partial glossectomy reflect recruitment of the parietal, frontal, and cingulate cortex during tongue motor tasks. In addition, post-operative activation patterns more closely approximated control levels than the pre-operative scans. Furthermore, the dry swallow task appears most specific to elicit tongue-related cortical activity. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Haupage, Samantha; Branski, Ryan C.; Kraus, Dennis [Memorial Sloan-Kettering Cancer Center, Head and Neck Surgery, New York, NY (United States); Peck, Kyung K. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Medical Physics, New York, NY (United States); Memorial Sloan-Kettering Cancer Center, Department of Medical Physics and Radiology, New York, NY (United States); Hsu, Meier [Memorial Sloan-Kettering Cancer Center, Department of Epidemiology and Biostatistics, New York, NY (United States); Holodny, Andrei [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States)
2010-12-15
The current study seeks to provide preliminary data regarding this central, adaptive response during tongue motor tasks utilizing functional magnetic resonance imaging (fMRI) before and after glossectomy. Six patients, with confirmed histological diagnoses of oral tongue cancer, underwent fMRI before and 6 months after partial glossectomy. These data were compared to nine healthy controls. All subjects performed three tongue motor tasks during fMRI: tongue tapping (TT), dry swallow (Dry), and wet swallow (Wet). Following surgery, increased activation was subjectively observed in the superior parietal lobule, supplementary motor area, and anterior cingulate. Region of interest (ROI) analysis of the precentral gyrus confirmed increased cortical activity following surgery. In addition, comparisons between pre-surgical scans and controls suggested the dry swallow task was sensitive to elicit tongue-related activation in the precentral gyrus (p ≤ 0.05). The adaptive changes in the cortex following partial glossectomy reflect recruitment of the parietal, frontal, and cingulate cortex during tongue motor tasks. In addition, post-operative activation patterns more closely approximated control levels than the pre-operative scans. Furthermore, the dry swallow task appears most specific to elicit tongue-related cortical activity. (orig.)
Markov Processes in Image Processing
Petrov, E. P.; Kharina, N. L.
2018-05-01
Digital images are used as an information carrier in different sciences and technologies. There is a trend toward increasing the number of bits per image pixel in order to obtain more information. In the paper, some methods of compression and contour detection based on a two-dimensional Markov chain are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed image-processing methods do not concede in efficiency to well-known analogues, and surpass them in processing speed. An image is separated into binary images that are processed in parallel, so the processing time does not grow as the number of bits per pixel increases. One more advantage of the methods is low energy consumption: only logical procedures are used, with no arithmetic operations. The methods can be useful for processing images of any class and purpose in processing systems with limited time and energy resources.
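The decomposition described above, an image separated into binary images so each can be processed independently, is a standard bit-plane split. A minimal sketch (the abstract's Markov-chain processing itself is not reproduced; this only shows the lossless split and recombination):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # toy 8-bit image

# Separate the 8-bit image into 8 binary images (bit planes), MSB first.
planes = [(img >> b) & 1 for b in range(7, -1, -1)]

# Each binary plane could now be processed in parallel with purely logical
# operations; recombining the planes recovers the original image exactly.
recon = np.zeros_like(img)
for b, p in zip(range(7, -1, -1), planes):
    recon |= p.astype(np.uint8) << b

print(np.array_equal(recon, img))
```

Because each plane is binary, per-plane processing cost does not grow with the pixel bit depth, which is the parallelism argument made in the abstract.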
Mendez, Rene A.; Claveria, Ruben M.; Orchard, Marcos E.; Silva, Jorge F.
2017-11-01
We present orbital elements and mass sums for 18 visual binary stars of spectral types B to K (five of which are new orbits) with periods ranging from 20 to more than 500 yr. For two double-line spectroscopic binaries with no previous orbits, the individual component masses, using combined astrometric and radial velocity data, have a formal uncertainty of ~0.1 M⊙. Adopting published photometry and trigonometric parallaxes, plus our own measurements, we place these objects on an H-R diagram and discuss their evolutionary status. These objects are part of a survey to characterize the binary population of stars in the Southern Hemisphere using the SOAR 4 m telescope+HRCAM at CTIO. Orbital elements are computed using a newly developed Markov chain Monte Carlo (MCMC) algorithm that delivers maximum-likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate the uncertainty of our derived parameters in a robust way. For spectroscopic binaries, using our approach, it is possible to derive a self-consistent parallax for the system from the combined astrometric and radial velocity data (“orbital parallax”), which compares well with the trigonometric parallaxes. We also present a mathematical formalism that allows a dimensionality reduction of the feature space from seven to three search parameters (or from 10 to seven dimensions—including parallax—in the case of spectroscopic binaries with astrometric data), which makes it possible to explore a smaller number of parameters in each case, improving the computational efficiency of our MCMC code. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
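The MCMC machinery used here for orbital elements is, at its core, a Markov chain whose stationary distribution is the posterior. A generic random-walk Metropolis sketch on a toy one-parameter problem (synthetic data, flat prior, unit-variance Gaussian likelihood; none of this is the authors' code or model) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=1.0, size=200)  # synthetic observations

def log_post(mu):
    # Flat prior; Gaussian likelihood with known unit variance.
    return -0.5 * np.sum((data - mu) ** 2)

samples = []
mu = 0.0
for _ in range(5000):
    prop = mu + rng.normal(scale=0.2)        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                            # Metropolis accept step
    samples.append(mu)

posterior = np.array(samples[1000:])         # drop burn-in
print(posterior.mean())                      # close to the true mean 3.0
```

The chain's samples after burn-in approximate the posterior, giving both a maximum-likelihood-like point estimate (the mean or mode) and uncertainty bands, which is exactly the benefit the abstract highlights over point-estimation alone.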
Qin, Ning; Wen, John Z.; Ren, Carolyn L.
2017-04-01
This is the first part of a two-part study on a partially miscible liquid-liquid flow (liquid carbon dioxide and deionized water) which is highly pressurized and confined in a microfluidic T-junction. Our main focuses are to understand the flow regimes as a result of varying flow conditions and to investigate the characteristics of drop flow as distinct from coflow, with a capillary number Ca_c, calculated based on the continuous liquid, ranging from 10^-3 to 10^-2 (10^-4 for coflow). Here in part I, we present our experimental observation of the drop formation cycle by tracking drop length, spacing, frequency, and after-generation speed using high-speed video and image analysis. The drop flow is chronologically composed of a stagnating and filling stage, an elongating and squeezing stage, and a truncating stage. The common "necking" time during the elongating and squeezing stage (with Ca_c ~ 10^-3) for the truncation of the dispersed liquid stream is extended, and the truncation point is subsequently shifted downstream from the T-junction corner. This temporal postponement effect modifies the scaling function reported in the literature for droplet formation with two immiscible fluids. Our experimental measurements also demonstrate that the drop speed immediately following generation can be approximated by the mean velocity obtained from averaging the total flow rate over the channel cross section. Further justifications of the quantitative analysis by considering the mass transfer at the interface of the two partially miscible fluids are provided in part II.
Analysis and design of Markov jump systems with complex transition probabilities
Zhang, Lixian; Shi, Peng; Zhu, Yanzheng
2016-01-01
The book addresses the control issues such as stability analysis, control synthesis and filter design of Markov jump systems with the above three types of TPs, and thus is mainly divided into three parts. Part I studies the Markov jump systems with partially unknown TPs. Different methodologies with different conservatism for the basic stability and stabilization problems are developed and compared. Then the problems of state estimation, the control of systems with time-varying delays, the case involved with both partially unknown TPs and uncertain TPs in a composite way are also tackled. Part II deals with the Markov jump systems with piecewise homogeneous TPs. Methodologies that can effectively handle control problems in the scenario are developed, including the one coping with the asynchronous switching phenomenon between the currently activated system mode and the controller/filter to be designed. Part III focuses on the Markov jump systems with memory TPs. The concept of σ-mean square stability is propo...
Buckley, Lisa; Bingham, C Raymond; Flannagan, Carol A; Carter, Patrick M; Almani, Farideh; Cicchino, Jessica B
2016-10-01
Motorcycle crashes result in a significant health burden, including many fatal injuries and serious non-fatal head injuries. Helmets are highly effective in preventing such trauma, and jurisdictions that require helmet use of all motorcyclists have higher rates of helmet use and lower rates of head injuries among motorcyclists. The current study examines helmet use and characteristics of helmeted operators and their riding conditions in Michigan, following a weakening of the state's universal motorcycle helmet use law in April 2012. Data on police-reported crashes occurring during 2012-14 and from a stratified roadside observational survey undertaken in Southeast Michigan during May-September 2014 were used to estimate statewide helmet use rates. Observed helmet use was more common among operators of sports motorcycles, on freeways, and in the morning, and least common among operators of cruisers, on minor arterials, and in the afternoon. The rate of helmet use across the state was estimated at 75%, adjusted for roadway type, motorcycle class, and time of day. Similarly, the helmet use rate found from examination of crash records was 73%. In the observation survey, 47% of operators wore jackets, 94% wore long pants, 54% wore boots, and 80% wore gloves. Protective clothing of jackets and gloves was most often worn by sport motorcycle operators and long pants and boots most often by riders of touring motorcycles. Findings highlight the much lower rate of helmet use in Michigan compared with states that have a universal helmet use law, although the rate is higher than observed in many states with partial helmet laws. Targeted interventions aimed at specific groups of motorcyclists and situations where helmet use rates are particularly low should be considered to increase helmet use. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yamashita, Yoshifumi; Nakata, Ryu; Nishikawa, Takeshi; Hada, Masaki; Hayashi, Yasuhiko
2018-04-01
We studied the dynamics of the expansion of a Shockley-type stacking fault (SSF) with 30° Si(g) partial dislocations (PDs) using a scanning electron microscope. We observed SSFs as dark lines (DLs), which formed the contrast at the intersection between the surface and the SSF on the (0001) face inclined by 8° from the surface. We performed experiments at different electron-beam scanning speeds, observing magnifications, and irradiation areas. The results indicated that the elongation of a DL during one-frame scanning depended on the time for which the electron beam irradiated the PD segment in the frame of view. From these results, we derived a formula to express the velocity of the PD using the elongation rate of the corresponding DL during one-frame scanning. We also obtained the result that the elongation velocity of the DL was not influenced by changing the direction in which the electron beam irradiates the PD. From this result, we deduced that the geometrical kink motion of the PD was enhanced by diffusing carriers that were generated by the electron-beam irradiation.
Maximizing Entropy over Markov Processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of an system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....
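The quantity being maximized in these two records is the entropy of a Markov process. For a fixed Markov chain (one point inside an Interval Markov Chain's feasible set), the entropy rate has the closed form H = -Σ_i π_i Σ_j P_ij log2 P_ij, with π the stationary distribution. A small sketch with an arbitrary example matrix:

```python
import numpy as np

# Example two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Entropy rate in bits per step: H = -sum_i pi_i sum_j P_ij log2(P_ij),
# with the convention 0 * log 0 = 0.
mask = P > 0
logP = np.zeros_like(P)
logP[mask] = np.log2(P[mask])
H = -np.sum(pi[:, None] * P * logP)
print(H)
```

Maximizing this quantity over all chains whose entries respect given probability intervals is the harder synthesis problem the papers address; the sketch only evaluates the objective at one feasible point.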
Maximizing entropy over Markov processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
Markov Networks in Evolutionary Computation
Shakya, Siddhartha
2012-01-01
Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network-based EDAs are reviewed in the book. Hot current researc...
Constructing Dynamic Event Trees from Markov Models
International Nuclear Information System (INIS)
Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood
2006-01-01
In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: (a) partitioning the process variable state space into magnitude intervals (cells), (b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and (c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as a benchmark in the literature with one process variable (liquid level in a tank), and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank.
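The path-enumeration step described above (tracing all state sequences from an initiating state, keeping each branch's probability) can be sketched on a toy three-state chain. The matrix below is invented for illustration, not taken from the paper's level-control benchmark:

```python
import numpy as np

# Toy discrete-time chain: 0 = nominal, 1 = degraded, 2 = failed (absorbing).
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])

def enumerate_paths(P, start, depth):
    """All state sequences of the given depth with their probabilities."""
    paths = [((start,), 1.0)]
    for _ in range(depth):
        nxt = []
        for path, prob in paths:
            i = path[-1]
            for j, p in enumerate(P[i]):
                if p > 0:                      # prune zero-probability branches
                    nxt.append((path + (j,), prob * p))
        paths = nxt
    return paths

paths = enumerate_paths(P, start=0, depth=3)
total = sum(p for _, p in paths)               # sanity check: sums to 1
fail = sum(p for path, p in paths if path[-1] == 2)
print(len(paths), total, fail)                 # fail == P(failed within 3 steps)
```

Each enumerated path corresponds to one branch of a dynamic event tree; the per-branch probabilities are exactly what is then folded back into the PRA.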
Radford, Isolde H; Fersht, Alan R; Settanni, Giovanni
2011-06-09
Atomistic molecular dynamics simulations of the TZ1 beta-hairpin peptide have been carried out using an implicit model for the solvent. The trajectories have been analyzed using a Markov state model defined on the projections along two significant observables and a kinetic network approach. The Markov state model allowed for an unbiased identification of the metastable states of the system, and provided the basis for commitment probability calculations performed on the kinetic network. The kinetic network analysis served to extract the main transition state for folding of the peptide and to validate the results from the Markov state analysis. The combination of the two techniques allowed for a consistent and concise characterization of the dynamics of the peptide. The slowest relaxation process identified is the exchange between variably folded and denatured species, and the second slowest process is the exchange between two different subsets of the denatured state which could not be otherwise identified by simple inspection of the projected trajectory. The third slowest process is the exchange between a fully native and a partially folded intermediate state characterized by a native turn with a proximal backbone H-bond, and frayed side-chain packing and termini. The transition state for the main folding reaction is similar to the intermediate state, although a more native like side-chain packing is observed.
Varoquaux, G; Gramfort, A; Poline, J B; Thirion, B
2012-01-01
Correlations in the signal observed via functional Magnetic Resonance Imaging (fMRI), are expected to reveal the interactions in the underlying neural populations through hemodynamic response. In particular, they highlight distributed set of mutually correlated regions that correspond to brain networks related to different cognitive functions. Yet graph-theoretical studies of neural connections give a different picture: that of a highly integrated system with small-world properties: local clustering but with short pathways across the complete structure. We examine the conditional independence properties of the fMRI signal, i.e. its Markov structure, to find realistic assumptions on the connectivity structure that are required to explain the observed functional connectivity. In particular we seek a decomposition of the Markov structure into segregated functional networks using decomposable graphs: a set of strongly-connected and partially overlapping cliques. We introduce a new method to efficiently extract such cliques on a large, strongly-connected graph. We compare methods learning different graph structures from functional connectivity by testing the goodness of fit of the model they learn on new data. We find that summarizing the structure as strongly-connected networks can give a good description only for very large and overlapping networks. These results highlight that Markov models are good tools to identify the structure of brain connectivity from fMRI signals, but for this purpose they must reflect the small-world properties of the underlying neural systems. Copyright © 2012 Elsevier Ltd. All rights reserved.
Markov chains and mixing times
Levin, David A; Wilmer, Elizabeth L
2009-01-01
This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r
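The book's central object, the rate of convergence to stationarity, can be made concrete by computing total variation distance as a function of the number of steps for a small example chain (the matrix is an arbitrary illustration, not from the book):

```python
import numpy as np

# Lazy two-state chain; second eigenvalue 0.9, stationary distribution (1/2, 1/2).
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
pi = np.array([0.5, 0.5])

def tv_dist(P, pi, start, t):
    """Total variation distance between the t-step law from `start` and pi."""
    row = np.linalg.matrix_power(P, t)[start]
    return 0.5 * np.abs(row - pi).sum()

# Mixing time: first t at which the worst-case distance drops to 1/4 or below.
t_mix = next(t for t in range(1, 200)
             if max(tv_dist(P, pi, x, t) for x in range(2)) <= 0.25)
print(t_mix)  # here d(t) = 0.5 * 0.9**t, so t_mix = 7
```

The geometric decay rate 0.9 is the chain's second eigenvalue, illustrating the spectral bounds on convergence times the book develops.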
Markov Models for Handwriting Recognition
Plotz, Thomas
2011-01-01
Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden
Finding metastabilities in reversible Markov chains based on incomplete sampling
Directory of Open Access Journals (Sweden)
Fackeldey Konstantin
2017-01-01
Full Text Available In order to fully characterize the state-transition behaviour of finite Markov chains one needs to provide the corresponding transition matrix P. In many applications such as molecular simulation and drug design, the entries of the transition matrix P are estimated by generating realizations of the Markov chain and determining the one-step conditional probability Pij for a transition from state i to state j. This sampling can be computationally very demanding. Therefore, it is a good idea to reduce the sampling effort. The main purpose of this paper is to design a sampling strategy which provides a partial sampling of only a subset of the rows of such a matrix P. Our proposed approach fits very well to stochastic processes stemming from simulation of molecular systems or random walks on graphs, and it is different from matrix completion approaches, which try to approximate the transition matrix by using a low-rank assumption. It will be shown how Markov chains can be analyzed on the basis of a partial sampling. More precisely: first, we will estimate the stationary distribution from a partially given matrix P. Second, we will estimate the infinitesimal generator Q of P on the basis of this stationary distribution. Third, from the generator we will compute the leading invariant subspace, which should be identical to the leading invariant subspace of P. Fourth, we will apply Robust Perron Cluster Analysis (PCCA+) in order to identify metastabilities using this subspace.
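Two of the steps above, the stationary distribution and the leading invariant subspace whose sign structure reveals metastable sets, can be sketched on a small fully known chain (the partial-sampling estimation itself is the paper's contribution and is not reproduced; the 4-state matrix is invented for illustration):

```python
import numpy as np

# Nearly uncoupled 4-state reversible chain: metastable pairs {0,1} and {2,3}.
P = np.array([[0.89, 0.10, 0.01, 0.00],
              [0.10, 0.89, 0.00, 0.01],
              [0.01, 0.00, 0.89, 0.10],
              [0.00, 0.01, 0.10, 0.89]])

# First: stationary distribution, here via power iteration on the left.
pi = np.full(4, 0.25)
for _ in range(2000):
    pi = pi @ P

# Then: leading invariant subspace = eigenvectors of the largest eigenvalues.
evals, evecs = np.linalg.eigh(P)   # P is symmetric in this toy example
order = np.argsort(evals)
v2 = evecs[:, order[-2]]           # second eigenvector, eigenvalue ~0.98

# Its sign structure separates the two metastable sets (a PCCA+-like readout).
clusters = v2 > 0
print(evals[order[-2]], clusters)
```

The eigenvalue near 1 (0.98 here) signals slow exchange between the two pairs, and the sign pattern of its eigenvector groups states {0,1} against {2,3}.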
Dynamic modeling of presence of occupants using inhomogeneous Markov chains
DEFF Research Database (Denmark)
Andersen, Philip Hvidthøft Delff; Iversen, Anne; Madsen, Henrik
2014-01-01
on time of day, and by use of a filter of the observations it is able to capture per-employee sequence dynamics. Simulations using this method are compared with simulations using homogeneous Markov chains and show far better ability to reproduce key properties of the data. The method is based...... on inhomogeneous Markov chains where the transition probabilities are estimated using generalized linear models with polynomials, B-splines, and a filter of past observations as inputs. For treating the dispersion of the data series, a hierarchical model structure is used where one model is for low presence
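An inhomogeneous Markov chain for occupancy means the absent/present transition probabilities vary with time of day. A minimal simulation sketch (the logistic time-of-day curves and all their parameters below are made up for illustration; the paper estimates such dependencies with generalized linear models):

```python
import math
import random

def p_arrive(hour):
    """Illustrative time-varying probability of an absent -> present transition."""
    return 1 / (1 + math.exp(-(hour - 8)))    # rises around 08:00

def p_leave(hour):
    return 1 / (1 + math.exp(-(hour - 17)))   # rises around 17:00

def simulate_day(seed=0, steps_per_hour=4):
    rng = random.Random(seed)
    present, trace = 0, []
    for step in range(24 * steps_per_hour):
        hour = step / steps_per_hour
        if present == 0:
            present = 1 if rng.random() < p_arrive(hour) / steps_per_hour else 0
        else:
            present = 0 if rng.random() < p_leave(hour) / steps_per_hour else 1
        trace.append(present)
    return trace

trace = simulate_day()   # one simulated day of 15-minute presence states
```

A homogeneous chain would use constant transition probabilities and hence could not reproduce the morning-arrival/evening-departure pattern, which is the comparison made in the abstract.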
Directory of Open Access Journals (Sweden)
Rozana Alik
2016-03-01
Full Text Available This paper presents a simple checking algorithm for the maximum power point tracking (MPPT) technique for a photovoltaic (PV) system using the Perturb and Observe (P&O) algorithm. The main benefit of this checking algorithm is the simplicity and efficiency of the system, whose duty cycle produced by the MPPT is smoother and changes faster according to the maximum power point (MPP). This checking algorithm can determine the maximum power first, before the P&O algorithm takes place, to identify the voltage at MPP (VMPP), which is needed to calculate the duty cycle for the boost converter. To test the effectiveness of the algorithm, a simulation model of the PV system has been carried out using MATLAB/Simulink under different levels of irradiation, i.e. partially shaded conditions of the PV array. The results from the system using the proposed approach show a faster response and lower ripple. Besides, the results are close to the desired outputs and exhibit approximately 98.25% system efficiency. On the other hand, the system with conventional P&O MPPT seems to be unstable and has a higher percentage of error. In summary, the proposed method is useful under varying levels of irradiation, with higher system efficiency.
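The classic P&O step the abstract builds on can be sketched as hill climbing on the power-voltage curve: perturb the operating voltage, observe the power, and keep the perturbation direction only while power increases. The single-peak PV curve below is a toy stand-in (the MPP sits near 28.9 V under these assumed coefficients), not a model of a real panel or of the paper's Simulink setup.

```python
def pv_power(v):
    """Toy single-peak PV power curve (illustrative coefficients only)."""
    return max(0.0, v * (8.0 - 8.0 * (v / 40.0) ** 6))

def perturb_and_observe(v0=20.0, dv=0.5, steps=200):
    """Classic P&O: keep stepping in the direction that increased power;
    reverse direction whenever power drops."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

Note the characteristic steady-state oscillation of plain P&O around the MPP, which is the behavior the paper's checking algorithm aims to smooth.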
Consistency and refinement for Interval Markov Chains
DEFF Research Database (Denmark)
Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel
2012-01-01
Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, form the basis of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...
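Two of the basic questions behind this specification theory can be made concrete: does a given Markov chain implement an IMC (every transition probability within its interval), and is the IMC consistent (does every row admit at least one probability distribution)? The intervals below are illustrative numbers, and this elementary row-wise check is only a sketch of the semantics, not the paper's refinement algorithms.

```python
import numpy as np

# Interval bounds for a 2-state IMC (illustrative values).
low  = np.array([[0.2, 0.3],
                 [0.0, 0.7]])
high = np.array([[0.7, 0.8],
                 [0.3, 1.0]])

def implements(P, low, high, tol=1e-9):
    """P satisfies the IMC if rows are distributions and every entry
    lies inside its interval."""
    rows_ok = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    return rows_ok and bool(np.all(P >= low - tol) and np.all(P <= high + tol))

def consistent(low, high, tol=1e-9):
    """Each row admits some distribution iff sum(low) <= 1 <= sum(high)."""
    return bool(np.all(low.sum(axis=1) <= 1 + tol)
                and np.all(high.sum(axis=1) >= 1 - tol))

P_good = np.array([[0.4, 0.6], [0.2, 0.8]])
P_bad  = np.array([[0.9, 0.1], [0.2, 0.8]])   # 0.9 exceeds the [0.2, 0.7] bound
```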
Li, Jian; Jiang, Ting; Li, Sai; Chen, Wei
2013-02-18
To investigate design methods for dual insertion paths and to provide a short-term clinical overview of rotational path removable partial dentures (RPDs). In the study, 40 patients with partially edentulous arches were included and divided into two groups. The patients in group one were restored with rotational path RPDs (10 Kennedy class III and 10 Kennedy class IV, respectively). The patients in group two (20 patients), whose edentulous areas were matched with those of the patients in group one, were restored with linear path RPDs. After surveying and simulative preparation on diagnostic casts, the basic laws of designing rotational path RPDs were summarized. The oral preparation was accurately performed under the guidance of indices made on diagnostic casts after simulative preparation. The 40 dentures were recalled two weeks and one year after insertion. The evaluations of the clinical outcome, including retention, stability, mastication function, esthetics and wearing convenience, were marked as good, acceptable, or poor. The comparison of the evaluation results was performed between the two groups. In the rotational path design for Kennedy class III or IV RPDs, the angles (α) of the dual insertion paths should be designed within a scope of approximately 10°-15°. When the angle (α) became larger, denture retention improved, but accordingly the posterior abutments needed more preparation. In the clinical application, the first insertions of the 40 dentures were all favorably accomplished. When the rotational path RPDs were compared to linear path RPDs, the time required for first insertion showed no statistically significant difference [(32±8) min and (33±8) min respectively, P>0.05]. Recalled two weeks and one year after insertion, in the esthetics evaluation, all 20 rotational path RPDs were evaluated as "A", but only 7 (two weeks after) and 6 (one year after) linear path RPDs were evaluated as "A" (P<0.05). There was no significant difference in other evaluation results
Katoen, Joost P.; Maneesh Khattri, M.; Zapreev, I.S.; Zapreev, I.S.
2005-01-01
This short tool paper introduces MRMC, a model checker for discrete-time and continuous-time Markov reward models. It supports reward extensions of PCTL and CSL, and allows for the automated verification of properties concerning long-run and instantaneous rewards as well as cumulative rewards. In
Markov Decision Processes in Practice
Boucherie, Richardus J.; van Dijk, N.M.
2017-01-01
It is over 30 years since D.J. White started his series of surveys on practical applications of Markov decision processes (MDP), over 20 years since the phenomenal book by Martin Puterman on the theory of MDP, and over 10 years since Eugene A. Feinberg and Adam Shwartz published their Handbook
Markov chain modelling of pitting corrosion in underground pipelines
Energy Technology Data Exchange (ETDEWEB)
Caleyo, F. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)
2009-09-15
A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed-form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from repeated pipeline in-line inspections and laboratory immersion experiments.
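For the time-homogeneous special case of a linear pure-birth (Yule) process, Kolmogorov's forward equations have the well-known closed-form solution P(X_t = n | X_0 = 1) = e^{-λt}(1 - e^{-λt})^{n-1}, which a Gillespie-style simulation can be checked against. This is only the homogeneous sketch; the paper's process is non-homogeneous, and the rate λ below is an arbitrary illustration, not a corrosion parameter.

```python
import math
import random

def yule_pmf(n, lam, t):
    """Closed-form P(X_t = n | X_0 = 1) for a linear pure-birth process."""
    return math.exp(-lam * t) * (1.0 - math.exp(-lam * t)) ** (n - 1)

def simulate_yule(lam, t, rng):
    """Gillespie-style simulation: from state n, jump to n+1 at rate n*lam."""
    n, clock = 1, 0.0
    while True:
        clock += rng.expovariate(n * lam)
        if clock > t:
            return n
        n += 1

rng = random.Random(1)
lam, t = 0.5, 1.0
samples = [simulate_yule(lam, t, rng) for _ in range(20000)]
p1_mc = samples.count(1) / len(samples)   # should be close to exp(-lam*t)
```

The mean of the simulated distribution should match the analytic mean e^{λt}, the quantity the paper correlates with the experimentally obtained deterministic mean.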
Learning Markov Decision Processes for Model Checking
DEFF Research Database (Denmark)
Mao, Hua; Chen, Yingke; Jaeger, Manfred
2012-01-01
Constructing an accurate system model for formal model verification can be both resource demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend the algorithm on learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation...
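One ingredient of such learning algorithms is estimating output distributions from observed input/output sequences by frequency counting. The sketch below shows only that counting step over a single-state abstraction; the actual algorithm (adapted from deterministic probabilistic finite automata learning, e.g. Alergia-style state merging) also infers the state structure, which is omitted here. The slot-machine trace data is invented for illustration.

```python
from collections import defaultdict

def learn_io_model(traces):
    """Estimate P(output | input) by relative frequencies from observed
    alternating input/output traces. (Counting step only; no state merging.)"""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:                      # trace = [(input, output), ...]
        for inp, out in trace:
            counts[inp][out] += 1
    model = {}
    for inp, outs in counts.items():
        total = sum(outs.values())
        model[inp] = {o: c / total for o, c in outs.items()}
    return model

# Hypothetical observed behavior of a slot machine.
traces = [[("coin", "spin"), ("coin", "win")],
          [("coin", "spin"), ("coin", "spin")]]
model = learn_io_model(traces)
```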
Markov chains and mixing times
Levin, David A
2017-01-01
Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...
Markov Chain Ontology Analysis (MCOA).
Frost, H Robert; McCray, Alexa T
2012-02-03
Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
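The eigenvector-based importance score the abstract describes can be illustrated in miniature: for an ergodic chain, the leading left eigenvector (the stationary distribution) ranks nodes by how much probability mass flows through them, and power iteration recovers it. The tiny 3-node chain below is an invented stand-in, not a real ontology graph, and this omits MCOA's construction of the adjusted transition matrix.

```python
import numpy as np

def stationary_scores(P, iters=500):
    """Node importance as stationary probability of an ergodic chain,
    computed by power iteration from the uniform distribution."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(iters):
        pi = pi @ P
    return pi

# Small ergodic chain over 3 'classes' (illustrative only).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
scores = stationary_scores(P)   # the middle, well-connected node ranks highest
```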
Markov processes characterization and convergence
Ethier, Stewart N
2009-01-01
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists."[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference."-American Scientist"There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings."-Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts"Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook."-Journal of Statistical PhysicsMarkov Proce...
Analyzing the profit-loss sharing contracts with Markov model
Directory of Open Access Journals (Sweden)
Imam Wahyudi
2016-12-01
Full Text Available The purpose of this paper is to examine how to use a first order Markov chain to build a reliable monitoring system for profit-loss sharing based contracts (PLS as the mode of financing contracts in Islamic banks) with censored continuous-time observations. The paper adopts longitudinal analysis within the first order Markov chain framework. The Laplace transform was used, with a homogeneous continuous time assumption, to generate the transition matrix from the discretized generator matrix. Various metrics, i.e. eigenvalues and eigenvectors, were used to test the first order Markov chain assumption. A Cox semi-parametric model was also used to analyze the momentum and waiting time effects as non-Markov behavior. The result shows that the first order Markov chain is powerful as a monitoring tool for Islamic banks. We find that waiting time negatively and significantly affected present rating downgrades (upgrades). Likewise, the momentum covariate showed a negative effect. Finally, the result confirms that different origin ratings have different movement behaviors. The paper explores the potential of the Markov chain framework as a risk management tool for Islamic banks. It provides valuable insight and an integrative model for banks to manage their borrower accounts. This model can be developed into a powerful early warning system to identify which borrowers need to be monitored intensively. Ultimately, this model could potentially increase the efficiency, productivity and competitiveness of Islamic banks in Indonesia. The analysis used only rating data. Further study should be able to give additional information about the determinant factors of rating movement of the borrowers by incorporating various factors such as contract-related factors, bank-related factors, borrower-related factors and macroeconomic factors.
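The link between a continuous-time generator matrix Q and the transition matrix is P(t) = exp(Qt). The sketch below computes this matrix exponential directly by scaling and squaring (rather than the paper's Laplace-transform route) for a hypothetical three-category rating generator; the rate values are invented for illustration.

```python
import numpy as np

def expm(A, scaling=20, terms=20):
    """Matrix exponential via scaling-and-squaring with a truncated Taylor
    series; adequate for small, well-behaved generators like this one."""
    B = A / (2 ** scaling)
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    for _ in range(scaling):
        E = E @ E
    return E

# Hypothetical generator for ratings {good, watch, default}; rows sum to 0.
Q = np.array([[-0.10,  0.08,  0.02],
              [ 0.15, -0.40,  0.25],
              [ 0.00,  0.00,  0.00]])   # default is absorbing

P1 = expm(Q)   # one-period transition matrix P(1) = exp(Q)
```

By the Chapman-Kolmogorov property, `expm(2 * Q)` must equal `P1 @ P1`, which is a convenient sanity check on any generator discretization.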
CSIR Research Space (South Africa)
Rens, G
2015-01-01
Full Text Available A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch...
Learning Markov models for stationary system behaviors
DEFF Research Database (Denmark)
Chen, Yingke; Mao, Hua; Jaeger, Manfred
2012-01-01
Establishing an accurate model for formal verification of an existing hardware or software system is often a manual process that is both time consuming and resource demanding. In order to ease the model construction phase, methods have recently been proposed for automatically learning accurate system models. Often, however, observations of the system are limited to a single long observation sequence, and in these situations existing automatic learning methods cannot be applied. In this paper, we adapt algorithms for learning variable order Markov chains from a single observation sequence of a target system, so that stationary system properties can be verified using the learned model. Experiments demonstrate that system properties (formulated as stationary probabilities of LTL formulas) can be reliably identified using the learned model.
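The core estimation step from a single long observation sequence can be sketched for the fixed order-1 case: count consecutive pairs and normalize rows. Variable-order learning generalizes the context length used here; the two-state ground-truth chain is an invented example.

```python
from collections import Counter

import numpy as np

def learn_chain(seq, states):
    """Order-1 transition-frequency estimate from one long observation
    sequence (the variable-order algorithms adapt the context length)."""
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (a, b), c in Counter(zip(seq, seq[1:])).items():
        P[idx[a], idx[b]] = c
    P /= P.sum(axis=1, keepdims=True)
    return P

rng = np.random.default_rng(7)
true_P = np.array([[0.9, 0.1],
                   [0.3, 0.7]])
seq, s = [], 0
for _ in range(50000):                 # one long trace of the target system
    seq.append(s)
    s = rng.choice(2, p=true_P[s])
P_hat = learn_chain(seq, [0, 1])
```

Stationary properties of the learned `P_hat` can then be checked against specifications, which is the verification use the paper targets.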
International Nuclear Information System (INIS)
Jung, Joon-Yong; Jee, Won-Hee; Chun, Ho Jong; Ahn, Myeong Im; Kim, Yang-Soo
2010-01-01
Background: Partial-thickness tear of the rotator cuff is a common cause of shoulder pain. Magnetic resonance (MR) arthrography has been described as a useful measure to diagnose rotator cuff abnormalities. Purpose: To determine the reliability and accuracy of MR arthrography with abduction and external rotation (ABER) view for the diagnosis of partial-thickness tears of the rotator cuff. Material and Methods: Among patients who underwent MR arthrographies, 22 patients (12 men, 10 women; mean age 45 years) who had either partial-thickness tear or normal tendon on arthroscopy were included. MR images were independently scored by two observers for partial-thickness tears of the rotator cuff. Interobserver and intraobserver agreements for detection of partial-thickness tears of the rotator cuff were calculated by using κ coefficients. The differences in areas under the receiver operating characteristic (ROC) curves were assessed with a univariate Z-score test. Differences in sensitivity and specificity for interpretations based on different imaging series were tested for significance using the McNemar statistic. Results: Sensitivity, specificity, and accuracy of each reader on MR imaging without ABER view were 83%, 90%, and 86%, and 83%, 80%, and 82%, respectively, whereas on overall interpretation including ABER view, the sensitivity, specificity, and accuracy of each reader were 92%, 70%, and 82%, and 92%, 80%, and 86%, respectively. Including ABER view, interobserver agreement for partial-thickness tear increased from κ=0.55 to κ=0.68. Likewise, intraobserver agreements increased from κ=0.79 and 0.53 to κ=0.81 and 0.70 for each reader, respectively. The areas under the ROC curves for each reader were 0.96 and 0.90, which were not significantly different. Conclusion: Including ABER view in routine sequences of MR arthrography increases the sensitivity, and inter- and intraobserver agreements for detecting partial-thickness tear of rotator cuff tendon
Verification of Open Interactive Markov Chains
Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech
2012-01-01
Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMCs pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far, however, IMC verification did not consider compositionality properties but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....
Spectral methods for quantum Markov chains
Energy Technology Data Exchange (ETDEWEB)
Szehr, Oleg
2014-05-08
The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.
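The relation between the eigenvalue spectrum of the transition map and convergence, which this work generalizes to the quantum setting, is easy to see classically: the second-largest eigenvalue modulus (SLEM) governs the geometric rate at which a chain approaches stationarity. The 2x2 chain below is an illustrative classical example, not a quantum channel.

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus; the distance to stationarity
    decays roughly like slem(P)**n."""
    w = np.sort(np.abs(np.linalg.eigvals(P)))
    return w[-2]

P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
mu = slem(P)                       # eigenvalues are 1 and 0.4 here

pi = np.array([2 / 3, 1 / 3])      # stationary distribution: solves pi P = pi
# Total-variation distance from state 0 after n steps shrinks by factor mu.
dists = [0.5 * np.abs(np.linalg.matrix_power(P, n)[0] - pi).sum()
         for n in (1, 2, 3)]
```

For a two-state chain the decay is exactly geometric with ratio `mu`; in general the SLEM gives the asymptotic rate.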
A scaling analysis of a cat and mouse Markov chain
Litvak, Nelli; Robert, Philippe
2012-01-01
Given a Markov chain ($C_n$) on a discrete state space $S$, a Markov chain ($C_n, M_n$) on the product space $S \times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both
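A small simulation of a product chain of this flavor: the first coordinate (the cat) always moves according to the underlying chain, while the second (the mouse) is assumed here to move only when the two coordinates coincide, which is how the abstract's truncated description is read. The random walk on a 5-cycle is an invented stand-in for the underlying chain.

```python
import random

def step(state, rng):
    """Underlying chain: simple random walk on a 5-cycle (illustrative)."""
    return (state + rng.choice((-1, 1))) % 5

def cat_and_mouse(n_steps, rng):
    """Product chain (C_n, M_n): the cat always moves; the mouse moves only
    when the coordinates coincide (assumed reading of the construction)."""
    cat, mouse, trace = 0, 2, []
    for _ in range(n_steps):
        if cat == mouse:
            mouse = step(mouse, rng)
        cat = step(cat, rng)
        trace.append((cat, mouse))
    return trace

trace = cat_and_mouse(1000, random.Random(3))
```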
Criterion of Semi-Markov Dependent Risk Model
Institute of Scientific and Technical Information of China (English)
Xiao Yun MO; Xiang Qun YANG
2014-01-01
A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results make the relations between elements of the semi-Markov dependent risk model clearer and are applicable to the Markov dependent risk model.
Directory of Open Access Journals (Sweden)
Jean B. Lasserre
2000-01-01
Full Text Available We consider the class of Markov kernels for which the weak or strong Feller property fails to hold at some discontinuity set. We provide a simple necessary and sufficient condition for existence of an invariant probability measure as well as a Foster-Lyapunov sufficient condition. We also characterize a subclass, the quasi (weak or strong) Feller kernels, for which the sequences of expected occupation measures share the same asymptotic properties as for (weak or strong) Feller kernels. In particular, it is shown that the sequences of expected occupation measures of strong and quasi strong-Feller kernels with an invariant probability measure converge setwise to an invariant measure.
Markov process of muscle motors
International Nuclear Information System (INIS)
Kondratiev, Yu; Pechersky, E; Pirogov, S
2008-01-01
We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors
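A discrete-time sketch of the ensemble described in the abstract: each motor binds and unbinds at exponential rates, bound motors exert a restoring force proportional to displacement, and the filament moves at a velocity proportional to the average displacement over all motors. All rates and units below are illustrative assumptions, not values from the paper, and the mean-field time stepping replaces the paper's exact analysis.

```python
import random

def simulate_motors(n_motors=200, dt=0.01, steps=2000,
                    k_on=2.0, k_off=1.0, gamma=1.0, rng=None):
    """Euler-style simulation of the motor ensemble; returns the final
    fraction of bound motors."""
    rng = rng or random.Random(0)
    bound = [False] * n_motors
    x = [0.0] * n_motors          # displacement from the neutral position
    for _ in range(steps):
        # Filament velocity proportional to the average of all displacements.
        mean_disp = sum(xi for xi, b in zip(x, bound) if b) / n_motors
        v = -gamma * mean_disp
        for i in range(n_motors):
            if bound[i]:
                x[i] += v * dt            # bound motors ride with the filament
                if rng.random() < k_off * dt:
                    bound[i], x[i] = False, 0.0
            elif rng.random() < k_on * dt:
                bound[i] = True           # binds at the neutral position
    return sum(bound) / n_motors

frac_bound = simulate_motors()   # should equilibrate near k_on/(k_on+k_off)
```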
Barbu, Vlad
2008-01-01
Semi-Markov processes are much more general and better adapted to applications than Markov ones, because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn times in the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes
International Nuclear Information System (INIS)
Chen Shibing
2011-01-01
Objective: To observe the long-term efficacy of partial spleen embolization combined with vincristine infusion in treating refractory idiopathic thrombocytopenic purpura (ITP) and Evans syndrome. Methods: During the period of 2000-2007, partial spleen embolization together with vincristine infusion was carried out in 30 patients with refractory idiopathic thrombocytopenic purpura (n=24) or Evans syndrome (n=6). Vincristine infusion (2 mg) via the splenic artery was performed before the partial spleen embolization procedure. The long-term effectiveness was observed and analyzed. Results: One week after the treatment, the platelet count had increased from (10.23±8.28) × 10⁹/L preoperatively to (140.28±85.45) × 10⁹/L in patients with ITP, while the platelet count had increased from (12±8) × 10⁹/L preoperatively to (210±60) × 10⁹/L in patients with Evans syndrome. Meanwhile, the hemoglobin level showed an increase in varying degrees, from (63.00±13.62) g/L preoperatively to (123.00±13.14) g/L postoperatively. The therapeutic effectiveness was 100%. During the follow-up period of 3-5 years, recurrence was seen in 11 patients (36.7%) and the overall efficacy rate was 63.3%. Conclusion: For the treatment of refractory idiopathic thrombocytopenic purpura and Evans syndrome, partial spleen embolization combined with vincristine infusion carries reliable long-term efficacy. (author)
Estimation and uncertainty of reversible Markov models.
Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank
2015-11-07
Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
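The reversible maximum-likelihood estimate from a count matrix is commonly computed with a self-consistent fixed-point iteration of the form used in the Markov state model literature; the sketch below is that generic iteration (without a fixed stationary vector), not necessarily the specific algorithm of this paper, and the count matrix is invented for illustration.

```python
import numpy as np

def reversible_mle(C, iters=2000):
    """Fixed-point iteration for a reversible transition matrix estimate
    from a count matrix C. Writes P_ij = X_ij / x_i with X symmetric, so
    detailed balance holds by construction."""
    C = np.asarray(C, dtype=float)
    S = C + C.T
    c_i = C.sum(axis=1)
    X = S / 2.0                      # symmetric initial guess
    for _ in range(iters):
        x_i = X.sum(axis=1)
        X = S / (c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :])
    x_i = X.sum(axis=1)
    return X / x_i[:, None], x_i / x_i.sum()

# Hypothetical 3-state count matrix from a finite simulation.
C = np.array([[90, 10,  0],
              [ 8, 80, 12],
              [ 0, 15, 85]])
P, pi = reversible_mle(C)
```

Detailed balance, pi_i P_ij = pi_j P_ji, can be verified directly on the result.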
Bayesian tomography by interacting Markov chains
Romary, T.
2017-12-01
In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter relaxing the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill fitted to parallel implementation. Running a large number of chains in parallel may be suboptimal as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but only exchanging information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class enables the design of interacting schemes that can take advantage of the whole history of the chain, by authorizing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival travel time tomography.
Markov chain aggregation for agent-based models
Banisch, Sven
2016-01-01
This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...
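The lumpability condition at the heart of such micro-to-macro aggregation can be checked directly: a partition of the micro states yields a Markov chain at the macro level (strong lumpability) exactly when, within each block, every row has the same aggregated transition probability into each block. The 4-configuration micro-chain below is an invented example, not one of the book's agent-based models.

```python
import numpy as np

def is_lumpable(P, partition, tol=1e-12):
    """Strong lumpability: for every block and every target block, the
    aggregated probability sum_{j in target} P[i, j] must be identical
    for all rows i inside the block."""
    for block in partition:
        agg = np.array([[P[i, list(target)].sum() for target in partition]
                        for i in block])
        if not np.allclose(agg, agg[0], atol=tol):
            return False
    return True

# Micro-chain on 4 agent configurations; candidate blocks {0,1} and {2,3}.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.1, 0.1, 0.6, 0.2],
              [0.0, 0.2, 0.3, 0.5]])
partition = [(0, 1), (2, 3)]

P_bad = P.copy()
P_bad[1] = [0.4, 0.4, 0.1, 0.1]   # breaks the block-aggregated row equality
```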
Valchev, Nikola; Zijdewind, Inge; Keysers, Christian; Gazzola, Valeria; Avenanti, Alessio; Maurits, Natasha M.
2015-01-01
Seeing others performing an action induces the observers' motor cortex to "resonate" with the observed action. Transcranial magnetic stimulation (TMS) studies suggest that such motor resonance reflects the encoding of various motor features of the observed action, including the apparent motor
Large deviations for Markov chains in the positive quadrant
Energy Technology Data Exchange (ETDEWEB)
Borovkov, A A; Mogul' skii, A A [S.L. Sobolev Institute for Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)
2001-10-31
The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,..., X(y,0)=y, in the positive quadrant. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=P(X(y,1) ∈ A): for some N ≥ 0 the measure P(y,dx) depends only on x_2, y_2, and x_1−y_1 in the domain x_1>N, y_1>N, and only on x_1, y_1, and x_2−y_2 in the domain x_2>N, y_2>N. For such chains the asymptotic behaviour is found for a fixed set B as s→∞, |x|→∞, and n→∞. Some other conditions on the growth of parameters are also considered, for example |x−y|→∞, |y|→∞. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities P(X(0,n) ∈ x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as to applied issues, since such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks, such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects. The present paper is an attempt to find the extent to which an asymptotic analysis is possible for Markov chains of this type in their general
Timed Comparisons of Semi-Markov Processes
DEFF Research Database (Denmark)
Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio
2018-01-01
-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being “faster than” between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...
Probabilistic Reachability for Parametric Markov Models
DEFF Research Database (Denmark)
Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun
2011-01-01
Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression...
Inhomogeneous Markov point processes by transformation
DEFF Research Database (Denmark)
Jensen, Eva B. Vedel; Nielsen, Linda Stougaard
2000-01-01
We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach......, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail....
Markov-modulated and feedback fluid queues
Scheinhardt, Willem R.W.
1998-01-01
In the last twenty years the field of Markov-modulated fluid queues has received considerable attention. In these models a fluid reservoir receives and/or releases fluid at rates which depend on the actual state of a background Markov chain. In the first chapter of this thesis we give a short
Active Learning of Markov Decision Processes for System Verification
DEFF Research Database (Denmark)
Chen, Yingke; Nielsen, Thomas Dyhre
2012-01-01
deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and it is demonstrated that the proposed active learning procedure can significantly reduce the amount of data required...... demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences...... of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently we seek to minimize the amount of data required. In this paper we propose an algorithm for learning...
Classification Using Markov Blanket for Feature Selection
DEFF Research Database (Denmark)
Zeng, Yifeng; Luo, Jian
2009-01-01
Selecting relevant features is essential when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient for, and may even improve, classification performance. This paper studies a statistical method of Markov blanket induction algorithm...... for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using a Markov blanket...... induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance....
Quantum Markov Chain Mixing and Dissipative Engineering
DEFF Research Database (Denmark)
Kastoryano, Michael James
2012-01-01
This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state...... of the system at the present point in time, but not on the history of events. Very many important processes in nature are of this type, therefore a good understanding of their behaviour has turned out to be very fruitful for science. Markov chains always have a non-empty set of limiting distributions...... (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical...
The Bacterial Sequential Markov Coalescent.
De Maio, Nicola; Wilson, Daniel J
2017-05-01
Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is
Hidden Markov Model Application to Transfer The Trader Online Forex Brokers
Directory of Open Access Journals (Sweden)
Farida Suharleni
2012-05-01
The Hidden Markov Model is an elaboration of the Markov chain that applies to cases where the states cannot be observed directly. In this research, a Hidden Markov Model is used to study traders' transitions between online forex brokers. In a Hidden Markov Model, the observed state is the observable part and the hidden state is the hidden part; the model allows systems with interrelated observed and hidden states to be modeled. Here the observed states in traders' transitions to online forex brokers are categories 1 through 5, determined by the condition of each broker, while the hidden states are the online forex brokers Marketiva, Masterforex, Instaforex, FBS and Others. The first step in applying the Hidden Markov Model in this research is constructing the model by building the transition probability matrix (A) over the brokers. The next step is building the observation probability matrix (B) from the conditional probabilities of the five categories given each broker, and determining the initial state probabilities (π) for each broker. The last step uses the Viterbi algorithm to find the most probable hidden state sequence, that is, the sequence of brokers, given the model and the observed states (the five categories). The Hidden Markov Model was implemented as a program with the Viterbi algorithm in Delphi 7.0, with observed states based on simulated data. Example: for T = 5 observations with observed state sequence O = (2,4,3,5,1), the most probable hidden state sequence is X1 = FBS, X2 = Masterforex, X3 = Marketiva, X4 = Others, and X5 = Instaforex.
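The decoding step this abstract describes, Viterbi search over the (A, B, π) parameterization, can be sketched in a few lines. A minimal NumPy version (the matrices are left as function arguments; the paper estimates them from broker data, and all names here are illustrative):

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most probable hidden-state sequence for the observations `obs`.
    A: (n, n) transition matrix, B: (n, m) emission matrix, pi: (n,) initial probs."""
    n, T = A.shape[0], len(obs)
    delta = np.zeros((T, n))                 # best path probability ending in each state
    psi = np.zeros((T, n), dtype=int)        # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A   # scores[i, j]: best path ending i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):            # follow back-pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

For longer sequences the products underflow, so a practical implementation works with log probabilities; the recursion is otherwise identical.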
Potegal, Michael; Drewel, Elena H; MacDonald, John T
2018-01-01
We explored associations between EEG pathophysiology and emotional/behavioral (E/B) problems of children with two types of epilepsy using standard parent questionnaires and two new indicators: tantrums recorded by parents at home and brief, emotion-eliciting situations in the laboratory. Children with Benign Rolandic epilepsy (BRE, N = 6) reportedly had shorter, more angry tantrums from which they recovered quickly. Children with Complex Partial Seizures (CPS, N = 13) had longer, sadder tantrums often followed by bad moods. More generally, BRE correlated with anger and aggression; CPS with sadness and withdrawal. Scores of a composite group of siblings (N = 11) were generally intermediate between the BRE and CPS groups. Across all children, high voltage theta and/or interictal epileptiform discharges (IEDs) correlated with negative emotional reactions. Such EEG abnormalities in left hemisphere correlated with greater social fear, right hemisphere EEG abnormalities with greater anger. Right hemisphere localization in CPS was also associated with parent-reported problems at home. If epilepsy alters neural circuitry thereby increasing negative emotions, additional assessment of anti-epileptic drug treatment of epilepsy-related E/B problems would be warranted.
Schmidt games and Markov partitions
International Nuclear Information System (INIS)
Tseng, Jimmy
2009-01-01
Let T be a C^2-expanding self-map of a compact, connected, C^∞ Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x_0 in M, the set of points whose forward orbit closures miss x_0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Samson, Adeline
2014-01-01
Parameter estimation in multidimensional diffusion models with only one coordinate observed is highly relevant in many biological applications, but a statistically difficult problem. In neuroscience, the membrane potential evolution in single neurons can be measured at high frequency, but biophys...
Toledano, Rafael; Jovel, Camilo Espinosa; Jiménez-Huete, Adolfo; Bayarri, Pau Giner; Campos, Dulce; Gomariz, Elena López; Giráldez, Beatriz González; García-Morales, Irene; Falip, Mercé; Agredano, Paula Martínez; Palao, Susana; Prior, María José Aguilar Amat; Pascual, María Rosa Querol; Navacerrada, Francisco José; González, Francisco Javier López; Ojeda, Joaquín; Sáez, Aránzazu Alfaro; Bermejo, Pedro Emilio; Gil-Nagel, Antonio
2017-08-01
Eslicarbazepine acetate (ESL, Aptiom™) is a once-daily anticonvulsant, approved as adjunctive treatment of partial-onset seizures (POS). Historical-controlled trials investigating the use of ESL as monotherapy have demonstrated a favorable efficacy and tolerability profile in patients with POS. This prospective, non-interventional study recruited POS patients in 17 hospitals in Spain. After a 3-month baseline period, ESL therapy was initiated at 400 mg QD and up-titrated to an optimal maintenance dose based on clinical response and tolerance. The incidence of seizures was assessed via seizure calendars, and the nature and severity of adverse events (AEs) were also recorded. A total of 117 patients (aged 9-87 years) enrolled in the study and were treated with ESL at either 400 mg/day (3.4% of patients), 800 mg/day (61% of patients), 1200 mg/day (27.1% of patients) or 1600 mg/day (8.5% of patients). At 3 months, 82.0% (n=72) of patients achieved a ≥50% reduction in seizure frequency, compared to 79.7% (n=67) of patients at 6 months and 83.0% (n=49) at 12 months. Patients who suffered secondary generalized tonic-clonic (SGTC) seizures had seizure-free rates of 71% (n=27), 69.6% (n=29), and 72.7% (n=16) at 3, 6, and 12 months, respectively. Overall, 18 patients (15.3%) reported AEs of instability and dizziness (n=9), somnolence (n=3), mild hyponatremia (n=3), headache (n=1), hypertriglyceridemia (n=1), and allergic reaction (n=1), which led to discontinuation of ESL treatment. ESL is effective and well tolerated as monotherapy for patients with POS, which supports previous findings. Early use is supported by its frequent use as monotherapy in this study and the lack of severe side effects. Copyright © 2017 Elsevier Inc. All rights reserved.
Pemodelan Markov Switching Dengan Time-varying Transition Probability
Savitri, Anggita Puri; Warsito, Budi; Rahmawati, Rita
2016-01-01
Exchange rate, or currency, is an economic variable which reflects a country's state of economy. It fluctuates over time because of its ability to switch between conditions or regimes, driven by economic and political factors. The changes in the exchange rate are depreciation and appreciation. Therefore, it can be modeled using Markov Switching with Time-Varying Transition Probability, which captures the conditional regime changes and makes use of an information variable. From this model, time-varying transition probabili...
Finite Markov processes and their applications
Iosifescu, Marius
2007-01-01
A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models. The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic ch
Markov chains models, algorithms and applications
Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen
2013-01-01
This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data. This book consists of eight chapters. Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods
Markov chains analytic and Monte Carlo computations
Graham, Carl
2014-01-01
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies. A detailed and rigorous presentation of Markov chains with discrete time and state space. An appendix presenting probabilistic notions that are nec
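The analytic/Monte Carlo pairing in the title can be illustrated on a toy two-state chain: the stationary distribution obtained analytically as the left Perron eigenvector of the transition matrix versus the long-run visit frequencies of a simulated path (the matrix below is an arbitrary illustration, not taken from the book):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])                   # illustrative two-state transition matrix

# Analytic: stationary distribution = left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                           # here pi = (5/6, 1/6)

# Monte Carlo: long-run fraction of time the simulated chain spends in each state.
rng = np.random.default_rng(0)
state, n = 0, 20000
visits = np.zeros(2)
for _ in range(n):
    visits[state] += 1
    state = rng.choice(2, p=P[state])
freq = visits / n                            # approaches pi for large n
```

The agreement between `pi` and `freq` is exactly the ergodic theorem the analytic and simulation viewpoints share.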
Filtering with Discrete State Observations
International Nuclear Information System (INIS)
Dufour, F.; Elliott, R. J.
1999-01-01
The problem of estimating a finite state Markov chain observed via a process on the same state space is discussed. Optimal solutions are given for both the 'weak' and 'strong' formulations of the problem. The 'weak' formulation proceeds using a reference probability and a measure change for the Markov chain. The 'strong' formulation considers an observation process related to perturbations of the counting processes associated with the Markov chain. In this case the 'small noise' convergence is investigated
A scaling analysis of a cat and mouse Markov chain
Litvak, Nelli; Robert, Philippe
Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space ${\cal S}$, a Markov chain $(C_n,M_n)$ on the product space ${\cal S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain
Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions
Directory of Open Access Journals (Sweden)
Stepchenko Arthur
2015-12-01
Time series of earth-observation-based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and for the management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. In this paper, we make a one-step-ahead prediction of a 7-daily time series of the NDVI index using Markov chains. The choice of a Markov chain is due to the fact that a Markov chain is a sequence of random variables in which each variable occupies some state, and a Markov chain contains the probabilities of moving from one state to another.
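The forecasting scheme outlined here, estimate transition probabilities from the discretized series and predict the most likely next state, can be sketched as follows (this assumes the NDVI values have already been binned into discrete states; the function names are illustrative, not from the paper):

```python
import numpy as np

def fit_transition_matrix(states, n_states):
    """Estimate transition probabilities from an observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Rows for unvisited states fall back to uniform, so every row is a distribution.
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

def predict_next(states, n_states):
    """One-step-ahead forecast: most probable state after the last observation."""
    P = fit_transition_matrix(states, n_states)
    return int(P[states[-1]].argmax())
```

A forecast of the index value itself would then map the predicted state back to, say, the midpoint of its NDVI bin.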
Generated dynamics of Markov and quantum processes
Janßen, Martin
2016-01-01
This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic feature of quantum processes are oscillations, interference, lots of stationary states in bounded systems and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes are relaxations to a single stationary state. Quantum processes apply to systems where all variables, that control reversibility, are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant for the dynamic description. Their absence renders the dynamic irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put in...
Confluence reduction for Markov automata (extended version)
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models
Subharmonic projections for a quantum Markov semigroup
International Nuclear Information System (INIS)
Fagnola, Franco; Rebolledo, Rolando
2002-01-01
This article introduces a concept of subharmonic projections for a quantum Markov semigroup, in view of characterizing the support projection of a stationary state in terms of the semigroup generator. These results, together with those of our previous article [J. Math. Phys. 42, 1296 (2001)], lead to a method for proving the existence of faithful stationary states. This is often crucial in the analysis of ergodic properties of quantum Markov semigroups. The method is illustrated by applications to physical models
Transition Effect Matrices and Quantum Markov Chains
Gudder, Stan
2009-06-01
A transition effect matrix (TEM) is a quantum generalization of a classical stochastic matrix. By employing a TEM we obtain a quantum generalization of a classical Markov chain. We first discuss state and operator dynamics for a quantum Markov chain. We then consider various types of TEMs and vector states. In particular, we study invariant, equilibrium and singular vector states and investigate projective, bistochastic, invertible and unitary TEMs.
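The classical-to-quantum analogy can be made concrete with a generic quantum-channel step, a Kraus-operator map playing the role of a stochastic matrix. The sketch below uses a bit-flip channel for illustration; it is a simplified analogue, not Gudder's TEM construction:

```python
import numpy as np

p = 0.25                                          # flip probability of the channel
K = [np.sqrt(1 - p) * np.eye(2),                  # Kraus operators of a bit-flip channel
     np.sqrt(p) * np.array([[0.0, 1.0], [1.0, 0.0]])]

def channel_step(rho, kraus):
    """One step of a quantum Markov chain: rho -> sum_i K_i rho K_i^dagger."""
    return sum(k @ rho @ k.conj().T for k in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
for _ in range(50):
    rho = channel_step(rho, K)
# The diagonal relaxes toward the maximally mixed state (0.5, 0.5),
# mirroring convergence of a classical chain to its stationary distribution.
```

The trace is preserved at every step because the Kraus operators satisfy the completeness relation, just as the rows of a stochastic matrix sum to one.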
Energy Technology Data Exchange (ETDEWEB)
Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)
2008-07-18
We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)
Hidden Markov models: the best models for forager movements?
Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan
2013-01-01
One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for this purpose. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.
Adaptive Markov Chain Monte Carlo
Jadoon, Khan
2016-08-08
A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agriculture field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer thickness parameters are not as well estimated as the layers' electrical conductivity, because layer thickness in the model exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to the field measurements in a drip irrigation system demonstrates that the parameters of the model can be better estimated for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
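The adaptive MCMC machinery can be sketched in one dimension with a random-walk Metropolis sampler whose proposal scale is tuned from the recent acceptance rate. This is a simplified stand-in for the algorithm used in the study: the EMI forward model is replaced by an arbitrary log-posterior argument, and all names are illustrative:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, seed=0):
    """Random-walk Metropolis with the proposal scale adapted toward the
    classic 1-D rule-of-thumb acceptance rate of ~44%. Sketch only: a careful
    implementation freezes adaptation after burn-in to preserve ergodicity."""
    rng = np.random.default_rng(seed)
    x, lp = float(x0), log_post(x0)
    scale, acc = 1.0, 0
    samples = []
    for i in range(1, n_iter + 1):
        prop = x + scale * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
            acc += 1
        if i % 100 == 0:                           # adapt every 100 proposals
            scale *= np.exp(acc / 100 - 0.44)
            acc = 0
        samples.append(x)
    return np.array(samples)
```

The same accept/reject core, with the forward model inside `log_post`, is what produces the posterior distributions of layer thicknesses and conductivities described above.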
Directory of Open Access Journals (Sweden)
Eric A Zilli
2008-12-01
Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system, or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previously shown that when tasks are written mathematically as a form of partially observable Markov decision processes, the structure of the tasks provides information regarding the possible utility of certain memory systems. These previous analyses dealt with the disambiguation problem: given a specific ambiguous observation of the environment, is there information provided by a given memory strategy that can disambiguate that observation to allow a correct decision? Here we extend this approach to cases where multiple memory systems can be strategically combined in different ways. Specifically, we analyze the disambiguation arising from three ways by which episodic-like memory retrieval might be cued (by another episodic-like memory, by a semantic association, or by working memory for some earlier observation). We also consider the disambiguation arising from holding earlier working memories, episodic-like memories or semantic associations in working memory. From these analyses we can begin to develop a quantitative hierarchy among memory systems in which stimulus-response memories and semantic associations provide no disambiguation while the episodic memory system provides the most flexible
On the entropy of a hidden Markov process.
Jacquet, Philippe; Seroussi, Gadiel; Szpankowski, Wojciech
2008-05-01
We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order.
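The identity behind the abstract — the entropy rate of the HMP equals the top Lyapunov exponent of a product of random matrices — also suggests a direct simulation estimate: run the normalized forward recursion along a long simulated output sequence and average the log normalizers. The sketch below does this for a binary symmetric channel with illustrative parameters `p` (input-chain flip probability) and `eps` (crossover probability); it is a Monte Carlo estimate, not the paper's Taylor approximation.

```python
import math
import random

random.seed(1)

p = 0.3     # Markov input: probability of flipping between states 0 and 1
eps = 0.1   # binary symmetric channel crossover probability
P = [[1 - p, p], [p, 1 - p]]          # hidden-chain transition matrix
E = [[1 - eps, eps], [eps, 1 - eps]]  # emission probabilities E[state][obs]

n = 200_000
x = 0
loglik = 0.0
alpha = [0.5, 0.5]   # forward probabilities, started at the stationary law
for _ in range(n):
    x = x if random.random() > p else 1 - x        # step the hidden chain
    y = x if random.random() > eps else 1 - x      # noisy observation
    # Forward recursion (one random-matrix multiplication), then normalize.
    new = [sum(alpha[i] * P[i][j] for i in range(2)) * E[j][y]
           for j in range(2)]
    c = sum(new)                # normalizer = p(y_t | y_1..t-1)
    loglik += math.log(c)
    alpha = [a / c for a in new]

entropy_rate = -loglik / n / math.log(2)   # bits per output symbol
```

The estimate must lie between the conditional entropy of the channel noise, h(eps) ≈ 0.469 bits, and 1 bit, which gives a quick sanity check.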
Extracting Markov Models of Peptide Conformational Dynamics from Simulation Data.
Schultheis, Verena; Hirschberger, Thomas; Carstens, Heiko; Tavan, Paul
2005-07-01
A high-dimensional time series obtained by simulating a complex and stochastic dynamical system (like a peptide in solution) may code an underlying multiple-state Markov process. We present a computational approach to most plausibly identify and reconstruct this process from the simulated trajectory. Using a mixture of normal distributions we first construct a maximum likelihood estimate of the point density associated with this time series and thus obtain a density-oriented partition of the data space. This discretization allows us to estimate the transfer operator as a matrix of moderate dimension with sufficient statistics. A nonlinear dynamics involving that matrix and, alternatively, a deterministic coarse-graining procedure are employed to construct respective hierarchies of Markov models, from which the model most plausibly mapping the generating stochastic process is selected by consideration of certain observables. Within both procedures the data are classified in terms of prototypical points, the conformations, marking the various Markov states. As a typical example, the approach is applied to analyze the conformational dynamics of a tripeptide in solution. The corresponding high-dimensional time series has been obtained from an extended molecular dynamics simulation.
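Once the trajectory has been discretized into states, the core step is estimating the transfer operator as a row-normalized count matrix. The sketch below generates a stand-in discretized trajectory from a known 3-state chain (the matrix `T_true` and trajectory length are illustrative assumptions, not values from the paper) and recovers the transition matrix by counting.

```python
import random

random.seed(2)

# Stand-in for a discretized MD trajectory: a 3-state jump process whose
# true transition matrix we then try to recover by counting transitions.
T_true = [[0.90, 0.08, 0.02],
          [0.05, 0.90, 0.05],
          [0.02, 0.08, 0.90]]

def step(s):
    r, acc = random.random(), 0.0
    for j, prob in enumerate(T_true[s]):
        acc += prob
        if r < acc:
            return j
    return len(T_true) - 1

traj = [0]
for _ in range(100_000):
    traj.append(step(traj[-1]))

# Maximum-likelihood estimate of the transfer operator:
# count transitions between consecutive states, then normalize each row.
counts = [[0] * 3 for _ in range(3)]
for a, b in zip(traj, traj[1:]):
    counts[a][b] += 1
T_est = [[c / sum(row) for c in row] for row in counts]
```

With 10^5 steps the row-normalized counts reproduce the generating matrix to within a couple of percent, which is the sense in which the discretization yields "sufficient statistics" for the operator.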
"Adding" algorithm for the Markov chain formalism for radiation transfer
International Nuclear Information System (INIS)
Esposito, L.W.
1979-01-01
The Markov chain radiative transfer method of Esposito and House has been shown to be both efficient and accurate for calculation of the diffuse reflection from a homogeneous scattering planetary atmosphere. The use of a new algorithm similar to the "adding" formula of Hansen and Travis extends the application of this formalism to an arbitrarily deep atmosphere. The basic idea of this algorithm is to consider a preceding calculation as a single state of a new Markov chain. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that for a doubling calculation for a homogeneous atmosphere, but for a non-homogeneous atmosphere the new method is considerably faster than the standard "adding" routine. As with the standard "adding" method, the information on the internal radiation field is lost during the calculation. This method retains the advantage of the earlier Markov chain method that the time required is relatively insensitive to the number of illumination angles or observation angles for which the diffuse reflection is calculated. A technical write-up giving fuller details of the algorithm and a sample code are available from the author.
Hidden Markov Model for Stock Selection
Directory of Open Access Journals (Sweden)
Nguyet Nguyen
2015-10-01
Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for the four macroeconomic variables: inflation (consumer price index, CPI), industrial production index (INDPRO), stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had similar regimes to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
Stoll, Richard; Cappel, I; Jablonski-Momeni, Anahita; Pieper, K; Stachniss, V
2007-01-01
This study evaluated the long-term survival of inlays and partial crowns made of IPS Empress. For this purpose, the patient data of a prospective study were examined in retrospect and statistically evaluated. All of the inlays and partial crowns fabricated of IPS-Empress within the Department of Operative Dentistry at the School of Dental Medicine of Philipps University, Marburg, Germany were systematically recorded in a database between 1991 and 2001. The corresponding patient files were revised at the end of 2001. The information gathered in this way was used to evaluate the survival of the restorations using the method described by Kaplan and Meier. A total of n = 1624 restorations were fabricated of IPS-Empress within the observation period. During this time, n = 53 failures were recorded. The remaining restorations were observed for a mean period of 18.77 months. The failures were mainly attributed to fractures, endodontic problems and cementation errors. The last failure was established after 82 months. At this stage, a cumulative survival probability of p = 0.81 was registered with a standard error of 0.04. At this time, n = 30 restorations were still being observed. Restorations on vital teeth (n = 1588) showed 46 failures, with a cumulative survival probability of p = 0.82. Restorations performed on non-vital teeth (n = 36) showed seven failures, with a cumulative survival probability of p = 0.53. Highly significant differences were found between the two groups (p < 0.0001) in a log-rank test. No significant difference (p = 0.41) was found between the patients treated by students (n = 909) and those treated by qualified dentists (n = 715). Likewise, no difference (p = 0.13) was established between the restorations seated with a high viscosity cement (n = 295) and those placed with a low viscosity cement (n = 1329).
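The Kaplan-Meier method used above handles exactly this mix of observed failures and still-in-service (censored) restorations. A minimal sketch of the estimator, on a tiny invented data set (the times and censoring flags below are illustrative, not the study's data):

```python
# (time, event) pairs: event=1 is a failure, event=0 a censored observation
# (e.g. a restoration still in service at the end of follow-up).
data = [(5, 1), (8, 0), (12, 1), (12, 1), (20, 0), (25, 1), (30, 0), (30, 0)]

def kaplan_meier(data):
    """Return [(t, S(t))] evaluated at each distinct failure time."""
    data = sorted(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            # Multiply by the conditional survival past time t.
            surv *= 1 - deaths / at_risk
            out.append((t, surv))
        # Skip all records tied at this time.
        while i < len(data) and data[i][0] == t:
            i += 1
    return out

curve = kaplan_meier(data)
```

For the toy data the curve steps at t = 5, 12 and 25, giving S = 7/8, 7/12 and 7/18: censored records leave the curve unchanged but shrink the number at risk, just as the n = 30 restorations still under observation do in the study.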
Chen, Ming; Hu, Xiang-long; Wu, Zu-xing
2010-06-01
To observe changes of the partial oxygen pressure in the deep tissues along the Large Intestine Meridian (LIM) during acupuncture stimulation, so as to reveal the characteristics of energy metabolism in the tissues along the LIM. Thirty-one healthy volunteer subjects were enlisted in the present study. Partial oxygen pressure (POP) in the tissues (at a depth of about 1.5 cm) of acupoints Binao (LI 14), Shouwuli (LI 13), Shousanli (LI 10), 2 non-acupoints [the midpoints between Quchi (LI 11) and LI 14, and between Yangxi (LI 5) and LI 11] of the LIM, and 10 non-meridian points, 1.5-2.0 cm lateral and medial to each of the tested points of the LIM was detected before, during and after electroacupuncture (EA) stimulation of Hegu (LI 4) by using a tissue oxygen tension needle-like sensor. In normal condition, the POP values in the deep tissues along the LIM were significantly higher than those of the non-meridian control points on its bilateral sides. During and after EA of Hegu (LI 4), the POP levels decreased significantly in the deep tissues along the LIM in comparison with pre-EA (P < 0.05). POP is significantly higher in the deep tissues along the LIM of healthy subjects under normal conditions, which can be downregulated by EA of Hegu (LI 4), suggesting an increase of both the utilization rate of oxygen and energy metabolism after EA.
Zipf exponent of trajectory distribution in the hidden Markov model
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and non-power asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
Zipf exponent of trajectory distribution in the hidden Markov model
International Nuclear Information System (INIS)
Bochkarev, V V; Lerner, E Yu
2014-01-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and non-power asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different
Performance Modeling of Communication Networks with Markov Chains
Mo, Jeonghoon
2010-01-01
This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMCs) and continuous time Markov chains (CTMCs). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
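The balance-equation technique mentioned above amounts to solving πP = π together with the normalization Σπᵢ = 1. A self-contained sketch for a small DTMC (the matrix `P` is an illustrative birth-death chain, not an example from the book):

```python
# Solve pi P = pi, sum(pi) = 1 for a small DTMC by Gaussian elimination
# on the balance equations; no external libraries needed.
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

def stationary(P):
    n = len(P)
    # Balance equations: (P^T - I) pi = 0; replace the last (redundant)
    # equation with the normalization sum(pi) = 1.
    A = [[P[j][i] - (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    b = [0.0] * n
    A[n - 1] = [1.0] * n
    b[n - 1] = 1.0
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

pi = stationary(P)
```

For this chain detailed balance gives π = (1/4, 1/2, 1/4); once π is known, performance metrics (throughput, mean queue length, blocking probability) follow as expectations under π.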
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
Markov and mixed models with applications
DEFF Research Database (Denmark)
Mortensen, Stig Bousgaard
This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of the drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the eect of a drug is a broader area since drugs may affect...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically the Markov models considered for sleep states are closely related to the PK models based on SDEs as both models share the Markov property. When the models...
Consistent Estimation of Partition Markov Models
Directory of Open Access Journals (Sweden)
Jesús E. García
2017-04-01
Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated. In order to answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class, with a minimal number of parts, to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
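The core idea — merging states whose estimated transition rows coincide — can be sketched with a naive version of the strategy. Below, two states share the same transition law by construction, and a greedy merge on the empirical rows recovers the partition. The transition matrix, the tolerance 0.05, and the total-variation merge criterion are illustrative assumptions, not the paper's consistent estimator.

```python
import random

random.seed(3)

# States 'a' and 'b' deliberately share the same transition law,
# so the true partition is L = {{'a','b'}, {'c'}}.
T = {'a': {'a': 0.2, 'b': 0.3, 'c': 0.5},
     'b': {'a': 0.2, 'b': 0.3, 'c': 0.5},
     'c': {'a': 0.6, 'b': 0.2, 'c': 0.2}}
states = sorted(T)

def step(s):
    r, acc = random.random(), 0.0
    for j in states:
        acc += T[s][j]
        if r < acc:
            return j
    return states[-1]

# A size-n realization of the process.
x = ['a']
for _ in range(50_000):
    x.append(step(x[-1]))

# Empirical transition rows from transition counts.
counts = {s: {t: 0 for t in states} for s in states}
for a, b in zip(x, x[1:]):
    counts[a][b] += 1
rows = {s: tuple(counts[s][t] / sum(counts[s].values()) for t in states)
        for s in states}

# Greedy merge: two states land in the same part when their estimated
# rows are close in total-variation distance.
parts = []
for s in states:
    for part in parts:
        rep = rows[part[0]]
        if sum(abs(p - q) for p, q in zip(rows[s], rep)) / 2 < 0.05:
            part.append(s)
            break
    else:
        parts.append([s])
```

With enough data the partition stabilizes at {{'a','b'},{'c'}}, illustrating the paper's consistency claim that L is eventually retrieved as n grows.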
Accelerated decomposition techniques for large discounted Markov decision processes
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-12-01
Many hierarchical techniques to solve large Markov decision processes (MDPs) are based on the partition of the state space into strongly connected components (SCCs) that can be classified into some levels. In each level, smaller problems named restricted MDPs are solved, and then these partial solutions are combined to obtain the global solution. In this paper, we first propose a novel algorithm, which is a variant of Tarjan's algorithm that simultaneously finds the SCCs and their belonging levels. Second, a new definition of the restricted MDPs is presented to ameliorate some hierarchical solutions in discounted MDPs using value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experiment results are presented to illustrate the benefit of the proposed decomposition algorithms.
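The decomposition step above can be illustrated with a plain Tarjan SCC pass followed by a level assignment on the condensation. The graph below and the level convention (absorbing SCCs at level 0, each SCC one above the deepest SCC it can reach) are illustrative assumptions; the paper's variant computes the levels inside a single Tarjan pass.

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: returns SCCs in reverse topological order
    (every SCC is emitted after all SCCs reachable from it)."""
    index, low = {}, {}
    stack, on_stack = [], set()
    sccs, counter = [], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            dfs(v)
    return sccs

def scc_levels(graph, sccs):
    """Level of an SCC = 1 + max level of the SCCs it points to;
    SCCs with no outgoing edges get level 0. Because Tarjan emits SCCs
    in reverse topological order, successors are already assigned."""
    comp_of = {v: i for i, c in enumerate(sccs) for v in c}
    level = [0] * len(sccs)
    for i, c in enumerate(sccs):
        for v in c:
            for w in graph[v]:
                if comp_of[w] != i:
                    level[i] = max(level[i], level[comp_of[w]] + 1)
    return level

graph = {'a': ['b'], 'b': ['a', 'c'], 'c': ['d'], 'd': ['c', 'e'], 'e': []}
sccs = tarjan_scc(graph)
levels = scc_levels(graph, sccs)
```

Solving the restricted MDPs level by level, starting from level 0, then lets the partial solutions be combined into the global solution as the abstract describes.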
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints... In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
Inhomogeneous Markov Models for Describing Driving Patterns
DEFF Research Database (Denmark)
Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel
2017-01-01
. Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines....
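A two-state sketch of such an inhomogeneous model: the probability of starting a trip depends on the hour of day, and simulating many days reproduces a diurnal usage profile. The rush-hour windows, the per-minute probabilities, and the constant trip-ending rate are illustrative assumptions, not fitted values (the paper fits the time-varying probabilities with B-splines).

```python
import random

random.seed(4)

# Hour-of-day dependent per-minute probability of starting a trip,
# with morning and evening rush-hour peaks; purely illustrative numbers.
def p_start(hour):
    return 0.02 if 7 <= hour < 9 or 17 <= hour < 19 else 0.002

def p_end(hour):
    return 0.05   # constant here for simplicity; could also vary by hour

def simulate_day():
    state = 0                       # 0 = parked, 1 = driving
    minutes_driving = [0] * 24
    for minute in range(24 * 60):
        hour = minute // 60
        if state == 0 and random.random() < p_start(hour):
            state = 1
        elif state == 1 and random.random() < p_end(hour):
            state = 0
        minutes_driving[hour] += state
    return minutes_driving

# Average minutes driven per hour over many simulated days.
days = 500
avg = [0.0] * 24
for _ in range(days):
    day = simulate_day()
    avg = [a + d / days for a, d in zip(avg, day)]

peak = avg.index(max(avg))
```

The simulated profile peaks in the rush-hour windows, which is exactly the diurnal variation the homogeneous Markov model cannot capture.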
Inhomogeneous Markov Models for Describing Driving Patterns
DEFF Research Database (Denmark)
Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel
. Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines.
Detecting Structural Breaks using Hidden Markov Models
DEFF Research Database (Denmark)
Ntantamis, Christos
Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another....... The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), that permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
Markov processes an introduction for physical scientists
Gillespie, Daniel T
1991-01-01
Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level. Key Features: * A self-contained, pragmatic exposition of the needed elements of random variable theory * Logically integrated derivations of the Chapman-Kolmogorov e
Prediction of Annual Rainfall Pattern Using Hidden Markov Model ...
African Journals Online (AJOL)
ADOWIE PERE
Hidden Markov model is very influential in stochastic world because of its ... the earth from the clouds. The usual ... Rainfall modelling and ... Markov Models have become popular tools ... environment sciences, University of Jos, plateau state,.
Extending Markov Automata with State and Action Rewards
Guck, Dennis; Timmer, Mark; Blom, Stefan; Bertrand, N.; Bortolussi, L.
This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are
Hidden-Markov-Model Analysis Of Telemanipulator Data
Hannaford, Blake; Lee, Paul
1991-01-01
Mathematical model and procedure based on hidden-Markov-model concept undergoing development for use in analysis and prediction of outputs of force and torque sensors of telerobotic manipulators. In model, overall task broken down into subgoals, and transition probabilities encode ease with which operator completes each subgoal. Process portion of model encodes task-sequence/subgoal structure, and probability-density functions for forces and torques associated with each state of manipulation encode sensor signals that one expects to observe at subgoal. Parameters of model constructed from engineering knowledge of task.
Stochastic modeling of pitting corrosion in underground pipelines using Markov chains
Energy Technology Data Exchange (ETDEWEB)
Velazquez, J.C.; Caleyo, F.; Hallen, J.M.; Araujo, J.E. [Instituto Politecnico Nacional (IPN), Mexico D.F. (Mexico). Escuela Superior de Ingenieria Quimica e Industrias Extractivas (ESIQIE); Valor, A. [Universidad de La Habana, La Habana (Cuba)
2009-07-01
A non-homogenous, linear growth (pure birth) Markov process, with discrete states in continuous time, has been used to model external pitting corrosion in underground pipelines. The transition probability function for the pit depth is obtained from the analytical solution of the forward Kolmogorov equations for this process. The parameters of the transition probability function between depth states can be identified from the observed time evolution of the mean of the pit depth distribution. Monte Carlo simulations were used to predict the time evolution of the mean value of the pit depth distribution in soils with different physicochemical characteristics. The simulated distributions have been used to create an empirical Markov-chain-based stochastic model for predicting the evolution of pitting corrosion from the observed properties of the soil in contact with the pipeline. Real-life case studies, involving simulated and measured pit depth distributions are presented to illustrate the application of the proposed Markov chains model. (author)
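A non-homogeneous linear pure-birth process like the one above is easy to simulate directly: from depth state n the jump intensity is n·λ(t), and for a linear-growth process the mean satisfies E[N(t)] = n₀·exp(∫₀ᵗ λ(s) ds). The intensity function, time horizon, and grid-based thinning below are illustrative assumptions, not the paper's calibrated soil model.

```python
import random

random.seed(5)

# Nonhomogeneous linear pure-birth process: from depth state n the jump
# intensity is n * lam(t), with lam(t) decaying as pitting slows down.
def lam(t):
    return 0.5 / (1.0 + t)          # illustrative intensity, units 1/yr

def simulate(t_end, n0=1, dt=0.01):
    # Simple thinning on a small time grid; adequate for a sketch since
    # n * lam(t) * dt stays well below 1 here.
    n, t = n0, 0.0
    while t < t_end:
        if random.random() < n * lam(t) * dt:
            n += 1
        t += dt
    return n

depths = [simulate(10.0) for _ in range(2000)]
mean_depth = sum(depths) / len(depths)
```

For this λ(t), ∫₀¹⁰ λ = 0.5·ln 11, so the mean depth state at t = 10 should be close to √11 ≈ 3.32; matching this simulated mean growth to observed mean pit depths is how the transition-function parameters are identified in the abstract.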
Perturbation theory for Markov chains via Wasserstein distance
Rudolf, Daniel; Schweizer, Nikolaus
2017-01-01
Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. We prove powerful and flexible bounds on the distance of the nth step distributions of two Markov chains
Quantum Enhanced Inference in Markov Logic Networks.
Wittek, Peter; Gogolin, Christian
2017-04-19
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
Markov Random Fields on Triangle Meshes
DEFF Research Database (Denmark)
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process label...
A Martingale Decomposition of Discrete Markov Chains
DEFF Research Database (Denmark)
Hansen, Peter Reinhard
We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...
Renewal characterization of Markov modulated Poisson processes
Directory of Open Access Journals (Sweden)
Marcel F. Neuts
1989-01-01
Full Text Available A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λi whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P={i:λi>0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t),M(t)]. Relevance to the lumpability of J(t) is also studied.
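The defining mechanism of an MMPP can be sketched with a two-phase example: a CTMC J(t) alternates between phases, and counts accumulate with the phase-dependent Poisson intensity. The rates `q01`, `q10` and intensities `lam` below are illustrative assumptions; the long-run arrival rate should equal Σᵢ πᵢ λᵢ under the stationary law π of J(t).

```python
import math
import random

random.seed(6)

# Two-phase MMPP: J(t) switches 0 -> 1 at rate q01 and 1 -> 0 at rate q10;
# M(t) jumps with Poisson intensity lam[i] while J(t) = i.
q01, q10 = 1.0, 2.0
lam = [0.5, 5.0]

def poisson(mu):
    # Knuth's multiplication method; fine for moderate mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def simulate(t_end):
    t, j, arrivals = 0.0, 0, 0
    while t < t_end:
        rate_out = q01 if j == 0 else q10
        dwell = random.expovariate(rate_out)      # sojourn in phase j
        stay = min(dwell, t_end - t)
        # Conditional on the dwell, the count increment is Poisson.
        arrivals += poisson(lam[j] * stay)
        t += stay
        j = 1 - j
    return arrivals

runs, horizon = 200, 100.0
rate = sum(simulate(horizon) for _ in range(runs)) / (runs * horizon)
```

Here π = (2/3, 1/3), so the long-run rate is 2/3·0.5 + 1/3·5 = 2.0 arrivals per unit time; the renewal questions in the abstract ask when such a modulated process collapses to an ordinary renewal process.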
Evaluation of Usability Utilizing Markov Models
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
Bayesian analysis for reversible Markov chains
Diaconis, P.; Rolles, S.W.W.
2006-01-01
We introduce a natural conjugate prior for the transition matrix of a reversible Markov chain. This allows estimation and testing. The prior arises from random walk with reinforcement in the same way the Dirichlet prior arises from Pólya’s urn. We give closed form normalizing constants, a simple
Bisimulation and Simulation Relations for Markov Chains
Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.
2006-01-01
Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.
Discounted Markov games : generalized policy iteration method
Wal, van der J.
1978-01-01
In this paper, we consider two-person zero-sum discounted Markov games with finite state and action spaces. We show that the Newton-Raphson or policy iteration method as presented by Pollatschek and Avi-Itzhak does not necessarily converge, contradicting a proof of Rao, Chandrasekaran, and Nair.
Hidden Markov Models for Human Genes
DEFF Research Database (Denmark)
Baldi, Pierre; Brunak, Søren; Chauvin, Yves
1997-01-01
We analyse the sequential structure of human genomic DNA by hidden Markov models. We apply models of widely different design: conventional left-right constructs and models with a built-in periodic architecture. The models are trained on segments of DNA sequences extracted such that they cover com...
Markov Trends in Macroeconomic Time Series
R. Paap (Richard)
1997-01-01
textabstractMany macroeconomic time series are characterised by long periods of positive growth, expansion periods, and short periods of negative growth, recessions. A popular model to describe this phenomenon is the Markov trend, which is a stochastic segmented trend where the slope depends on the
Optimal dividend distribution under Markov regime switching
Jiang, Z.; Pistorius, M.
2012-01-01
We investigate the problem of optimal dividend distribution for a company in the presence of regime shifts. We consider a company whose cumulative net revenues evolve as a Brownian motion with positive drift that is modulated by a finite state Markov chain, and model the discount rate as a
Revisiting Weak Simulation for Substochastic Markov Chains
DEFF Research Database (Denmark)
Jansen, David N.; Song, Lei; Zhang, Lijun
2013-01-01
of the logic PCTL\\x, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...
Fracture Mechanical Markov Chain Crack Growth Model
DEFF Research Database (Denmark)
Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard
1991-01-01
propagation process can be described by a discrete space Markov theory. The model is applicable to deterministic as well as to random loading. Once the model parameters for a given material have been determined, the results can be used for any structure as soon as the geometrical function is known....
Multi-dimensional quasitoeplitz Markov chains
Directory of Open Access Journals (Sweden)
Alexander N. Dudin
1999-01-01
Full Text Available This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP-input, which operate in synchronous random environment.
Markov chains with quasitoeplitz transition matrix
Directory of Open Access Journals (Sweden)
Alexander M. Dukhovny
1989-01-01
Full Text Available This paper investigates a class of Markov chains which are frequently encountered in various applications (e.g. queueing systems, dams and inventories with feedback). Generating functions of transient and steady state probabilities are found by solving a special Riemann boundary value problem on the unit circle. A criterion of ergodicity is established.
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
Noise can speed convergence in Markov chains.
Franzke, Brandon; Kosko, Bart
2011-10-01
A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.
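The machinery can be sketched numerically. The following is a minimal illustration, not the authors' construction: it builds a lazy Ehrenfest urn chain (the lazy step removes the period-2 behaviour of the pure urn so the density actually converges), iterates a state density toward equilibrium, and shows where a noise perturbation of the density would be injected. The chain size and noise level are arbitrary choices.

```python
import numpy as np

def lazy_ehrenfest(n):
    """Lazy Ehrenfest urn on states 0..n: with prob 1/2 stay put,
    otherwise move one ball between the two urns."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        P[i, i] += 0.5
        if i > 0:
            P[i, i - 1] += 0.5 * i / n
        if i < n:
            P[i, i + 1] += 0.5 * (1 - i / n)
    return P

def stationary(P):
    """Normalised left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def iterate(P, p0, steps, noise=0.0, seed=0):
    """Evolve a state density; optionally perturb it each step and
    renormalise (a crude stand-in for the paper's noise injection)."""
    rng = np.random.default_rng(seed)
    pi = stationary(P)
    p = p0.copy()
    dists = []
    for _ in range(steps):
        p = p @ P
        if noise > 0:
            p = np.clip(p + noise * rng.uniform(-1, 1, p.size), 0, None)
            p /= p.sum()
        dists.append(np.abs(p - pi).sum())  # L1 distance to equilibrium
    return dists

n = 10
P = lazy_ehrenfest(n)
p0 = np.zeros(n + 1)
p0[0] = 1.0                      # start far from equilibrium
plain = iterate(P, p0, 200)
noisy = iterate(P, p0, 200, noise=1e-3)
```

The stationary distribution of the (lazy) urn is binomial(n, 1/2), which serves as a consistency check; comparing the two distance traces is how one would look for the kind of noise benefit the theorem describes.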
Model Checking Infinite-State Markov Chains
Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.
2004-01-01
In this paper algorithms for model checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains of so-called quasi birth-death type are developed. In doing so we extend the applicability of CSL model checking beyond the recently proposed case for finite-state…
Model Checking Markov Chains: Techniques and Tools
Zapreev, I.S.
2008-01-01
This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking…
Nonlinearly perturbed semi-Markov processes
Silvestrov, Dmitrii
2017-01-01
The book presents new methods of asymptotic analysis for nonlinearly perturbed semi-Markov processes with a finite phase space. These methods are based on special time-space screening procedures for sequential phase space reduction of semi-Markov processes combined with the systematical use of operational calculus for Laurent asymptotic expansions. Effective recurrent algorithms are composed for getting asymptotic expansions, without and with explicit upper bounds for remainders, for power moments of hitting times, stationary and conditional quasi-stationary distributions for nonlinearly perturbed semi-Markov processes. These results are illustrated by asymptotic expansions for birth-death-type semi-Markov processes, which play an important role in various applications. The book will be a useful contribution to the continuing intensive studies in the area. It is an essential reference for theoretical and applied researchers in the field of stochastic processes and their applications that will cont...
Quantum Enhanced Inference in Markov Logic Networks
Wittek, Peter; Gogolin, Christian
2017-04-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
Markov chain of distances between parked cars
International Nuclear Information System (INIS)
Seba, Petr
2008-01-01
We describe the distribution of distances between parked cars as a solution of certain Markov processes and show that its solution is obtained with the help of a distributional fixed point equation. Under certain conditions the process is solved explicitly. The resulting probability density is compared with the actual parking data measured in the city. (fast track communication)
Continuity Properties of Distances for Markov Processes
DEFF Research Database (Denmark)
Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand
2014-01-01
In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...
Model Checking Structured Infinite Markov Chains
Remke, Anne Katharina Ingrid
2008-01-01
In the past, probabilistic model checking has mostly been restricted to finite-state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model-checking algorithms for two special…
Efficient Modelling and Generation of Markov Automata
Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the…
A Metrized Duality Theorem for Markov Processes
DEFF Research Database (Denmark)
Kozen, Dexter; Mardare, Radu Iulian; Panangaden, Prakash
2014-01-01
We extend our previous duality theorem for Markov processes by equipping the processes with a pseudometric and the algebras with a notion of metric diameter. We are able to show that the isomorphisms of our previous duality theorem become isometries in this quantitative setting. This opens the wa...
Solt, Andras S; Bostock, Mark J; Shrestha, Binesh; Kumar, Prashant; Warne, Tony; Tate, Christopher G; Nietlispach, Daniel
2017-11-27
A complex conformational energy landscape determines G-protein-coupled receptor (GPCR) signalling via intracellular binding partners (IBPs), e.g., Gs and β-arrestin. Using 13C methyl methionine NMR for the β1-adrenergic receptor, we identify ligand efficacy-dependent equilibria between an inactive and pre-active state and, in complex with a Gs-mimetic nanobody, between more and less active ternary complexes. Formation of a basal activity complex through ligand-free nanobody-receptor interaction reveals structural differences on the cytoplasmic receptor side compared to the full agonist-bound nanobody-coupled form, suggesting that ligand-induced variations in G-protein interaction underpin partial agonism. Significant differences in receptor dynamics are observed, ranging from rigid nanobody-coupled states to extensive μs-to-ms timescale dynamics when bound to a full agonist. We suggest that the mobility of the full agonist-bound form primes the GPCR to couple to IBPs. On formation of the ternary complex, ligand efficacy determines the quality of the interaction between the rigidified receptor and an IBP and consequently the signalling level.
Reliability analysis and prediction of mixed mode load using Markov Chain Model
International Nuclear Information System (INIS)
Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.
2014-01-01
The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure, in order to increase the design life, eliminate or reduce the likelihood of failures, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle and low rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data close to the field data with a minimal percentage of error and, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading.
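A two-state chain of this kind reduces to elementary matrix algebra. The sketch below uses placeholder transition probabilities (not values from the paper) to show how state probabilities evolve over load cycles and converge to a stationary split between bending- and torsion-dominated damage.

```python
import numpy as np

# Hypothetical two-state damage chain: state 0 = bending-dominated,
# state 1 = torsion-dominated. Probabilities are illustrative only.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

p = np.array([1.0, 0.0])      # all probability mass on the bending state
for _ in range(50):           # evolve over 50 load cycles
    p = p @ P

# stationary distribution: normalised left eigenvector for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
```

For this matrix the stationary split is (4/7, 3/7), which the cycle-by-cycle iteration reaches quickly because the chain's second eigenvalue is only 0.3.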
Directory of Open Access Journals (Sweden)
Zhi-Hui Deng
2015-12-01
Full Text Available AIM: To observe the efficacy and tolerance of 0.1% bromfenac sodium hydrate ophthalmic solution as a partial substitute for glucocorticoid after laser subepithelial keratomileusis (LASEK). METHODS: A total of 180 patients (180 eyes) who received LASEK were selected and divided into a study group and a control group according to medication. The study group received 0.1% bromfenac sodium hydrate ophthalmic solution combined with glucocorticoid; the control group received glucocorticoid alone. Changes in visual acuity and intraocular pressure (IOP) in the two groups were recorded before and after surgery, and the occurrence of diffuse lamellar keratitis (DLK) after surgery was observed. RESULTS: At 1mo after surgery, visual acuity of the study group was 1.25±0.22 versus 0.97±0.23 in the control group (P<0.05); at later follow-up the difference was not significant (P>0.05). At 1 and 3mo after surgery, IOP of the study group was 12.29±2.71 and 12.67±2.33mmHg versus 14.26±2.65 and 14.56±2.61mmHg in the control group, a statistically significant difference (P<0.05); at later follow-up the difference was not significant (P>0.05). In terms of tolerance, the control group had 4 cases (4 eyes) requiring IOP-lowering medication, while the study group had no such cases. The proportions at DLK levels 0, 1 and 2 were 93.33%, 6.67% and 0% in the study group versus 75.56%, 17.78% and 6.67% in the control group, and the differences were significant (P<0.05). CONCLUSION: 0.1% bromfenac sodium hydrate ophthalmic solution can efficiently stabilize the patient's IOP after LASEK, with better visual acuity and visual function, fewer complications, and favorable tolerance. It is worthy of promotion.
Study of the seismic activity in central Ionian Islands via semi-Markov modelling
Pertsinidou, Christina Elisavet; Tsaklidis, George; Papadimitriou, Eleftheria
2017-06-01
The seismicity of the central Ionian Islands (M ≥ 5.2, 1911-2014) is studied via a semi-Markov chain, which is investigated in terms of the destination probabilities (occurrence probabilities). The interevent times are considered to follow geometric (in which case the semi-Markov model reduces to a Markov model) or Pareto distributions. The study of the destination probabilities is useful for forecasting purposes because they can provide the most probable earthquake magnitude and occurrence time. Using the first half of the data sample for the estimation procedure and the other half for forecasting purposes, it is found that the time windows obtained from the destination probabilities include 72.9% of the observed earthquake occurrence times (for all magnitudes) and 71.4% of the larger (M ≥ 6.0) ones.
Discrete-time semi-Markov modeling of human papillomavirus persistence
Mitchell, C. E.; Hudgens, M. G.; King, C. C.; Cu-Uvin, S.; Lo, Y.; Rompalo, A.; Sobel, J.; Smith, J. S.
2011-01-01
Multi-state modeling is often employed to describe the progression of a disease process. In epidemiological studies of certain diseases, the disease state is typically only observed at periodic clinical visits, producing incomplete longitudinal data. In this paper we consider fitting semi-Markov models to estimate the persistence of human papillomavirus (HPV) type-specific infection in studies where the status of HPV type(s) is assessed periodically. Simulation study results are presented indicating the semi-Markov estimator is more accurate than an estimator currently used in the HPV literature. The methods are illustrated using data from the HIV Epidemiology Research Study (HERS). PMID:21538985
Ilić, L.; Kuzmanoski, M.; Kolarž, P.; Nina, A.; Srećković, V.; Mijić, Z.; Bajčetić, J.; Andrić, M.
2018-06-01
Measurements of atmospheric parameters were carried out during the partial solar eclipse (51% coverage of solar disc) observed in Belgrade on 20 March 2015. The measured parameters included height of the planetary boundary layer (PBL), meteorological parameters, solar radiation, surface ozone and air ions, as well as Very Low Frequency (VLF, 3-30 kHz) and Low Frequency (LF, 30-300 kHz) signals to detect low-ionospheric plasma perturbations. The observed decrease of global solar and UV-B radiation was 48%, similar to the solar disc coverage. Meteorological parameters showed similar behavior at two measurement sites, with different elevations and different measurement heights. Air temperature change due to solar eclipse was more pronounced at the lower measurement height, showing a decrease of 2.6 °C, with 15-min time delay relative to the eclipse maximum. However, at the other site temperature did not decrease; its morning increase ceased with the start of the eclipse, and continued after the eclipse maximum. Relative humidity at both sites remained almost constant until the eclipse maximum and then decreased as the temperature increased. The wind speed decreased and reached minimum 35 min after the last contact. The eclipse-induced decrease of PBL height was about 200 m, with minimum reached 20 min after the eclipse maximum. Although dependent on UV radiation, surface ozone concentration did not show the expected decrease, possibly due to less significant influence of photochemical reactions at the measurement site and decline of PBL height. Air-ion concentration decreased during the solar eclipse, with minimum almost coinciding with the eclipse maximum. Additionally, the referential Line-of-Sight (LOS) radio link was set in the area of Belgrade, using the carrier frequency of 3 GHz. Perturbation of the receiving signal level (RSL) was observed on March 20, probably induced by the solar eclipse. Eclipse-related perturbations in ionospheric D-region were detected
Markov Random Field Surface Reconstruction
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Bærentzen, Jakob Andreas; Larsen, Rasmus
2010-01-01
…) and knowledge about data (the observation model) in an orthogonal fashion. Local models that account for both scene-specific knowledge and physical properties of the scanning device are described. Furthermore, how the optimal distance field can be computed is demonstrated using conjugate gradients, sparse...
Logofet, D O; Evstigneev, O I; Aleĭnikov, A A; Morozova, A O
2014-01-01
A homogeneous Markov chain of three aggregated states "pond--swamp--wood" is proposed as a model of cyclic zoogenic successions caused by beaver (Castor fiber L.) life activity in a forest biogeocoenosis. To calibrate the chain transition matrix, the data have appeared sufficient that were gained from field studies undertaken in "Bryanskii Les" Reserve in the years of 2002-2008. Major outcomes of the calibrated model ensue from the formulae of finite homogeneous Markov chain theory: the stationary probability distribution of states, thematrix (T) of mean first passage times, and the mean durations (M(j)) of succession stages. The former illustrates the distribution of relative areas under succession stages if the current trends and transition rates of succession are conserved in the long-term--it has appeared close to the observed distribution. Matrix T provides for quantitative characteristics of the cyclic process, specifying the ranges the experts proposed for the duration of stages in the conceptual scheme of succession. The calculated values of M(j) detect potential discrepancies between empirical data, the expert knowledge that summarizes the data, and the postulates accepted in the mathematical model. The calculated M2 value falls outside the expert range, which gives a reason to doubt the validity of expert estimation proposed, the aggregation mode chosen for chain states, or/and the accuracy-of data available, i.e., to draw certain "lessons" from partially successful calibration. Refusal to postulate the time homogeneity or the Markov property of the chain is also discussed among possible ways to improve the model.
Asymptotic behavior of Bayes estimators for hidden Markov models with application to ion channels
de Gunst, M.C.M.; Shcherbakova, O.V.
2008-01-01
In this paper we study the asymptotic behavior of Bayes estimators for hidden Markov models as the number of observations goes to infinity. The theorem that we prove is similar to the Bernstein-von Mises theorem on the asymptotic behavior of the posterior distribution for the case of independent…
Activity recognition using semi-Markov models on real world smart home datasets
van Kasteren, T.L.M.; Englebienne, G.; Kröse, B.J.A.
2010-01-01
Accurately recognizing human activities from sensor data recorded in a smart home setting is a challenging task. Typically, probabilistic models such as the hidden Markov model (HMM) or conditional random fields (CRF) are used to map the observed sensor data onto the hidden activity states. A…
Nuclear security assessment with Markov model approach
International Nuclear Information System (INIS)
Suzuki, Mitsutoshi; Terao, Norichika
2013-01-01
Nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidents are initiated by malicious and intentional acts, expert judgment and Bayesian updating are used to estimate scenario and initiation likelihood, and it is assumed that a Markov model derived from a stochastic process can be applied to the incident sequence. Both an unauthorized intrusion, as a Design Basis Threat (DBT), and a stand-off attack, as beyond-DBT, are assumed for hypothetical facilities, and the performance of physical protection and the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important in responding to beyond-DBT incidents. (author)
MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL
Directory of Open Access Journals (Sweden)
Eder Oliveira Abensur
2014-05-01
Full Text Available The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes a multi-criterion non-linear optimization model based on available public data that uses Markov chain and Genetic Algorithm concepts, considering the classic duality of risk versus return while incorporating liquidity costs. The non-linear model was tested using Genetic Algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for developing an efficient and realistic financial portfolio, as it considers several attributes such as risk, return and liquidity.
An interlacing theorem for reversible Markov chains
International Nuclear Information System (INIS)
Grone, Robert; Salamon, Peter; Hoffmann, Karl Heinz
2008-01-01
Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)
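The effect of lumping on the spectrum can be checked directly on a small example. The chain and partition below are illustrative; the lumped matrix is the standard π-weighted aggregation, and the test only verifies a coarse consequence of interlacing, namely that the lumped eigenvalues stay inside the range spanned by the original ones.

```python
import numpy as np

# Reversible birth-death chain on 4 states (detailed balance holds).
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
pi = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0   # stationary distribution

# pi-weighted lumping onto the partition {0,1}, {2,3}
blocks = [[0, 1], [2, 3]]
m = len(blocks)
P_hat = np.zeros((m, m))
for a, A in enumerate(blocks):
    wA = pi[A] / pi[A].sum()                 # conditional weights inside A
    for b, B in enumerate(blocks):
        P_hat[a, b] = wA @ P[np.ix_(A, B)].sum(axis=1)

lam = np.sort(np.linalg.eigvals(P).real)      # [0, 0.25, 0.75, 1]
lam_hat = np.sort(np.linalg.eigvals(P_hat).real)
```

Here the lumped chain has eigenvalues {2/3, 1}, and 2/3 indeed lies between the smallest and the second-largest eigenvalue of the original chain, as an interlacing bound requires.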
Stochastic Dynamics through Hierarchically Embedded Markov Chains.
Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M
2017-02-03
Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects-such as mutations in evolutionary dynamics and a random exploration of choices in social systems-including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
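As a minimal template of the kind of method the handbook covers, here is a random-walk Metropolis sampler for a standard normal target; the step size and sample count are arbitrary illustrative choices, not recommendations from the book.

```python
import math
import random

def metropolis(logp, x0, step, n, seed=0):
    """Random-walk Metropolis: propose y = x + U(-step, step) and
    accept with probability min(1, p(y)/p(x))."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        # log-domain acceptance test; the tiny offset guards log(0)
        if math.log(rng.random() + 1e-300) < logp(y) - logp(x):
            x = y
        chain.append(x)
    return chain

# target: standard normal, with known mean 0 and variance 1
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 2.0, 100000)
```

The sample mean and variance of the chain should approach 0 and 1, which is the usual first sanity check before trusting a sampler on a harder target.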
Second Order Optimality in Markov Decision Chains
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2017-01-01
Roč. 53, č. 6 (2017), s. 1086-1099 ISSN 0023-5954 R&D Projects: GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : Markov decision chains * second order optimality * optimality conditions for transient, discounted and average models * policy and value iterations Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/E/sladky-0485146.pdf
Dynamical fluctuations for semi-Markov processes
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel; Wynants, B.
2009-01-01
Roč. 42, č. 36 (2009), 365002/1-365002/21 ISSN 1751-8113 R&D Projects: GA ČR GC202/07/J051 Institutional research plan: CEZ:AV0Z10100520 Keywords : nonequilibrium fluctuations * semi-Markov processes Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.577, year: 2009 http://www.iop.org/EJ/abstract/1751-8121/42/36/365002
Analysis of a quantum Markov chain
International Nuclear Information System (INIS)
Marbeau, J.; Gudder, S.
1990-01-01
A quantum chain is analogous to a classical stationary Markov chain except that the probability measure is replaced by a complex amplitude measure and the transition probability matrix is replaced by a transition amplitude matrix. After considering the general situation, we study a particular example of a quantum chain whose transition amplitude matrix has the form of a Dirichlet matrix. Such matrices generate a discrete analog of the usual continuum Feynman amplitude. We then compute the probability distribution for these quantum chains
Modelling of cyclical stratigraphy using Markov chains
Energy Technology Data Exchange (ETDEWEB)
Kulatilake, P.H.S.W.
1987-07-01
State-of-the-art on modelling of cyclical stratigraphy using first-order Markov chains is reviewed. Shortcomings of the presently available procedures are identified. A procedure which eliminates all the identified shortcomings is presented. Required statistical tests to perform this modelling are given in detail. An example (the Oficina formation in eastern Venezuela) is given to illustrate the presented procedure. 12 refs., 3 tabs. 1 fig.
Markov Chains For Testing Redundant Software
White, Allan L.; Sjogren, Jon A.
1990-01-01
Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
Operational Markov Condition for Quantum Processes
Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan
2018-01-01
We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process or the experimental falsifiability of a Markovian hypothesis.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
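The paper's TeS procedure is not reproduced here, but the underlying idea of reusing one chain at several temperatures can be illustrated with plain Boltzmann reweighting of sampled configurations; the function name and the two-level toy system are assumptions for illustration only.

```python
import math

def reweight_mean(energies, values, t_from, t_to, k=1.0):
    """Estimate <A> at temperature t_to from configurations sampled at
    t_from, weighting sample i by w_i ~ exp(-E_i * (1/(k*t_to) - 1/(k*t_from)))."""
    dbeta = 1.0 / (k * t_to) - 1.0 / (k * t_from)
    shift = max(-e * dbeta for e in energies)     # stabilise the exponentials
    w = [math.exp(-e * dbeta - shift) for e in energies]
    z = sum(w)
    return sum(wi * a for wi, a in zip(w, values)) / z

# toy usage: energies of configurations visited by some chain at T = 1
E = [0.0, 1.0, 0.0, 1.0, 0.0]
mean_same = reweight_mean(E, E, 1.0, 1.0)   # equals the plain average
mean_cold = reweight_mean(E, E, 1.0, 0.5)   # low-energy states upweighted
```

Reweighting to the same temperature returns the ordinary sample mean, and reweighting to a lower temperature shifts the average energy down, which is the qualitative behaviour any temperature-scaling scheme must reproduce.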
Simulation of daily rainfall through markov chain modeling
International Nuclear Information System (INIS)
Sadiq, N.
2015-01-01
In an agricultural country, the inhabitants of cultivated dry lands rely mainly on daily rainfall for watering their fields. A stochastic model based on a first-order Markov chain was developed to simulate daily rainfall data for Multan, D. I. Khan, Nawabshah, Chilas and Barkhan for the period 1981-2010. Transition probability matrices of the first-order Markov chain were utilized to generate daily rainfall occurrence, while the gamma distribution was used to generate daily rainfall amounts. To obtain the parametric values for the mentioned cities, the method of moments was used to estimate the shape and scale parameters, which lead to synthetic sequence generation as per the gamma distribution. In this study, unconditional and conditional probabilities of wet and dry days, together with means and standard deviations, are considered the essential parameters for the simulated stochastic generation of daily rainfall. It was found that the computer-generated synthetic rainfall series agreed well with the actual observed rainfall series. (author)
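The two-part generator described above (a first-order occurrence chain plus gamma-distributed wet-day amounts) can be sketched as follows; the transition probabilities and gamma parameters are invented placeholders, not the fitted values for any of the five stations.

```python
import random

# Illustrative parameters only (not fitted values from the study):
P_WET_AFTER_DRY = 0.25     # P(wet tomorrow | dry today)
P_WET_AFTER_WET = 0.60     # P(wet tomorrow | wet today)
SHAPE, SCALE = 0.8, 8.0    # gamma parameters for wet-day amounts (mm)

def simulate_rainfall(days, seed=1):
    """First-order Markov chain for occurrence, gamma draws for amounts."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(days):
        p = P_WET_AFTER_WET if wet else P_WET_AFTER_DRY
        wet = rng.random() < p
        series.append(rng.gammavariate(SHAPE, SCALE) if wet else 0.0)
    return series

series = simulate_rainfall(100000)
```

For a two-state occurrence chain, the long-run fraction of wet days is p_wd / (1 + p_wd - p_ww), here 0.25/0.65, about 0.385, which the simulated series should approach.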
Extreme event statistics in a drifting Markov chain
Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur
2017-07-01
We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
Entropies from Markov Models as Complexity Measures of Embedded Attractors
Directory of Open Access Journals (Sweden)
Julián D. Arias-Londoño
2015-06-01
Full Text Available This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with a quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor along time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method proposes the use of an unsupervised algorithm to find the principal curve, which is considered the “profile trajectory” and serves to adjust the Markov model. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison purposes.
Markov chain aggregation and its applications to combinatorial reaction networks.
Ganguly, Arnab; Petrov, Tatjana; Koeppl, Heinz
2014-09-01
We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also guarantees that the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section illustrates how the developed theory aids in reducing CTMC models of biochemical systems, particularly in connection with protein-protein interactions. We assume that the model is written by a biologist in the form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing sufficient reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.
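As a toy illustration of the aggregation idea, the sketch below checks the simpler condition of strong lumpability for a small generator matrix (the paper's weak-lumpability variant additionally depends on the initial distribution and the measures assigned to aggregates, which this sketch ignores). The matrix and partition are hypothetical.

```python
import numpy as np

def is_lumpable(Q, partition, tol=1e-9):
    """Strong lumpability check: for every pair of aggregates (A, B),
    the total transition rate from state i into B must be the same
    for all states i in A."""
    for A in partition:
        for B in partition:
            rates = [Q[i, list(B)].sum() for i in A]
            if max(rates) - min(rates) > tol:
                return False
    return True

# Hypothetical 3-state generator where states 1 and 2 are interchangeable
Q = np.array([[-2.0, 1.0, 1.0],
              [3.0, -4.0, 1.0],
              [3.0, 1.0, -4.0]])
print(is_lumpable(Q, [(0,), (1, 2)]))  # True
```

When the check passes, states 1 and 2 can be merged into a single aggregate without losing the Markov property at the aggregate level.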
Stencil method: a Markov model for transport in porous media
Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.
2016-12-01
In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to modeling dispersion on both structured and unstructured networks.
Motion Imitation and Recognition using Parametric Hidden Markov Models
DEFF Research Database (Denmark)
Herzog, Dennis; Ude, Ales; Krüger, Volker
2008-01-01
) are important. Only together they convey the whole meaning of an action. Similarly, to imitate a movement, the robot needs to select the proper action and parameterize it, e.g., by the relative position of the object that needs to be grasped. We propose to utilize parametric hidden Markov models (PHMMs), which...... extend the classical HMMs by introducing a joint parameterization of the observation densities, to simultaneously solve the problems of action recognition, parameterization of the observed actions, and action synthesis. The proposed approach was fully implemented on a humanoid robot HOAP-3. To evaluate...... the approach, we focused on reaching and pointing actions. Even though the movements are very similar in appearance, our approach is able to distinguish the two movement types and discover the parameterization, and is thus enabling both, action recognition and action synthesis. Through parameterization we...
A logic for specifying stochastic actions and observations
CSIR Research Space (South Africa)
Rens, G
2014-03-01
Full Text Available (2007) 30. Smallwood, R., Sondik, E.: The optimal control of partially observable Markov processes over a finite horizon. Operations Research 21, 1071–1088 (1973) 31. Tarski, A.: A decision method for elementary algebra and geometry. Tech. rep., The RAND... POMDPs totally into the logical arena. One is then in very familiar territory and new opportunities for the advancement in reasoning about POMDPs may be opened up. Systems of linear inequalities are at the heart of Nilsson’s probabilistic logic [19...
Hidden Markov event sequence models: toward unsupervised functional MRI brain mapping.
Faisan, Sylvain; Thoraval, Laurent; Armspach, Jean-Paul; Foucher, Jack R; Metz-Lutz, Marie-Noëlle; Heitz, Fabrice
2005-01-01
Most methods used in functional MRI (fMRI) brain mapping require restrictive assumptions about the shape and timing of the fMRI signal in activated voxels. Consequently, fMRI data may be partially and misleadingly characterized, leading to suboptimal or invalid inference. To limit these assumptions and to capture the broad range of possible activation patterns, a novel statistical fMRI brain mapping method is proposed. It relies on hidden semi-Markov event sequence models (HSMESMs), a special class of hidden Markov models (HMMs) dedicated to the modeling and analysis of event-based random processes. Activation detection is formulated in terms of time coupling between (1) the observed sequence of hemodynamic response onset (HRO) events detected in the voxel's fMRI signal and (2) the "hidden" sequence of task-induced neural activation onset (NAO) events underlying the HROs. Both event sequences are modeled within a single HSMESM. The resulting brain activation model is trained to automatically detect neural activity embedded in the input fMRI data set under analysis. The data sets considered in this article are threefold: synthetic epoch-related, real epoch-related (auditory lexical processing task), and real event-related (oddball detection task) fMRI data sets. Synthetic data: Activation detection results demonstrate the superiority of the HSMESM mapping method with respect to a standard implementation of the statistical parametric mapping (SPM) approach. They are also very close, sometimes equivalent, to those obtained with an "ideal" implementation of SPM in which the activation patterns synthesized are reused for analysis. The HSMESM method appears clearly insensitive to timing variations of the hemodynamic response and exhibits low sensitivity to fluctuations of its shape (unsustained activation during task). Real epoch-related data: HSMESM activation detection results compete with those obtained with SPM, without requiring any prior definition of the expected
Markov chains and semi-Markov models in time-to-event analysis.
Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J
2013-10-25
A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
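A minimal example of the multi-state view of time-to-event data: a discrete-time illness-death chain with an absorbing death state, where state occupancy after k steps follows from the k-th matrix power. The monthly transition probabilities are hypothetical.

```python
import numpy as np

# Hypothetical monthly transition matrix for a three-state
# illness-death model: 0 = healthy, 1 = ill, 2 = dead (absorbing).
P = np.array([[0.92, 0.05, 0.03],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])

start = np.array([1.0, 0.0, 0.0])                 # everyone starts healthy
occupancy_24m = start @ np.linalg.matrix_power(P, 24)
survival_24m = 1.0 - occupancy_24m[2]             # P(not dead by 24 months)
print(f"24-month survival: {survival_24m:.3f}")
```

Competing risks or recurrent events are handled the same way, by adding states and transitions rather than changing the machinery.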
Indian Academy of Sciences (India)
Partial Cancellation. Full cancellation is desirable, but the complexity requirements are enormous: with 4000 tones and 100 users, billions of FLOPS are needed. Main idea and challenge: to determine which cross-talker to cancel on which “tone” for a given victim. Constraint: total complexity is ...
Derivation of Markov processes that violate detailed balance
Lee, Julian
2018-03-01
Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
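The detailed balance condition π_i P_ij = π_j P_ji is easy to test numerically. The sketch below, a discrete-time analogue of the cyclic processes discussed above with an assumed toy matrix, computes the stationary distribution and checks for a net probability current.

```python
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a distribution."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def violates_detailed_balance(P, tol=1e-9):
    pi = stationary(P)
    flow = pi[:, None] * P          # flow[i, j] = pi_i * P_ij
    return np.abs(flow - flow.T).max() > tol

# Hypothetical cyclic chain 0 -> 1 -> 2 -> 0 with a net clockwise drift
P_cyclic = np.array([[0.1, 0.8, 0.1],
                     [0.1, 0.1, 0.8],
                     [0.8, 0.1, 0.1]])
print(violates_detailed_balance(P_cyclic))  # True: net probability current
```

Such a chain has no reversible equilibrium; the paper's point is that it can be embedded in a larger, detailed-balanced model that includes the driving agent.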
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted: typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn-time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
A Bayesian method for construction of Markov models to describe dynamics on various time-scales.
Rains, Emily K; Andersen, Hans C
2010-10-14
The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
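A Markov state model's transition matrix T for a fixed partition P and lag τ can be estimated from a discretized trajectory by counting transitions; this maximum-likelihood count estimate is only the starting point for the Bayesian treatment in the paper, which also infers the partition itself. The trajectory below is a made-up toy.

```python
import numpy as np

def count_transitions(traj, n_states, tau=1):
    """Row-normalized transition matrix estimated at lag tau from a
    discrete trajectory of mesostate labels (a maximum-likelihood
    estimate, not the full Bayesian treatment of the paper)."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-tau], traj[tau:]):
        C[i, j] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

traj = [0, 0, 1, 1, 1, 0, 2, 2, 1, 0, 0, 1]
T = count_transitions(traj, n_states=3, tau=1)
print(T)
```

Varying τ and checking that T(τ) behaves like T(1)^τ is the usual consistency test before trusting such a model.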
Directory of Open Access Journals (Sweden)
Liberti Gian Luigi
2016-01-01
Full Text Available This study reports some preliminary analyses of multichannel lidar measurements taken in Rome Tor Vergata (Italy during the 20th March 2015 partial solar eclipse. The objective is assessing the capability of the instrument to document the effect of the eclipse in the lower troposphere, with a particular emphasis on the information content at relatively small temporal and spatial scales.
Liberti, Gian Luigi; Dionisi, Davide; Federico, Stefano; Congeduti, Fernando
2016-06-01
This study reports some preliminary analyses of multichannel lidar measurements taken in Rome Tor Vergata (Italy) during the 20th March 2015 partial solar eclipse. The objective is assessing the capability of the instrument to document the effect of the eclipse in the lower troposphere, with a particular emphasis on the information content at relatively small temporal and spatial scales.
A New GMRES(m) Method for Markov Chains
Directory of Open Access Journals (Sweden)
Bing-Yuan Pu
2013-01-01
Full Text Available This paper presents a class of new accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method by showing how to periodically combine the GMRES and vector extrapolation methods into a much more efficient one for improving the convergence rate in Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.
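For orientation, the quantity being computed is the left eigenvector π with πP = π. The sketch below obtains it by plain power iteration on a toy chain; the paper's contribution is to accelerate exactly this kind of computation with restarted GMRES combined with vector extrapolation.

```python
import numpy as np

def stationary_power(P, tol=1e-12, max_iter=100_000):
    """Stationary vector pi with pi P = pi, via power iteration -- a
    simple baseline for the restarted-GMRES acceleration in the paper."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).max() < tol:
            return new
        pi = new
    return pi

# Toy irreducible birth-death chain
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary_power(P)
print(pi)  # analytically, pi = [0.25, 0.5, 0.25]
```

Power iteration converges slowly when the subdominant eigenvalue is close to 1, which is precisely the regime where Krylov methods such as GMRES(m) pay off.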
Context Tree Estimation in Variable Length Hidden Markov Models
Dumont, Thierry
2011-01-01
We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...
Fraud Detection Using the Hidden Markov Model Method in Business Processes
Directory of Open Access Journals (Sweden)
Andrean Hutama Koosasi
2017-03-01
Full Text Available The Hidden Markov Model is a statistical method, based on the simple Markov model, that models a system by dividing it into two kinds of states: hidden states and observation states. In this final project, the author proposes using the Hidden Markov Model method to detect fraud in the execution of a business process. With this method, observing the elements that make up a case (a sequence of activities) yields a probability value that serves as a prediction of whether or not the case is fraudulent. Experimental results show that the proposed method delivers final predictions with a TPR of 87.5% and a TNR of 99.4%.
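The reported evaluation metrics are standard confusion-matrix ratios. The sketch below shows the definitions with hypothetical counts chosen to reproduce the quoted rates (the actual experimental counts are not given in the abstract).

```python
def tpr_tnr(tp, fn, tn, fp):
    """True positive rate (sensitivity) and true negative rate (specificity)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a fraud classifier
tpr, tnr = tpr_tnr(tp=35, fn=5, tn=497, fp=3)
print(f"TPR = {tpr:.1%}, TNR = {tnr:.1%}")  # TPR = 87.5%, TNR = 99.4%
```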
International Nuclear Information System (INIS)
1978-11-01
This discussion paper considers the possibility of applying to the recycle of plutonium in thermal reactors a particular method of partial processing based on the PUREX process but named CIVEX to emphasise the differences. The CIVEX process is based primarily on the retention of short-lived fission products. The paper suggests: (1) the recycle of fission products with uranium and plutonium in thermal reactor fuel would be technically feasible; (2) it would, however, take ten years or more to develop the CIVEX process to the point where it could be launched on a commercial scale; (3) since the majority of spent fuel to be reprocessed this century will have been in storage for ten years or more, the recycling of short-lived fission products with the U-Pu would not provide an effective means of making refabricated fuel "inaccessible", because the radioactivity associated with the fission products would have decayed. There would therefore be no advantage in partial processing.
Markov Chain Analysis of Musical Dice Games
Volchenkov, D.; Dawin, J. R.
2012-07-01
A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First-passage times to notes can be used to resolve tonality and are characteristic of a composer.
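The entropy-versus-redundancy comparison rests on the entropy rate of the note-transition chain, H = -Σ_i π_i Σ_j P_ij log2 P_ij. The sketch below computes it for an assumed two-note toy matrix, not the actual MIDI-derived matrices.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate of a stationary Markov chain, in bits per step."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    # guard against log(0) for zero transition probabilities
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

# Toy 2-note transition matrix (illustrative, not from the MIDI corpus)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(f"{entropy_rate(P):.3f} bits/step")
```

Comparing this rate to the maximum log2(number of notes) gives the redundancy figure that the study contrasts with human languages.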
Pruning Boltzmann networks and hidden Markov models
DEFF Research Database (Denmark)
Pedersen, Morten With; Stork, D.
1996-01-01
We present sensitivity-based pruning algorithms for general Boltzmann networks. Central to our methods is the efficient calculation of a second-order approximation to the true weight saliencies in a cross-entropy error. Building upon previous work which shows a formal correspondence between linear...... Boltzmann chains and hidden Markov models (HMMs), we argue that our method can be applied to HMMs as well. We illustrate pruning on Boltzmann zippers, which are equivalent to two HMMs with cross-connection links. We verify that our second-order approximation preserves the rank ordering of weight saliencies...
Decoding LDPC Convolutional Codes on Markov Channels
Directory of Open Access Journals (Sweden)
Kashyap Manohar
2008-01-01
Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
Decoding LDPC Convolutional Codes on Markov Channels
Directory of Open Access Journals (Sweden)
Chris Winstead
2008-04-01
Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
Evolving the structure of hidden Markov Models
DEFF Research Database (Denmark)
won, K. J.; Prugel-Bennett, A.; Krogh, A.
2006-01-01
A genetic algorithm (GA) is proposed for finding the structure of hidden Markov Models (HMMs) used for biological sequence analysis. The GA is designed to preserve biologically meaningful building blocks. The search through the space of HMM structures is combined with optimization of the emission...... and transition probabilities using the classic Baum-Welch algorithm. The system is tested on the problem of finding the promoter and coding region of C. jejuni. The resulting HMM has a superior discrimination ability to a handcrafted model that has been published in the literature....
Honest Importance Sampling with Multiple Markov Chains.
Tan, Aixin; Doss, Hani; Hobert, James P
2015-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
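An iid sketch of the classical estimator described above (no Markov chains or regeneration): samples from π1 = N(0, 2²) estimate an expectation under π = N(0, 1), with the CLT-based standard error. The densities and test function are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target pi = N(0,1); proposal pi_1 = N(0, 2^2); f(x) = x^2, so E_pi[f] = 1.
n = 100_000
x = rng.normal(0.0, 2.0, n)              # draws from pi_1
w = 2.0 * np.exp(-3.0 * x**2 / 8.0)      # density ratio pi(x)/pi_1(x)
wf = w * x**2
est = wf.mean()
se = wf.std(ddof=1) / np.sqrt(n)         # CLT-based standard error
print(f"estimate = {est:.3f} +/- {2 * se:.3f}")
```

In the MCMC setting the sample mean stays the same but this naive standard error is no longer valid, which is the gap the paper's regenerative construction closes.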
ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS
Directory of Open Access Journals (Sweden)
Mohammad Mirabi
2012-01-01
Full Text Available
ENGLISH ABSTRACT: In this research, a Markov analysis of acceptance sampling plans in a single stage and in two stages is proposed, based on the quality of the items inspected. In a stage of this policy, if the number of defective items in a sample of inspected items is more than the upper threshold, the batch is rejected. However, the batch is accepted if the number of defective items is less than the lower threshold. Nonetheless, when the number of defective items falls between the upper and lower thresholds, the decision-making process continues to inspect the items and collect further samples. The primary objective is to determine the optimal values of the upper and lower thresholds using a Markov process to minimise the total cost associated with a batch acceptance policy. A solution method is presented, along with a numerical demonstration of the application of the proposed methodology.
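The two-threshold policy described in the abstract can be checked by direct Monte Carlo: at each stage a sample is drawn, and the batch is accepted, rejected, or sampled again depending on the defective count. All numerical parameters below are hypothetical; the paper instead derives optimal thresholds analytically via a Markov process.

```python
import numpy as np

rng = np.random.default_rng(2)

def accept_batch(p_defect, n_sample, lower, upper, max_stages=50):
    """Accept if defectives < lower, reject if > upper, otherwise
    draw another sample; undecided after max_stages counts as reject."""
    for _ in range(max_stages):
        defects = rng.binomial(n_sample, p_defect)
        if defects < lower:
            return True
        if defects > upper:
            return False
    return False

decisions = [accept_batch(0.02, n_sample=50, lower=2, upper=4)
             for _ in range(2000)]
print(f"acceptance rate at 2% defective: {np.mean(decisions):.3f}")
```

Sweeping `p_defect` traces out the operating characteristic curve, against which the cost-optimal thresholds of the paper would be chosen.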
Vulnerability of networks of interacting Markov chains.
Kocarev, L; Zlatanov, N; Trajanov, D
2010-05-13
The concept of vulnerability is introduced for a model of random, dynamical interactions on networks. In this model, known as the influence model, the nodes are arranged in an arbitrary network, while the evolution of the status at a node is according to an internal Markov chain, but with transition probabilities that depend not only on the current status of that node but also on the statuses of the neighbouring nodes. Vulnerability is treated analytically and numerically for several networks with different topological structures, as well as for two real networks--the network of infrastructures and the EU power grid--identifying the most vulnerable nodes of these networks.
Genetic Algorithms Principles Towards Hidden Markov Model
Directory of Open Access Journals (Sweden)
Nabil M. Hewahi
2011-10-01
Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises because, when experts assign probability values for an HMM, they use only limited inputs; the assigned values may not be accurate for other cases in the same domain. We introduce a GA-based approach to find suitable probability values so that the HMM is correct in more cases than those used to assign the original values.
Directory of Open Access Journals (Sweden)
M. M. Karimova
2017-05-01
Full Text Available A girl with partial gigantism (enlarged first and second toes of the left foot) is under examination. This condition is a rare and unresolved problem, as the definite cause of its development has not been determined. A wait-and-see strategy is recommended, along with corrective operations after the closure of growth zones, and the formation of a data pool for generalization and for developing schemes of drug and radiation therapy.
Perspective: Markov models for long-timescale biomolecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Schwantes, C. R.; McGibbon, R. T. [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Pande, V. S., E-mail: pande@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Department of Computer Science, Stanford University, Stanford, California 94305 (United States); Department of Structural Biology, Stanford University, Stanford, California 94305 (United States); Biophysics Program, Stanford University, Stanford, California 94305 (United States)
2014-09-07
Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
Control Design for Untimed Petri Nets Using Markov Decision Processes
Directory of Open Access Journals (Sweden)
Cherki Daoui
2017-01-01
Full Text Available The design of control sequences for discrete event systems (DESs) modelled by untimed Petri nets (PNs) is presented. PNs are well-known mathematical and graphical models widely used to describe distributed DESs, including choices, synchronizations and parallelisms. The domains of application include, but are not restricted to, manufacturing systems, computer science and transportation networks. We are motivated by the observation that such systems need to plan their production or services. The paper is particularly concerned with control issues in uncertain environments, when unexpected events occur or when control errors disturb the behaviour of the system. To deal with such uncertainties, a new approach based on discrete-time Markov decision processes (MDPs) is proposed that combines the modelling power of PNs with the planning power of MDPs. Finally, simulation results illustrate the benefit of our method from the computational point of view. (original abstract)
Hidden Markov modelling of movement data from fish
DEFF Research Database (Denmark)
Pedersen, Martin Wæver
Movement data from marine animals tagged with electronic tags are becoming increasingly diverse and plentiful. This trend entails a need for statistical methods that are able to filter the observations to extract the ecologically relevant content. This dissertation focuses on the development...... the behaviour of the animal. With the extended model, migratory and resident movement behaviour can be related to geographical regions. For population inference, multiple individual state-space analyses can be interconnected using mixed effects modelling. This framework provides parameter estimates...... approximated. This furthermore enables accurate probability densities of location to be computed. Finally, the performance of the HMM approach in analysing nonlinear state-space models is compared with two alternatives: the AD Model Builder framework and BUGS, which relies on Markov chain Monte Carlo...
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Perspective: Markov models for long-timescale biomolecular dynamics
International Nuclear Information System (INIS)
Schwantes, C. R.; McGibbon, R. T.; Pande, V. S.
2014-01-01
Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
Nonequilibrium thermodynamic potentials for continuous-time Markov chains.
Verley, Gatien
2016-01-01
We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables, but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials that reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.
Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models
Directory of Open Access Journals (Sweden)
Daniel Fernando Tello Gamarra
2016-12-01
Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than in models that do not include gaze observations.
An Approach of Diagnosis Based On The Hidden Markov Chains Model
Directory of Open Access Journals (Sweden)
Karim Bouamrane
2008-07-01
Full Text Available Diagnosis is a key element in the performance of an industrial system maintenance process. A diagnosis tool is proposed that allows maintenance operators to capitalize on the knowledge of their trade and to subdivide it for better performance improvement and intervention effectiveness within the maintenance service. The tool is based on the Markov chain model and more precisely on hidden Markov chains (HMCs), which have the advantage of determining system failures while taking into account the causal relations, modelling the stochastic context of their dynamics, and providing relevant diagnostic help through their ability to use uncertain information. Since the FMEA method is well adapted to this artificial intelligence field, the modelling with Markov chains is carried out with its assistance. A dynamic programming recursive algorithm, called the 'Viterbi algorithm', is used in the hidden Markov chain field. This algorithm takes as input to the HMC a set of observed system effects and outputs the various causes having led to the loss of one or several system functions.
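The Viterbi recursion mentioned above can be sketched in a few lines. The two hidden "cause" states, the emission model for observed effects, and the observation sequence below are hypothetical stand-ins, not the FMEA-derived model of the paper.

```python
import numpy as np

# Minimal Viterbi decoder for a discrete HMM (illustrative parameters).
A = np.array([[0.7, 0.3],      # hidden-state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities: P(effect | cause state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution
obs = [0, 0, 1, 1, 0]          # a sequence of observed effects

# Dynamic programming in log space to avoid underflow.
logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
n_states, T = A.shape[0], len(obs)
delta = np.zeros((T, n_states))
psi = np.zeros((T, n_states), dtype=int)
delta[0] = logpi + logB[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] + logA   # scores[i, j]: best path ending in j via i
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + logB[:, obs[t]]

# Backtrack the most likely hidden-state path (the inferred causes).
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
path.reverse()
```

For this toy model the decoder tracks the observations closely, since each state strongly prefers one of the two effects.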
Markov switching of the electricity supply curve and power prices dynamics
Mari, Carlo; Cananà, Lucianna
2012-02-01
Regime-switching models seem to well capture the main features of power prices behavior in deregulated markets. In a recent paper, we have proposed an equilibrium methodology to derive electricity prices dynamics from the interplay between supply and demand in a stochastic environment. In particular, assuming that the supply function is described by a power law where the exponent is a two-state strictly positive Markov process, we derived a regime switching dynamics of power prices in which regime switches are induced by transitions between Markov states. In this paper, we provide a dynamical model to describe the random behavior of power prices where the only non-Brownian component of the motion is endogenously introduced by Markov transitions in the exponent of the electricity supply curve. In this context, the stochastic process driving the switching mechanism becomes observable, and we will show that the non-Brownian component of the dynamics induced by transitions from Markov states is responsible for jumps and spikes of very high magnitude. The empirical analysis performed on three Australian markets confirms that the proposed approach seems quite flexible and capable of incorporating the main features of power prices time-series, thus reproducing the first four moments of log-returns empirical distributions in a satisfactory way.
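A minimal simulation in the spirit of this setup: a two-state Markov chain switches the volatility of a Brownian log-price, and the rare spike regime produces the heavy tails the abstract refers to. All parameters are invented for illustration, not estimated from the Australian market data.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[0.99, 0.01],    # regime transition probabilities per step:
              [0.20, 0.80]])   # state 0 = base regime, state 1 = spike regime
sigma = np.array([0.02, 0.25]) # volatility is much larger in the spike regime

T = 1000
states = np.zeros(T, dtype=int)
logp = np.zeros(T)
for t in range(1, T):
    states[t] = rng.choice(2, p=Q[states[t - 1]])
    # Zero-drift Brownian component whose scale depends on the Markov state.
    logp[t] = logp[t - 1] + sigma[states[t]] * rng.standard_normal()

returns = np.diff(logp)
# Regime switching fattens the tails: sample kurtosis well above the
# Gaussian value of 3, mimicking jumps and spikes in power prices.
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
```

Here the only non-Gaussian ingredient is the Markov switching itself, mirroring the paper's point that transitions between states alone can generate spike-like behaviour.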
Tokunaga and Horton self-similarity for level set trees of Markov chains
International Nuclear Information System (INIS)
Zaliapin, Ilia; Kovchegov, Yevgeniy
2012-01-01
Highlights: ► Self-similar properties of the level set trees for Markov chains are studied. ► Tokunaga and Horton self-similarity are established for symmetric Markov chains and regular Brownian motion. ► Strong, distributional self-similarity is established for symmetric Markov chains with exponential jumps. ► It is conjectured that fractional Brownian motions are Tokunaga self-similar. - Abstract: The Horton and Tokunaga branching laws provide a convenient framework for studying self-similarity in random trees. The Horton self-similarity is a weaker property that addresses the principal branching in a tree; it is a counterpart of the power-law size distribution for elements of a branching system. The stronger Tokunaga self-similarity addresses so-called side branching. The Horton and Tokunaga self-similarity have been empirically established in numerous observed and modeled systems, and proven for two paradigmatic models: the critical Galton–Watson branching process with finite progeny and the finite-tree representation of a regular Brownian excursion. This study establishes the Tokunaga and Horton self-similarity for a tree representation of a finite symmetric homogeneous Markov chain. We also extend the concept of Horton and Tokunaga self-similarity to infinite trees and establish self-similarity for an infinite-tree representation of a regular Brownian motion. We conjecture that fractional Brownian motions are also Tokunaga and Horton self-similar, with self-similarity parameters depending on the Hurst exponent.
Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus
2014-01-01
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
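A toy version of the order-selection question: generate a navigation-like sequence from a known first-order chain and compare first- and second-order fits by AIC. The three-page chain and the plain AIC criterion are illustrative stand-ins for the more careful inference methods the paper surveys.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n_pages = 3
P = np.array([[0.1, 0.6, 0.3],  # hypothetical first-order navigation chain
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])
seq = [0]
for _ in range(5000):
    seq.append(rng.choice(n_pages, p=P[seq[-1]]))

def loglik(seq, order):
    """Maximum-likelihood log-likelihood of a k-th order Markov chain."""
    ctx, joint = Counter(), Counter()
    for i in range(order, len(seq)):
        h = tuple(seq[i - order:i])
        ctx[h] += 1
        joint[h + (seq[i],)] += 1
    return sum(c * np.log(c / ctx[k[:-1]]) for k, c in joint.items())

aic = {}
for order in (1, 2):
    k = n_pages ** order * (n_pages - 1)   # free transition parameters
    aic[order] = 2 * k - 2 * loglik(seq, order)
```

Since the data come from a first-order chain, the second-order model's extra parameters buy almost no likelihood, which is exactly the complexity-versus-utility trade-off the paper quantifies.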
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
Epitope discovery with phylogenetic hidden Markov models.
LENUS (Irish Health Repository)
Lacerda, Miguel
2010-05-01
Existing methods for the prediction of immunologically active T-cell epitopes are based on the amino acid sequence or structure of pathogen proteins. Additional information regarding the locations of epitopes may be acquired by considering the evolution of viruses in hosts with different immune backgrounds. In particular, immune-dependent evolutionary patterns at sites within or near T-cell epitopes can be used to enhance epitope identification. We have developed a mutation-selection model of T-cell epitope evolution that allows the human leukocyte antigen (HLA) genotype of the host to influence the evolutionary process. This is one of the first examples of the incorporation of environmental parameters into a phylogenetic model and has many other potential applications where the selection pressures exerted on an organism can be related directly to environmental factors. We combine this novel evolutionary model with a hidden Markov model to identify contiguous amino acid positions that appear to evolve under immune pressure in the presence of specific host immune alleles and that therefore represent potential epitopes. This phylogenetic hidden Markov model provides a rigorous probabilistic framework that can be combined with sequence or structural information to improve epitope prediction. As a demonstration, we apply the model to a data set of HIV-1 protein-coding sequences and host HLA genotypes.
Neyman, Markov processes and survival analysis.
Yang, Grace
2013-07-01
J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
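A minimal numerical sketch of a finite-state survival model that allows recovery and relapse, in the spirit of the F-N model. The states and monthly transition probabilities below are invented, and the chain is homogeneous and discrete-time rather than the nonhomogeneous continuous-time process discussed above.

```python
import numpy as np

# Illustrative three-state illness-death chain with recovery.
# States: 0 = in remission, 1 = relapsed, 2 = dead (absorbing).
P = np.array([[0.95, 0.04, 0.01],   # monthly transition probabilities
              [0.20, 0.70, 0.10],   # recovery back to remission is allowed
              [0.00, 0.00, 1.00]])

def survival(n_months, start=0):
    """P(alive at n months | start state) = 1 - absorption probability."""
    Pn = np.linalg.matrix_power(P, n_months)
    return 1.0 - Pn[start, 2]

# Survival curve at yearly intervals over five years.
curve = [survival(n) for n in range(0, 61, 12)]
```

Unlike a pure multiple-decrement model, the transition 1 -> 0 lets a patient re-enter the "normal life" state, which is the feature the F-N model adds to the Kaplan-Meier formulation.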
Unmixing hyperspectral images using Markov random fields
International Nuclear Information System (INIS)
Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves
2011-01-01
This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled thanks to Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data are presented to illustrate the performance of the proposed algorithm.
Markov transitions and the propagation of chaos
International Nuclear Information System (INIS)
Gottlieb, A.
1998-01-01
The propagation of chaos is a central concept of kinetic theory that serves to relate the equations of Boltzmann and Vlasov to the dynamics of many-particle systems. Propagation of chaos means that molecular chaos, i.e., the stochastic independence of two random particles in a many-particle system, persists in time, as the number of particles tends to infinity. We establish a necessary and sufficient condition for a family of general n-particle Markov processes to propagate chaos. This condition is expressed in terms of the Markov transition functions associated to the n-particle processes, and it amounts to saying that chaos of random initial states propagates if it propagates for pure initial states. Our proof of this result relies on the weak convergence approach to the study of chaos due to Sznitman and Tanaka. We assume that the space in which the particles live is homeomorphic to a complete and separable metric space so that we may invoke Prohorov's theorem in our proof. We also show that, if the particles can be in only finitely many states, then molecular chaos implies that the specific entropies in the n-particle distributions converge to the entropy of the limiting single-particle distribution.
Asymptotic evolution of quantum Markov chains
Energy Technology Data Exchange (ETDEWEB)
Novotny, Jaroslav [FNSPE, CTU in Prague, 115 19 Praha 1 - Stare Mesto (Czech Republic); Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)
2012-07-01
The iterated quantum operations, so-called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium, or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks and different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate that there is always a (typically low-dimensional) vector subspace of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors, which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points or asymptotic evolution of random quantum operations.
Monotone measures of ergodicity for Markov chains
Directory of Open Access Journals (Sweden)
J. Keilson
1998-01-01
Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, and the paper is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity, by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time-reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.
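The monotonicity claim is easy to check numerically for a specific chain: the total variation distance to the stationary distribution never increases along the iteration, including for non-reversible chains. The 3-state chain below is an arbitrary example chosen for illustration, not one from the paper.

```python
import numpy as np

# An asymmetric, hence non-time-reversible, 3-state chain.
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.1, 0.8],
              [0.6, 0.3, 0.1]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

mu = np.array([1.0, 0.0, 0.0])     # start from a point mass
tv = []                            # total variation distance at each step
for _ in range(50):
    tv.append(0.5 * np.abs(mu - pi).sum())
    mu = mu @ P
```

Monotone decay of this particular norm holds for any stochastic map (it is a contraction in total variation); the paper's contribution is a whole family of such monotone ergodicity measures tied to the relaxation time.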
Approximating Markov Chains: What and why
International Nuclear Information System (INIS)
Pincus, S.
1996-01-01
Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical and analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques: the approximation of dynamical systems by suitable finite-state Markov chains. Steady-state distributions for these Markov chains, a straightforward calculation, will converge to the true dynamical system steady-state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady-state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady-state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly, with system evolution. copyright 1996 American Institute of Physics
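A standard concrete instance of this program is Ulam-type discretization, sketched here for the logistic map x -> 4x(1-x), whose invariant density rho(x) = 1/(pi*sqrt(x(1-x))) is known in closed form. The bin count and the per-cell sampling scheme are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

# Approximate the logistic map by a finite-state Markov chain on N bins.
N = 200
edges = np.linspace(0.0, 1.0, N + 1)
P = np.zeros((N, N))
points = 1000                       # sample points per cell
for i in range(N):
    x = np.linspace(edges[i], edges[i + 1], points + 2)[1:-1]
    y = 4.0 * x * (1.0 - x)         # push each point through the map
    idx = np.minimum((y * N).astype(int), N - 1)
    np.add.at(P[i], idx, 1.0 / points)

# Steady-state distribution of the chain by power iteration.
mu = np.full(N, 1.0 / N)
for _ in range(2000):
    mu = mu @ P
mu /= mu.sum()

# Compare with the cell-discretized exact invariant density.
centers = (edges[:-1] + edges[1:]) / 2.0
rho = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
rho /= rho.sum()
l1_err = np.abs(mu - rho).sum()
```

The steady-state computation is indeed just linear algebra, and the approximation copes with the boundary singularities of the invariant density, illustrating point (ii) of the abstract.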
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Wintenberger, Olivier
2014-01-01
We introduce the cluster index of a multivariate stationary sequence and characterize the index in terms of the spectral tail process. This index plays a major role in limit theory for partial sums of sequences. We illustrate the use of the cluster index by characterizing infinite variance stable...... limit distributions and precise large deviation results for sums of multivariate functions acting on a stationary Markov chain under a drift condition....
Pathwise duals of monotone and additive Markov processes
Czech Academy of Sciences Publication Activity Database
Sturm, A.; Swart, Jan M.
-, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf
An introduction to hidden Markov models for biological sequences
DEFF Research Database (Denmark)
Krogh, Anders Stærmose
1998-01-01
A non-mathematical tutorial on hidden Markov models (HMMs) plus a description of one of the applications of HMMs: gene finding.
Asymptotics for Estimating Equations in Hidden Markov Models
DEFF Research Database (Denmark)
Hansen, Jørgen Vinsløv; Jensen, Jens Ledet
Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...
Efficient Incorporation of Markov Random Fields in Change Detection
DEFF Research Database (Denmark)
Aanæs, Henrik; Nielsen, Allan Aasbjerg; Carstensen, Jens Michael
2009-01-01
of noise, implying that the pixel-wise classifier is also noisy. There is thus a need for incorporating local homogeneity constraints into such a change detection framework. For this modelling task Markov Random Fields are suitable. Markov Random Fields have, however, previously been plagued by lack...
Markov trace on the Yokonuma-Hecke algebra
International Nuclear Information System (INIS)
Juyumaya, J.
2002-11-01
The objective of this note is to prove that there exists a Markov trace on the Yokonuma-Hecke algebra. A motivation to define a Markov trace is to get polynomial invariants for knots in the sense of Jones construction. (author)
Compositionality for Markov reward chains with fast and silent transitions
Markovski, J.; Sokolova, A.; Trcka, N.; Vink, de E.P.
2009-01-01
A parallel composition is defined for Markov reward chains with stochastic discontinuity, and with fast and silent transitions. In this setting, compositionality with respect to the relevant aggregation preorders is established. For Markov reward chains with fast transitions the preorders are
Model Checking Markov Reward Models with Impulse Rewards
Cloth, Lucia; Katoen, Joost-Pieter; Khattri, Maneesh; Pulungan, Reza; Bondavalli, Andrea; Haverkort, Boudewijn; Tang, Dong
This paper considers model checking of Markov reward models (MRMs), continuous-time Markov chains with state rewards as well as impulse rewards. The reward extension of the logic CSL (Continuous Stochastic Logic) is interpreted over such MRMs, and two numerical algorithms are provided to check the
First hitting probabilities for semi markov chains and estimation
DEFF Research Database (Denmark)
Georgiadis, Stylianos
2017-01-01
We first consider a stochastic system described by an absorbing semi-Markov chain with finite state space and we introduce the absorption probability to a class of recurrent states. Afterwards, we study the first hitting probability to a subset of states for an irreducible semi-Markov chain...
ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS
Coisy, C.; Belaid, A.
2004-01-01
In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs where each state corresponds to a letter; each letter is modelled by an NSHPHMM (Markov field). Global models are built dynamically, and used for recognition
A Markov decision model for optimising economic production lot size ...
African Journals Online (AJOL)
Adopting such a Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...
Portfolio allocation under the vendor managed inventory: A Markov ...
African Journals Online (AJOL)
Portfolio allocation under the vendor managed inventory: A Markov decision process. ... Journal of Applied Sciences and Environmental Management ... This study provides a review of Markov decision processes and investigates its suitability for solutions to portfolio allocation problems under vendor managed inventory in ...
Logics and Models for Stochastic Analysis Beyond Markov Chains
DEFF Research Database (Denmark)
Zeng, Kebin
, because of the generality of ME distributions, we have to leave the world of Markov chains. To support ME distributions with multiple exits, we introduce a multi-exits ME distribution together with a process algebra MEME to express the systems having the semantics as Markov renewal processes with ME...
Fitting timeseries by continuous-time Markov chains: A quadratic programming approach
International Nuclear Information System (INIS)
Crommelin, D.T.; Vanden-Eijnden, E.
2006-01-01
Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the object function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows
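The first, discrete-time half of this procedure is easy to sketch. Below, a transition matrix is estimated from a synthetic timeseries by row-normalized counts, and the naive generator estimate (P - I)/dt is formed for comparison. The paper's actual contribution replaces that last step with a quadratic program matching the eigenspectrum, which this sketch does not implement; the generator and sampling interval here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
L_true = np.array([[-0.5, 0.4, 0.1],    # hypothetical true CTMC generator
                   [0.3, -0.6, 0.3],
                   [0.2, 0.2, -0.4]])
dt = 0.1                                 # sampling interval

def expm(A, squarings=10):
    """Matrix exponential via scaling and squaring of a short Taylor series."""
    B = A / 2.0 ** squarings
    E = np.eye(len(A)) + B + B @ B / 2.0 + B @ B @ B / 6.0
    for _ in range(squarings):
        E = E @ E
    return E

P_true = expm(L_true * dt)               # transition matrix over one interval
path = [0]
for _ in range(50000):                   # synthetic discretized timeseries
    path.append(rng.choice(3, p=P_true[path[-1]]))

C = np.zeros((3, 3))                     # transition counts
for a, b in zip(path[:-1], path[1:]):
    C[a, b] += 1.0
P_hat = C / C.sum(axis=1, keepdims=True) # ML estimate of the chain

L_hat = (P_hat - np.eye(3)) / dt         # naive generator estimate
```

The finite-difference estimate is adequate only when dt is small relative to the relaxation times; matching eigenspectra, as the paper proposes, removes that restriction.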
Introduction to the numerical solutions of Markov chains
Stewart, William J
1994-01-01
A cornerstone of applied probability, Markov chains can be used to help model how plants grow, chemicals react, and atoms diffuse - and applications are increasingly being found in such areas as engineering, computer science, economics, and education. To apply the techniques to real problems, however, it is necessary to understand how Markov chains can be solved numerically. In this book, the first to offer a systematic and detailed treatment of the numerical solution of Markov chains, William Stewart provides scientists on many levels with the power to put this theory to use in the actual world, where it has applications in areas as diverse as engineering, economics, and education. His efforts make for essential reading in a rapidly growing field. Here, Stewart explores all aspects of numerically computing solutions of Markov chains, especially when the state space is huge. He provides extensive background to both discrete-time and continuous-time Markov chains and examines many different numerical computing metho...
The Markov moment problem and extremal problems
Kreĭn, M G; Louvish, D
1977-01-01
In this book, an extensive circle of questions originating in the classical work of P. L. Chebyshev and A. A. Markov is considered from the more modern point of view. It is shown how results and methods of the generalized moment problem are interlaced with various questions of the geometry of convex bodies, algebra, and function theory. From this standpoint, the structure of convex and conical hulls of curves is studied in detail and isoperimetric inequalities for convex hulls are established; a theory of orthogonal and quasiorthogonal polynomials is constructed; problems on limiting values of integrals and on least deviating functions (in various metrics) are generalized and solved; problems in approximation theory and interpolation and extrapolation in various function classes (analytic, absolutely monotone, almost periodic, etc.) are solved, as well as certain problems in optimal control of linear objects.
Neuroevolution Mechanism for Hidden Markov Model
Directory of Open Access Journals (Sweden)
Nabil M. Hewahi
2011-12-01
Hidden Markov Model (HMM) is a statistical model based on probabilities. HMM is becoming one of the major models involved in many applications such as natural language processing, handwriting recognition, image processing, prediction systems and many more. In this research we are concerned with finding the best HMM for a certain application domain. We propose a neuroevolution process based first on converting the HMM to a neural network, then generating many neural networks at random, each representing an HMM. We proceed by applying genetic operators to obtain a new set of neural networks, each representing an HMM, and updating the population. Finally, we select the best neural network based on a fitness function.
Improved hidden Markov model for nosocomial infections.
Khader, Karim; Leecaster, Molly; Greene, Tom; Samore, Matthew; Thomas, Alun
2014-12-01
We propose a novel hidden Markov model (HMM) for parameter estimation in hospital transmission models, and show that commonly made simplifying assumptions can lead to severe model misspecification and poor parameter estimates. A standard HMM that embodies two commonly made simplifying assumptions, namely a fixed patient count and binomially distributed detections, is compared with a new alternative HMM that does not require these simplifying assumptions. Using simulated data, we demonstrate how each of the simplifying assumptions used by the standard model leads to model misspecification, whereas the alternative model results in accurate parameter estimates. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
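A minimal illustration of analog (unbiased) Monte Carlo for a Markov unreliability model — without the forced-transition and failure-biasing variance-reduction techniques the paper adds — is two repairable components in parallel; all rates below are assumed for illustration:

```python
import random

def system_unreliability(lam, mu, t_mission, n_runs, seed=1):
    """Analog Monte Carlo: two identical repairable components in
    parallel; the system fails when both are down simultaneously.
    Returns the estimated probability of system failure before t_mission."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        t, up = 0.0, [True, True]
        # absolute time of each component's next event (initially: failure)
        clocks = [rng.expovariate(lam), rng.expovariate(lam)]
        while t < t_mission:
            i = 0 if clocks[0] < clocks[1] else 1
            t = clocks[i]
            if t >= t_mission:
                break
            up[i] = not up[i]
            if not any(up):            # both components down: system failure
                failures += 1
                break
            # schedule next event: repair if now down, failure if now up
            rate = mu if not up[i] else lam
            clocks[i] = t + rng.expovariate(rate)
    return failures / n_runs

est = system_unreliability(lam=0.01, mu=1.0, t_mission=100.0, n_runs=2000)
```

Exponential holding times make it valid to leave the other component's clock untouched after an event (memorylessness); failure biasing would instead distort these rates and reweight the walks.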
Recombination Processes and Nonlinear Markov Chains.
Pirogov, Sergey; Rybko, Alexander; Kalinina, Anastasia; Gelfand, Mikhail
2016-09-01
Bacteria are known to exchange genetic information by horizontal gene transfer. Since the frequency of homologous recombination depends on the similarity between the recombining segments, several studies examined whether this could lead to the emergence of subspecies. Most of them simulated fixed-size Wright-Fisher populations, in which the genetic drift should be taken into account. Here, we use nonlinear Markov processes to describe a bacterial population evolving under mutation and recombination. We consider a population structure as a probability measure on the space of genomes. This approach implies the infinite population size limit, and thus, the genetic drift is not assumed. We prove that under these conditions, the emergence of subspecies is impossible.
SHARP ENTRYWISE PERTURBATION BOUNDS FOR MARKOV CHAINS.
Thiede, Erik; Van Koten, Brian; Weare, Jonathan
For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We have derived perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity. Our bounds are sharp, we do not impose any structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations.
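The sensitivity phenomenon described here is easy to reproduce numerically. The sketch below uses a toy metastable chain (assumed values, not the paper's example or its bounds): a 1% change in a single entry bridging two nearly decoupled blocks produces a far larger relative change in the invariant distribution.

```python
import numpy as np

def invariant(P):
    """Invariant distribution of an irreducible stochastic matrix,
    via the left eigenvector for eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    v = V[:, np.argmin(np.abs(w - 1.0))].real
    return v / v.sum()

# Nearly decoupled three-state chain
P = np.array([[0.99, 0.01, 0.00],
              [0.01, 0.98, 0.01],
              [0.00, 0.01, 0.99]])
pi = invariant(P)

# Perturb one bridging entry by 0.01 (keeping the row stochastic)
Pp = P.copy()
Pp[1, 2] += 0.01
Pp[1, 1] -= 0.01
rel_err = np.abs(invariant(Pp) - pi) / pi   # entrywise relative error
```

For this chain the maximum entrywise relative error is about 50%, fifty times the size of the perturbation — the kind of variation in sensitivity the paper's bounds are designed to reveal.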
A Markov Chain Model for Contagion
Directory of Open Access Journals (Sweden)
Angelos Dassios
2014-11-01
We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).
Markov state models of protein misfolding
Sirur, Anshul; De Sancho, David; Best, Robert B.
2016-02-01
Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity.
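A minimal version of the estimation step — building an MSM transition matrix from a discretized trajectory while forcing a "sink" state to be absorbing, as the abstract describes for irreversible misfolding — might look like this (the three-state trajectory is an assumed toy example):

```python
import numpy as np

def msm_with_sink(dtraj, n_states, sink):
    """Estimate a Markov state model transition matrix from a discrete
    trajectory, forcing `sink` to be absorbing so that irreversible
    processes (on simulation time scales) are represented."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-1], dtraj[1:]):
        C[a, b] += 1
    C[sink, :] = 0.0
    C[sink, sink] = 1.0                         # absorbing sink state
    C[C.sum(axis=1) == 0, :] = 1.0 / n_states   # uniform rows for unvisited states
    return C / C.sum(axis=1, keepdims=True)

dtraj = [0, 0, 1, 0, 1, 2, 2, 2, 2]   # state 2 plays the misfolded sink
T = msm_with_sink(dtraj, 3, sink=2)
```

Real applications would first discretize the MD trajectory (here, via the coarse-grained contact maps the paper introduces) before counting transitions.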
Multivariate Markov chain modeling for stock markets
Maskawa, Jun-ichi
2003-06-01
We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into the sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to arbitrary size of portfolio is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with Gibbs measure assigned to each configuration of spin glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.
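The coding of price changes into up/down spins and the synchronization statistic can be illustrated with synthetic correlated returns (assumed toy data, not the paper's portfolios):

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy daily price changes for two stocks driven by a common factor
common = rng.standard_normal(1000)
r1 = common + 0.5 * rng.standard_normal(1000)
r2 = common + 0.5 * rng.standard_normal(1000)

# Code price changes into +/-1 spins according to their signs
s1 = np.sign(r1).astype(int)
s2 = np.sign(r2).astype(int)

# Synchronization: fraction of days the two spins point the same way
sync = np.mean(s1 == s2)
```

With the factor structure above the return correlation is 0.8, so the spins agree on roughly 80% of days — the kind of synchronization the mean-field model is built to capture.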
Anatomy Ontology Matching Using Markov Logic Networks
Directory of Open Access Journals (Sweden)
Chunhua Li
2016-01-01
The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching finds semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
Crossing over...Markov meets Mendel.
Mneimneh, Saad
2012-01-01
Chromosomal crossover is a biological mechanism to combine parental traits. It is perhaps the first mechanism ever taught in any introductory biology class. The formulation of crossover, and resulting recombination, came about 100 years after Mendel's famous experiments. To a great extent, this formulation is consistent with the basic genetic findings of Mendel. More importantly, it provides a mathematical insight for his two laws (and corrects them). From a mathematical perspective, and while it retains similarities, genetic recombination guarantees diversity so that we do not rapidly converge to the same being. It is this diversity that made the study of biology possible. In particular, the problem of genetic mapping and linkage-one of the first efforts towards a computational approach to biology-relies heavily on the mathematical foundation of crossover and recombination. Nevertheless, as students we often overlook the mathematics of these phenomena. Emphasizing the mathematical aspect of Mendel's laws through crossover and recombination will prepare the students to make an early realization that biology, in addition to being experimental, IS a computational science. This can serve as a first step towards a broader curricular transformation in teaching biological sciences. I will show that a simple and modern treatment of Mendel's laws using a Markov chain will make this step possible, and it will only require basic college-level probability and calculus. My personal teaching experience confirms that students WANT to know Markov chains because they hear about them from bioinformaticists all the time. This entire exposition is based on three homework problems that I designed for a course in computational biology. A typical reader is, therefore, an instructional staff member or a student in a computational field (e.g., computer science, mathematics, statistics, computational biology, bioinformatics). However, other students may easily follow by omitting the
Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi
2017-08-01
A system of aligned vertical fractures and fine horizontal shale layers combine to form equivalent orthorhombic media. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach to estimating weak anisotropy parameters and fracture weaknesses from observed seismic reflection amplitudes, based on azimuthal elastic impedance (EI). We first express the perturbation in the stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using the perturbation and scattering function, we derive the PP-wave reflection coefficient and azimuthal EI for the case of an interface separating two OA media. We then demonstrate an approach that first uses a model-constrained damped least-squares algorithm to estimate azimuthal EI from partially incidence-phase-angle-stacked seismic reflection data at different azimuths, and then extracts weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov chain Monte Carlo inversion method. In addition, a new procedure to construct a rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that the unknown parameters, including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses, can be estimated stably in the case of seismic data containing moderate noise, and that our approach yields a reasonable estimation of anisotropy in a fractured shale reservoir.
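The Bayesian Markov chain Monte Carlo step can be sketched with a minimal random-walk Metropolis sampler; a one-parameter Gaussian toy posterior stands in for the anisotropy-parameter inversion, and all numbers below are assumed:

```python
import math
import random

def metropolis(logpost, x0, n_samples, prop_sd=0.5, seed=7):
    """Random-walk Metropolis sampler for an unnormalized log-posterior
    of a single parameter (a stand-in for the Bayesian inversion step)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    out = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, prop_sd)       # symmetric proposal
        lq = logpost(y)
        if math.log(rng.random()) < lq - lp:  # Metropolis accept/reject
            x, lp = y, lq
        out.append(x)
    return out

# Toy target: posterior N(2, 0.5^2) for a hypothetical anisotropy parameter
samples = metropolis(lambda v: -0.5 * ((v - 2.0) / 0.5) ** 2, 0.0, 20000)
mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

In the paper's setting the log-posterior would instead combine the azimuthal-EI data misfit with the rock-physics prior, and the parameter vector would be multidimensional.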
Modeling Uncertainty of Directed Movement via Markov Chains
Directory of Open Access Journals (Sweden)
YIN Zhangcai
2015-10-01
Probabilistic time geography (PTG) extends classical time geography by assigning a probability to an agent being located at each accessible position, which provides a quantitative basis for finding the most likely location of an agent. PTG models based on the normal distribution or the Brownian bridge have been proposed in recent years; however, their variance is either unrelated to the agent's speed or diverges as the speed increases, so it is difficult for them to combine applicability with stability. In this paper, a new method is proposed to model PTG based on Markov chains. First, a bidirectionally conditioned Markov chain is modeled; its limit, when the moving speed is large enough, can be regarded as the Brownian bridge, and it therefore has the property of numerical stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, state space and transition matrix of the Markov chain according to the space-time position of the directed movement and the movement speed information, so that the Markov chain is related to the movement speed. Finally, by continuously calculating the probability distribution of the directed movement at any time with the Markov chains, the probability of an agent being located at each accessible position is obtained. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.
Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium
Kapfer, Sebastian C.; Krauth, Werner
2017-12-01
We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
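The lattice-gas TASEP dynamics mentioned above can be sketched as a simple simulation with random sequential updates on a ring (the lifted variant and the event-chain algorithm are not shown; the lattice size and sweep count are assumed):

```python
import random

def tasep_sweep(occ, rng):
    """One sweep of the totally asymmetric simple exclusion process on a
    ring: repeatedly pick a random site; a particle there hops right
    iff the target site is empty."""
    n = len(occ)
    for _ in range(n):
        i = rng.randrange(n)
        j = (i + 1) % n
        if occ[i] and not occ[j]:
            occ[i], occ[j] = 0, 1
    return occ

rng = random.Random(0)
occ = [1, 0] * 10            # half filling on a ring of 20 sites
for _ in range(100):
    tasep_sweep(occ, rng)
```

Unlike the reversible SEP, particles here only hop to the right, so detailed balance is broken while the uniform measure (at fixed particle number) remains stationary — the mechanism behind the faster mixing discussed in the abstract.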
Markov processes from K. Ito's perspective (AM-155)
Stroock, Daniel W
2003-01-01
Kiyosi Itô's greatest contribution to probability theory may be his introduction of stochastic differential equations to explain the Kolmogorov-Feller theory of Markov processes. Starting with the geometric ideas that guided him, this book gives an account of Itô's program. The modern theory of Markov processes was initiated by A. N. Kolmogorov. However, Kolmogorov's approach was too analytic to reveal the probabilistic foundations on which it rests. In particular, it hides the central role played by the simplest Markov processes: those with independent, identically distributed increments
Sampling rare fluctuations of discrete-time Markov chains
Whitelam, Stephen
2018-03-01
We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
Markov's theorem and algorithmically non-recognizable combinatorial manifolds
International Nuclear Information System (INIS)
Shtan'ko, M A
2004-01-01
We prove the theorem of Markov on the existence of an algorithmically non-recognizable combinatorial n-dimensional manifold for every n≥4. We construct for the first time a concrete manifold which is algorithmically non-recognizable. A strengthened form of Markov's theorem is proved using the combinatorial methods of regular neighbourhoods and handle theory. The proofs coincide for all n≥4. We use Borisov's group with unsolvable word problem. It has two generators and twelve relations. The use of this group forms the base for proving the strengthened form of Markov's theorem
Assenza, G; Mecarelli, O; Lanzone, J; Assenza, F; Tombini, M; Di Lazzaro, V; Pulitano, P
2018-05-01
Eslicarbazepine acetate (ESL) is a third-generation member of the dibenzazepine family approved in 2009 by the European Medicines Agency with the indication of adjunctive therapy in adult people with partial-onset seizures (PPOS). We aimed at assessing the ESL impact on seizure frequency and quality of life in PPOS with particular attention to sleepiness and depression. We evaluated 50 adult PPOS (>18 years; 48 ± 14 years old; 23 males) treated with adjunctive ESL for ≥2 months with a retrospective multicentric design. Clinical files of the last 2 years were reviewed checking for monthly seizure frequency, treatment retention rate, adverse drug reactions (ADRs), concomitant anti-epileptic drugs and behavioural scales for sleepiness (Stanford Sleepiness Scale, SSS, and Epworth Sleepiness Scale, ESS), depression (Beck Depression Inventory-II, BDI) and overall quality of life (QOLIE-31). At the end of 96 ± 28 days of ESL treatment, the mean seizure reduction was 56%; 60% of patients had seizure reduction above 50%, with 31% of the whole population becoming seizure free. We recorded 16 ADRs, including 4 cases of hyponatremia. Retention rate was 76%. Patients reported less sleepiness after ESL (SSS, p = 0.031; ESS, p = 0.0000002). Before ESL, 38% of patients had pathologic BDI scores, which normalized in most of them (73%) after ESL (BDI improvement, p = 0.000012). These scores resulted in an amelioration of quality of life (QOLIE-31, p = 0.000002). ESL is a safe and effective anti-epileptic drug in a real-life scenario, with an excellent behavioural profile for the overall quality of life and, in particular, for sleepiness and depression. Copyright © 2018 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Summary statistics for end-point conditioned continuous-time Markov chains
DEFF Research Database (Denmark)
Hobolth, Asger; Jensen, Jens Ledet
Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
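Of the three methods, uniformization is easy to illustrate: the matrix exponential P(t) = exp(Qt) is rewritten as a Poisson-weighted mixture of powers of a discrete-time stochastic matrix R = I + Q/μ. A minimal sketch with an assumed toy generator:

```python
import numpy as np

def transition_probabilities(Q, t, tol=1e-12):
    """Uniformization: P(t) = exp(Qt) = sum_k Pois(mu*t; k) * R^k with
    R = I + Q/mu, truncated once the remaining Poisson mass is below tol."""
    mu = max(-Q.diagonal())            # uniformization rate
    n = Q.shape[0]
    R = np.eye(n) + Q / mu             # row-stochastic, nonnegative
    P = np.zeros_like(Q, dtype=float)
    term = np.eye(n)                   # holds R^k, built iteratively
    w = np.exp(-mu * t)                # Poisson(mu*t) weight for k = 0
    k, acc = 0, 0.0
    while acc < 1.0 - tol:
        P += w * term
        acc += w
        k += 1
        w *= mu * t / k
        term = term @ R
    return P

Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
P = transition_probabilities(Q, t=0.7)
```

Because every truncated term is a probability-weighted stochastic matrix, the computation is numerically stable and the truncation error is controlled directly by the discarded Poisson mass.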
Segmentation of laser range radar images using hidden Markov field models
International Nuclear Information System (INIS)
Pucar, P.
1993-01-01
Segmentation of images in the context of model-based stochastic techniques is associated with high, often impractical, computational complexity. The objective of this thesis is to take the models used in model-based image processing, simplify them, and use them in suboptimal but computationally undemanding algorithms. Algorithms that are essentially one-dimensional, and their extensions to two dimensions, are given. The model used in this thesis is the well-known hidden Markov model. Estimation of the number of hidden states from observed data is a problem that is addressed. The state order estimation problem is of general interest and is not specifically connected to image processing. An investigation of three state order estimation techniques for hidden Markov models is given. 76 refs
The Independence of Markov's Principle in Type Theory
DEFF Research Database (Denmark)
Coquand, Thierry; Mannaa, Bassel
2017-01-01
In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.
Classification of customer lifetime value models using Markov chain
Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi
2017-10-01
A firm's potential future reward from a customer can be determined by customer lifetime value (CLV). There are several mathematical methods to calculate it; one uses a Markov chain stochastic model. Here, a customer is assumed to move through a set of states, with transitions between states satisfying the Markov property. Given the states of a customer and the relationships between states, we can build Markov models describing the customer's behaviour. In these Markov models, CLV is defined as a vector whose entries give the CLV of a customer starting in each state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states, where each development addresses weaknesses of the previous model. The final models can be expected to describe the real characteristics of a firm's customers.
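For the simplest two-state version of such a model, the CLV vector has a closed form: v = Σ_{t≥0} (dP)^t r = (I − dP)⁻¹ r, where P is the transition matrix, r the per-period profit vector and d the discount factor. A sketch with assumed illustrative numbers:

```python
import numpy as np

# Two-state customer model: state 0 = active, state 1 = inactive
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])        # transition matrix between states
r = np.array([100.0, 0.0])        # expected per-period profit in each state
d = 0.9                           # per-period discount factor

# CLV vector: v = sum_{t>=0} (d P)^t r = (I - d P)^{-1} r
v = np.linalg.solve(np.eye(2) - d * P, r)
# v[0] is the lifetime value of a customer currently in the active state
```

The fixed-point identity v = r + dPv (profit now plus discounted expected future value) is what the geometric series solves in one linear system.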
Markov Chain: A Predictive Model for Manpower Planning ...
African Journals Online (AJOL)
ADOWIE PERE
Keywords: Markov Chain, Transition Probability Matrix, Manpower Planning, Recruitment, Promotion, .... movement of the workforce in Jordan productivity .... Planning periods, with T being the horizon, the value of t represents a session.
Continuous-time Markov decision processes theory and applications
Guo, Xianping
2009-01-01
This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.
A simplified parsimonious higher order multivariate Markov chain model
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
A tridiagonal parsimonious higher order multivariate Markov chain model
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.
Optimisation of Hidden Markov Model using Baum–Welch algorithm ...
Indian Academy of Sciences (India)
The present work is a part of the development of Hidden Markov Model (HMM) based ... the Himalaya. In this work, HMMs have been developed for forecasting of maximum and minimum ..... data collection teams of Snow and Avalanche Study.
Markov chain: a predictive model for manpower planning | Ezugwu ...
African Journals Online (AJOL)
In respect of organizational management, numerous previous studies have ... and to forecast the academic staff structure of the university in the next five years. ... Keywords: Markov Chain, Transition Probability Matrix, Manpower Planning, ...
A Novel Method for Decoding Any High-Order Hidden Markov Model
Directory of Open Access Journals (Sweden)
Fei Ye
2014-01-01
This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
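The decoding step relies on the standard first-order Viterbi algorithm; a compact log-space version (with assumed toy parameters, not the paper's transformation code) looks like this:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Viterbi decoding for a first-order HMM: pi is the initial state
    distribution, A the state transition matrix, B the emission matrix,
    obs a sequence of observation indices. Works in log space to avoid
    underflow; returns the most likely state sequence."""
    n, T = len(pi), len(obs)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA   # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):        # backtrack through the pointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 2-state, 2-symbol HMM
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
states = viterbi(pi, A, B, [0, 0, 1, 1])
```

In the paper's scheme, the high-order model would first be converted to an enlarged first-order model (its states being tuples of original states), this routine run on it, and the resulting path projected back.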
Spectral analysis of multi-dimensional self-similar Markov processes
International Nuclear Information System (INIS)
Modarresi, N; Rezakhah, S
2010-01-01
In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete-time scale invariant (DT-SI) process X(·) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ R+} is also Markov in the wide sense and provide a discrete-time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, by the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T − 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.
Directory of Open Access Journals (Sweden)
Lun-Hui Xu
2013-01-01
Urban traffic self-adaptive control is dynamic and uncertain, so the states of the traffic environment are hard to observe. An efficient agent controlling a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of previous works on this approach, each agent needed perfectly observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model for the TSCAs' interaction is built on a nonzero-sum Markov game, which is applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting.
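The single-agent Q-learning update that the multiagent method builds on can be sketched on an assumed toy chain MDP (the multiagent extension replaces the per-state maximum with an equilibrium over joint actions):

```python
import random

def q_learning(n_states, n_actions, step, episodes=500, alpha=0.1,
               gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.
    step(s, a) -> (next_state, reward, done) defines the environment."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(n_actions)                    # explore
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])  # exploit
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])   # Q-learning update
            s = s2
    return Q

# Toy chain: states 0..3; action 1 moves right, action 0 stays;
# reaching state 3 yields reward 1 and ends the episode.
def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else s
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

Q = q_learning(4, 2, step)
```

After training, the greedy policy at each non-terminal state is to move right, since the discounted value of advancing dominates the value of staying.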
International Nuclear Information System (INIS)
Hemi, Hanane; Ghouili, Jamel; Cheriti, Ahmed
2015-01-01
Highlights: • A combination of a Markov chain and an optimal control problem solved by Pontryagin's Minimum Principle is presented. • This strategy is applied to a hybrid electric vehicle dynamic model. • The hydrogen consumption is analyzed for two different vehicle masses and drive cycles. • The supercapacitor and fuel cell behavior is analyzed at high or sudden required power. - Abstract: In this article, a real-time optimal control strategy based on Pontryagin's Minimum Principle (PMP) combined with the Markov chain approach is used for a fuel cell/supercapacitor electric vehicle. In real time, at high power and at high speed, two phenomena are observed. The first occurs at higher required power, and the second at sudden power demand. To avoid these situations, a Markov chain model is proposed to predict the future power demand during a driving cycle. The optimal control problem is formulated as an equivalent consumption minimization strategy (ECMS) and solved using Pontryagin's Minimum Principle. A Markov chain model is added as a separate block for the prediction of required power. This approach and the whole system are modeled and implemented in MATLAB/Simulink. The model with the Markov chain block is compared against the model without it. The results presented demonstrate the importance of the Markov chain block added to the model.
Markov Chain Models for the Stochastic Modeling of Pitting Corrosion
Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.
2013-01-01
The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...
On almost-periodic points of a topological Markov chain
International Nuclear Information System (INIS)
Bogatyi, Semeon A; Redkozubov, Vadim V
2012-01-01
We prove that a transitive topological Markov chain has almost-periodic points of all D-periods. Moreover, every D-period is realized by continuously many distinct minimal sets. We give a simple constructive proof of the result which asserts that any transitive topological Markov chain has periodic points of almost all periods, and study the structure of the finite set of positive integers that are not periods.
On mean reward variance in semi-Markov processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2005-01-01
Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005
Hidden Markov models in automatic speech recognition
Wrzoskowicz, Adam
1993-11-01
This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMM's designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
Stability and perturbations of countable Markov maps
Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas
2018-04-01
Let T and T_ε, ε > 0, be countable Markov maps such that the branches of T_ε converge pointwise to the branches of T as ε → 0. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent, etc.) of the topological conjugacy between T_ε and T when ε → 0. This is a well-understood problem for maps with finitely many branches, and the quantities are stable for small ε, that is, they converge to their expected values as ε → 0. For the infinite branch case their stability might be expected to fail, but we prove that even in the infinite branch case the quantities are stable under some natural regularity assumptions on T_ε and T (under which, for instance, the Hölder exponent of the conjugacy fails to be stable). Our assumptions apply, for example, in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter α. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite branch case. Dedicated to the memory of Bernd O Stratmann
Lectures from Markov processes to Brownian motion
Chung, Kai Lai
1982-01-01
This book evolved from several stacks of lecture notes written over a decade and given in classes at slightly varying levels. In transforming the overlapping material into a book, I aimed at presenting some of the best features of the subject with a minimum of prerequisites and technicalities. (Needless to say, one man's technicality is another's professionalism.) But a text frozen in print does not allow for the latitude of the classroom; and the tendency to expand becomes harder to curb without the constraints of time and audience. The result is that this volume contains more topics and details than I had intended, but I hope the forest is still visible with the trees. The book begins at the beginning with the Markov property, followed quickly by the introduction of optional times and martingales. These three topics in the discrete parameter setting are fully discussed in my book A Course In Probability Theory (second edition, Academic Press, 1974). The latter will be referred to throughout this book ...
Markov branching in the vertex splitting model
International Nuclear Information System (INIS)
Stefánsson, Sigurdur Örn
2012-01-01
We study a special case of the vertex splitting model which is a recent model of randomly growing trees. For any finite maximum vertex degree D, we find a one parameter model, with parameter α element of [0,1] which has a so-called Markov branching property. When D=∞ we find a two parameter model with an additional parameter γ element of [0,1] which also has this feature. In the case D = 3, the model bears resemblance to Ford's α-model of phylogenetic trees and when D=∞ it is similar to its generalization, the αγ-model. For α = 0, the model reduces to the well known model of preferential attachment. In the case α > 0, we prove convergence of the finite volume probability measures, generated by the growth rules, to a measure on infinite trees which is concentrated on the set of trees with a single spine. We show that the annealed Hausdorff dimension with respect to the infinite volume measure is 1/α. When γ = 0 the model reduces to a model of growing caterpillar graphs in which case we prove that the Hausdorff dimension is almost surely 1/α and that the spectral dimension is almost surely 2/(1 + α). We comment briefly on the distribution of vertex degrees and correlations between degrees of neighbouring vertices
Bayesian posterior distributions without Markov chains.
Cole, Stephen R; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B
2012-03-01
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
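The rejection-sampling idea the authors use as a teaching bridge can be sketched for a binomial likelihood under a flat prior: draw θ from the prior and accept it with probability L(θ)/L(θ_MLE). The counts below are illustrative stand-ins, not the study's actual case-control data:

```python
import math
import random

def rejection_sample_posterior(k, n, draws=5000):
    """Rejection sampler for a binomial likelihood with a flat prior on theta:
    draw theta ~ U(0,1), accept with probability L(theta) / L(theta_MLE)."""
    theta_hat = k / n                           # MLE maximises the likelihood
    log_lmax = k * math.log(theta_hat) + (n - k) * math.log(1 - theta_hat)
    samples = []
    while len(samples) < draws:
        t = random.random()
        if t <= 0.0 or t >= 1.0:
            continue
        log_l = k * math.log(t) + (n - k) * math.log(1 - t)
        if random.random() < math.exp(log_l - log_lmax):
            samples.append(t)
    return samples

random.seed(1)
# Illustrative counts only: 36 events in 234 trials.
post = rejection_sample_posterior(k=36, n=234)
post_mean = sum(post) / len(post)
```

With a flat prior the exact posterior is Beta(k+1, n−k+1), so the sample mean should sit near (k+1)/(n+2); no Markov chain, tuning, or convergence diagnostics are needed, which is the transparency the abstract emphasizes.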
Markov source model for printed music decoding
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
Bennett, Casey C; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal
Prognostics for Steam Generator Tube Rupture using Markov Chain model
International Nuclear Information System (INIS)
Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin
2016-01-01
This paper describes a prognostics method for evaluating and forecasting the ageing effect and demonstrates the prognostics procedure for the Steam Generator Tube Rupture (SGTR) accident. The authors propose the data-driven method known as MCMC (Markov Chain Monte Carlo), which is preferred to physical-model methods in terms of flexibility and availability. Degradation data are represented as the growth of burst probability over time. A Markov chain model operates on transition probabilities between states, and the states must be discrete variables. Therefore, the burst probability, a continuous variable, has to be discretized before the Markov chain model can be applied to the degradation data. The Markov chain model, which is one of the prognostics methods, was described, and a pilot demonstration for an SGTR accident was performed as a case study. The Markov chain model is strong since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there must be loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be reflected sufficiently in the prediction. This should be noted as the limitation of discrete models. Future work will study other prognostics methods, such as the GPM (General Path Model), which is also data-driven, as well as the particle filter, which belongs to the physical-model methods, and will conduct a comparative analysis.
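The discretization step the abstract flags as lossy can be sketched directly: bin a continuous degradation trace into states, count one-step transitions, and propagate the state distribution forward. The trace and bin edges below are invented for illustration, not SGTR burst-probability data:

```python
import numpy as np

def transition_matrix(states, n):
    """Estimate a Markov transition matrix by counting one-step transitions."""
    P = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    # Unvisited states get a uniform row so every row stays stochastic.
    return np.divide(P, rows, out=np.full_like(P, 1.0 / n), where=rows > 0)

# Invented monotone degradation trace standing in for burst probability.
trace = np.linspace(0.0, 1.0, 50) ** 2
states = np.digitize(trace, [0.25, 0.5, 0.75])    # discretize into 4 states
P = transition_matrix(states, 4)

p0 = np.eye(4)[states[0]]                      # start in the first observed state
forecast = p0 @ np.linalg.matrix_power(P, 10)  # state distribution 10 steps ahead
```

The information loss the abstract warns about is visible here: everything the forecast knows about the continuous trace is the bin index sequence, so finer details of the degradation path are discarded at the `digitize` step.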
Probability distributions for Markov chain based quantum walks
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart-quantum walk on the line driven by the Grover coin discussed in literature. The paper closes with an outlook on possible future directions.
Hidden Markov Model-based Pedestrian Navigation System using MEMS Inertial Sensors
Directory of Open Access Journals (Sweden)
Zhang Yingjun
2015-02-01
Full Text Available In this paper, a foot-mounted pedestrian navigation system using MEMS inertial sensors is implemented, where the zero-velocity detection is abstracted into a hidden Markov model with 4 states and 15 observations. Moreover, an observations extraction algorithm has been developed to extract observations from sensor outputs; sample sets are used to train and optimize the model parameters by the Baum-Welch algorithm. Finally, a navigation system is developed, and the performance of the pedestrian navigation system is evaluated using indoor and outdoor field tests, and the results show that position error is less than 3% of total distance travelled.
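Decoding the most likely still/moving sequence from such an HMM is typically done with the Viterbi algorithm. A toy two-state, two-symbol version can be sketched as follows; all probabilities are invented, and the model is far smaller than the paper's 4-state, 15-observation detector:

```python
import math

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path, computed in the log domain."""
    n = len(pi)
    dp = [[math.log(pi[s]) + math.log(B[s][obs[0]]) for s in range(n)]]
    back = []
    for o in obs[1:]:
        prev = dp[-1]
        row, ptr = [], []
        for s in range(n):
            # Best predecessor for state s at this time step.
            k = max(range(n), key=lambda r: prev[r] + math.log(A[r][s]))
            row.append(prev[k] + math.log(A[k][s]) + math.log(B[s][o]))
            ptr.append(k)
        dp.append(row)
        back.append(ptr)
    path = [max(range(n), key=lambda s: dp[-1][s])]
    for ptr in reversed(back):                 # trace the path backwards
        path.append(ptr[path[-1]])
    return path[::-1]

# Invented toy numbers: states 0 = moving, 1 = still;
# observations 0 = high acceleration variance, 1 = low.
A = [[0.9, 0.1], [0.2, 0.8]]          # state transition probabilities
B = [[0.8, 0.2], [0.1, 0.9]]          # observation probabilities
path = viterbi([0, 0, 1, 1, 1, 0], [0.5, 0.5], A, B)
```

The decoded path follows the observations but smooths over them via the transition model, which is exactly why an HMM-based zero-velocity detector is more robust than thresholding each sensor sample independently.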
Basic problems solving for two-dimensional discrete 3 × 4 order hidden Markov model
International Nuclear Information System (INIS)
Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang
2016-01-01
A novel model is proposed to overcome the shortcomings of the classical hypothesis of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model and studies its three basic problems: probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, the structure of the proposed model carries more statistical characteristics, so the proposed model can, in theory, more accurately describe some practical problems.
Characterization of the rat exploratory behavior in the elevated plus-maze with Markov chains.
Tejada, Julián; Bosco, Geraldine G; Morato, Silvio; Roque, Antonio C
2010-11-30
The elevated plus-maze is an animal model of anxiety used to study the effect of different drugs on the behavior of the animal. It consists of a plus-shaped maze with two open and two closed arms elevated 50cm from the floor. The standard measures used to characterize exploratory behavior in the elevated plus-maze are the time spent and the number of entries in the open arms. In this work, we use Markov chains to characterize the exploratory behavior of the rat in the elevated plus-maze under three different conditions: normal and under the effects of anxiogenic and anxiolytic drugs. The spatial structure of the elevated plus-maze is divided into squares, which are associated with states of a Markov chain. By counting the frequencies of transitions between states during 5-min sessions in the elevated plus-maze, we constructed stochastic matrices for the three conditions studied. The stochastic matrices show specific patterns, which correspond to the observed behaviors of the rat under the three different conditions. For the control group, the stochastic matrix shows a clear preference for places in the closed arms. This preference is enhanced for the anxiogenic group. For the anxiolytic group, the stochastic matrix shows a pattern similar to a random walk. Our results suggest that Markov chains can be used together with the standard measures to characterize the rat behavior in the elevated plus-maze.
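The "preference" read off from such a stochastic matrix is its stationary distribution. A sketch with a hand-made three-state matrix (closed arms / centre / open arms); the numbers are invented to mimic the control-group pattern and are not the paper's data:

```python
import numpy as np

# Invented 3-state coding of the maze: 0 = closed arms, 1 = centre, 2 = open arms.
# Transition probabilities are hand-made to mimic a closed-arm preference.
P = np.array([[0.8, 0.2, 0.0],
              [0.5, 0.2, 0.3],
              [0.1, 0.5, 0.4]])

dist = np.full(3, 1.0 / 3.0)
for _ in range(200):          # power iteration converges to the stationary law
    dist = dist @ P
```

The stationary mass on the closed arms exceeds that on the open arms, which is the matrix-level analogue of the time-in-open-arms measure; an anxiolytic-like matrix with near-uniform rows would instead give a near-uniform stationary distribution.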
Long-range memory and non-Markov statistical effects in human sensorimotor coordination
M. Yulmetyev, Renat; Emelyanova, Natalya; Hänggi, Peter; Gafarov, Fail; Prokhorov, Alexander
2002-12-01
In this paper, the non-Markov statistical processes and long-range memory effects in human sensorimotor coordination are investigated. The theoretical basis of this study is the statistical theory of non-stationary discrete non-Markov processes in complex systems (Phys. Rev. E 62, 6178 (2000)). The human sensorimotor coordination was experimentally studied by means of standard dynamical tapping test on the group of 32 young peoples with tap numbers up to 400. This test was carried out separately for the right and the left hand according to the degree of domination of each brain hemisphere. The numerical analysis of the experimental results was made with the help of power spectra of the initial time correlation function, the memory functions of low orders and the first three points of the statistical spectrum of non-Markovity parameter. Our observations demonstrate, that with the regard to results of the standard dynamic tapping-test it is possible to divide all examinees into five different dynamic types. We have introduced the conflict coefficient to estimate quantitatively the order-disorder effects underlying life systems. The last one reflects the existence of disbalance between the nervous and the motor human coordination. The suggested classification of the neurophysiological activity represents the dynamic generalization of the well-known neuropsychological types and provides the new approach in a modern neuropsychology.
Markov chain modeling of evolution of strains in reinforced concrete flexural beams
Directory of Open Access Journals (Sweden)
Anoop, M. B.
2012-09-01
Full Text Available From the analysis of experimentally observed variations in surface strains with loading in reinforced concrete beams, it is noted that there is a need to consider the evolution of strains (with loading) as a stochastic process. Use of Markov Chains for modeling stochastic evolution of strains with loading in reinforced concrete flexural beams is studied in this paper. A simple, yet practically useful, bi-level homogeneous Gaussian Markov Chain (BLHGMC) model is proposed for determining the state of strain in reinforced concrete beams. The BLHGMC model will be useful for predicting the behavior/response of reinforced concrete beams, leading to more rational design.
Hidden Semi-Markov Models for Predictive Maintenance
Directory of Open Access Journals (Sweden)
Francesco Cartella
2015-01-01
Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs with (i no constraints on the state duration density function and (ii being applied to continuous or discrete observation. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where in real time the Remaining Useful Lifetime (RUL of the machine is calculated.
Entropy-based generating Markov partitions for complex systems
Rubido, Nicolás; Grebogi, Celso; Baptista, Murilo S.
2018-03-01
Finding the correct encoding for a generic dynamical system's trajectory is a complicated task: the symbolic sequence needs to preserve the invariant properties from the system's trajectory. In theory, the solution to this problem is found when a Generating Markov Partition (GMP) is obtained, which is only defined once the unstable and stable manifolds are known with infinite precision and for all times. However, these manifolds usually form highly convoluted Euclidean sets, are a priori unknown, and, as it happens in any real-world experiment, measurements are made with finite resolution and over a finite time-span. The task gets even more complicated if the system is a network composed of interacting dynamical units, namely, a high-dimensional complex system. Here, we tackle this task and solve it by defining a method to approximately construct GMPs for any complex system's finite-resolution and finite-time trajectory. We critically test our method on networks of coupled maps, encoding their trajectories into symbolic sequences. We show that these sequences are optimal because they minimise the information loss and also any spurious information added. Consequently, our method allows us to approximately calculate the invariant probability measures of complex systems from the observed data. Thus, we can efficiently define complexity measures that are applicable to a wide range of complex phenomena, such as the characterisation of brain activity from electroencephalogram signals measured at different brain regions or the characterisation of climate variability from temperature anomalies measured at different Earth regions.
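For the textbook doubling map x ↦ 2x mod 1, the generating partition is known exactly, {[0, 1/2), [1/2, 1)}, and the symbolic itinerary is simply the binary expansion of the initial point. This is the idealized situation the approximate-GMP method generalizes; a sketch:

```python
def itinerary(x0, steps):
    """Symbolic encoding of the doubling map x -> 2x mod 1 under its known
    generating partition {[0, 1/2), [1/2, 1)}."""
    symbols, x = [], x0
    for _ in range(steps):
        symbols.append(0 if x < 0.5 else 1)   # which partition cell holds x
        x = (2.0 * x) % 1.0                   # apply the map
    return symbols
```

Because the partition is generating, the infinite symbol sequence determines the initial condition uniquely (here it is the binary expansion, e.g. 0.375 = 0.011 in binary); for finite-resolution data from a generic complex system no such exact partition is available, which is the gap the abstract's method addresses.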
Asteroid mass estimation using Markov-chain Monte Carlo
Siltala, Lauri; Granvik, Mikael
2017-11-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
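The MCMC machinery involved can be illustrated with a generic random-walk Metropolis sampler on a toy one-dimensional "mass" posterior; the Gaussian target below is an invented stand-in, not the authors' 13-dimensional orbital model:

```python
import math
import random

def metropolis(log_post, x0, steps=20000, prop_sd=0.5):
    """Random-walk Metropolis sampler for a one-dimensional parameter."""
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(steps):
        y = x + random.gauss(0.0, prop_sd)        # symmetric proposal
        lpy = log_post(y)
        # Accept with probability min(1, post(y)/post(x)).
        if math.log(random.random() + 1e-300) < lpy - lp:
            x, lp = y, lpy
        chain.append(x)
    return chain

random.seed(3)
# Invented toy posterior: Gaussian with mean 1.0 and sd 0.2, a stand-in
# for a (suitably scaled) single-mass marginal.
chain = metropolis(lambda m: -0.5 * ((m - 1.0) / 0.2) ** 2, x0=0.0)
post_mean = sum(chain[5000:]) / len(chain[5000:])
```

Unlike the linearized methods the abstract criticizes, the sampled chain characterizes the full shape of the posterior, so skewness or heavy tails in the uncertainty are retained rather than collapsed into a symmetric error bar.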
Simulating reservoir lithologies by an actively conditioned Markov chain model
Feng, Runhai; Luthi, Stefan M.; Gisolf, Dries
2018-06-01
The coupled Markov chain model can be used to simulate reservoir lithologies between wells, by conditioning them on the observed data in the cored wells. However, with this method, only the state at the same depth as the current cell is going to be used for conditioning, which may be a problem if the geological layers are dipping. This will cause the simulated lithological layers to be broken or to become discontinuous across the reservoir. In order to address this problem, an actively conditioned process is proposed here, in which a tolerance angle is predefined. The states contained in the region constrained by the tolerance angle will be employed for conditioning in the horizontal chain first, after which a coupling concept with the vertical chain is implemented. In order to use the same horizontal transition matrix for different future states, the tolerance angle has to be small. This allows the method to work in reservoirs without complex structures caused by depositional processes or tectonic deformations. Directional artefacts in the modeling process are avoided through a careful choice of the simulation path. The tolerance angle and dipping direction of the strata can be obtained from a correlation between wells, or from seismic data, which are available in most hydrocarbon reservoirs, either by interpretation or by inversion that can also assist the construction of a horizontal probability matrix.
A hidden markov model derived structural alphabet for proteins.
Camproux, A C; Gautier, R; Tufféry, P
2004-06-04
Understanding and predicting protein structures depends on the complexity and the accuracy of the models used to represent them. We have set up a hidden Markov model that discretizes protein backbone conformation as series of overlapping fragments (states) of four residues in length. This approach learns simultaneously the geometry of the states and their connections. We obtain, using a statistical criterion, an optimal systematic decomposition of the conformational variability of the protein peptidic chain into 27 states with strong connection logic. This result is stable over different protein sets. Our model fits well with previous knowledge of protein architecture organisation and seems able to capture some subtle details of protein organisation, such as helix sub-level organisation schemes. Taking into account the dependence between the states results in a description of local protein structure of low complexity. On average, the model makes use of only 8.3 states among 27 to describe each position of a protein structure. Although we use short fragments, the learning process on entire protein conformations captures the logic of the assembly on a larger scale. Using such a model, the structure of proteins can be reconstructed with an average accuracy close to 1.1 Å root-mean-square deviation and for a complexity of only 3. Finally, we also observe that sequence specificity increases with the number of states of the structural alphabet. Such models can constitute a very relevant approach to the analysis of protein architecture, in particular for protein structure prediction.
Hamer, Hajo; Baulac, Michel; McMurray, Rob; Kockelmann, Edgar
2016-01-01
Zonisamide is licensed for adjunctive therapy for partial-onset seizures with or without secondary generalisation in patients 6 years and older and as monotherapy for the treatment of partial seizures in adult patients with newly diagnosed epilepsy, and shows a favourable pharmacokinetic profile with low interaction potential with other drugs. The aim of the present study was to gather real-life data on retention and modalities of zonisamide use when administered as the only add-on treatment to a current AED monotherapy in adult patients with partial-onset seizures. This multicenter observational study was performed in 4 European countries and comprised three visits: baseline, and after 3 and 6 months. Data on patients' retention, reported efficacy, tolerability and safety, and quality of life were collected. Of 100 included patients, 93 could be evaluated. After 6 months, the retention rate of zonisamide add-on therapy was 82.8%. At this time, a reduction of seizure frequency of at least 50% was observed in 79.7% of patients, with 43.6% reporting seizure freedom over the last 3 months of the study period. Adverse events were reported by 19.4% of patients, with fatigue, agitation, dizziness, and headache being most frequent. Approximately 25% of patients were older than 60 years, many of whom suffered from late-onset epilepsy. Compared to younger patients, these patients showed considerable differences with regard to their antiepileptic drug regimen at baseline, and slightly higher responder and retention rates at 6 months. Despite limitations due to the non-interventional open-label design and the low sample size, the results show that zonisamide as the only add-on therapy is well retained, indicating effectiveness in the majority of patients under real-life conditions.
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
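The abstract's central claim, that an irreversible linear chain becomes more reliable as states are added, can be checked numerically: the completion time of an n-state irreversible chain with equal exponential holding rates is Erlang-distributed, with coefficient of variation 1/sqrt(n). A minimal Monte Carlo sketch with illustrative parameters:

```python
import random

def completion_cv(n_states, rate=1.0, trials=20000, seed=0):
    """Monte Carlo coefficient of variation (sd/mean) of the total time for a
    signal generated by an irreversible linear chain of n_states, each held
    for an exponential time at the given rate (illustrative parameters)."""
    rng = random.Random(seed)
    times = [sum(rng.expovariate(rate) for _ in range(n_states))
             for _ in range(trials)]
    mean = sum(times) / trials
    var = sum((t - mean) ** 2 for t in times) / trials
    return var ** 0.5 / mean

# Analytically the CV of an n-stage chain is 1/sqrt(n): longer chains are
# more reliable, at the energy cost of more irreversible transitions.
```

A 16-state chain should show a CV near 0.25, versus 1.0 for a single exponential stage.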
Snyder, Morgan E.; Waldron, John W. F.
2018-03-01
The deformation history of the Upper Paleozoic Maritimes Basin, Atlantic Canada, can be partially unraveled by examining fractures (joints, veins, and faults) that are well exposed on the shorelines of the macrotidal Bay of Fundy, in subsurface core, and on image logs. Data were collected from coastal outcrops and well core across the Windsor-Kennetcook subbasin, a subbasin in the Maritimes Basin, using the circular scan-line and vertical scan-line methods in outcrop, and FMI image log analysis of core. We use cross-cutting and abutting relationships between fractures to understand the relative timing of fracturing, followed by a statistical test (Markov chain analysis) to separate groups of fractures. This analysis, previously used in sedimentology, was modified to statistically test the randomness of fracture timing relationships. The results of the Markov chain analysis suggest that fracture initiation can be attributed to movement along the Minas Fault Zone, an E-W fault system that bounds the Windsor-Kennetcook subbasin to the north. Four sets of fractures are related to dextral strike slip along the Minas Fault Zone in the late Paleozoic, and four sets are related to sinistral reactivation of the same boundary in the Mesozoic.
Markov Chain Models for the Stochastic Modeling of Pitting Corrosion
Directory of Open Access Journals (Sweden)
A. Valor
2013-01-01
Full Text Available The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
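The linear growth (pure birth) model described above can be illustrated by integrating Kolmogorov's forward equations numerically and comparing against the known closed form for a Yule process started in state 1, P_1n(t) = e^(-lam*t) * (1 - e^(-lam*t))^(n-1). The rate, truncation level, and step size below are illustrative, not the paper's calibrated values:

```python
import math

def yule_pn(t, lam=0.5, nmax=60, dt=1e-4):
    """Euler-integrate Kolmogorov's forward equations for a linear-growth
    (pure birth) chain with birth rate n*lam, started in state 1.
    nmax truncates the (infinite) state space; parameters are illustrative."""
    p = [0.0] * (nmax + 1)
    p[1] = 1.0
    for _ in range(int(t / dt)):
        new = p[:]
        for n in range(1, nmax + 1):
            gain = (n - 1) * lam * p[n - 1] if n > 1 else 0.0
            loss = n * lam * p[n]
            new[n] += dt * (gain - loss)
        p = new
    return p

def yule_exact(t, n, lam=0.5):
    """Textbook closed-form P_1n(t) for the Yule process."""
    return math.exp(-lam * t) * (1 - math.exp(-lam * t)) ** (n - 1)
```

The numerical and closed-form transition probabilities should agree to well within the Euler step error.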
Prediction of inspection intervals using the Markov analysis
International Nuclear Information System (INIS)
Rea, R.; Arellano, J.
2005-01-01
To cope with the unmanageable number of Markov states in systems with a large number of components, a modification of the Markov method, termed truncated Markov analysis, is proposed, in which the dependence among component faults is assumed to be negligible. With this assumption the number of states grows linearly (not exponentially) with the number of components of the system, simplifying the analysis vastly. As an example, the proposed method was applied to the HPCS system of the CLV, considering its 18 main components. Each component is assumed to take one of three states: operational, with a hidden fault, or with a revealed fault. Additionally, the configuration of the HPCS system is taken into account by means of a dependability block diagram to estimate its unavailability at the system level. The results of the model proposed here are compared with other methods and approaches used to simplify Markov analysis. A modification of the inspection intervals of three components of the HPCS system is also proposed, based on the Markov model developed and on the maximum time allowed by the ASME code (NUREG-1482) for inspecting components of standby systems in nuclear power plants. (Author)
Benchmarking of a Markov multizone model of contaminant transport.
Jones, Rachael M; Nicas, Mark
2014-10-01
A Markov chain model previously applied to the simulation of advection and diffusion process of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with grid length aspect when turbulence is incorporated by way of the turbulent diffusion coefficient, and advection is present. However, the magnitude of prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
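The Markov approach to contaminant transport can be sketched in one dimension: cell-to-cell transition probabilities encode advection and diffusion, and repeated application of the chain propagates the concentration field. The coefficients below are invented for illustration and are not the benchmarked model's calibrated values:

```python
def transport_step(conc, adv=0.2, diff=0.1):
    """One step of a 1-D Markov-chain contaminant model: each cell keeps
    1 - adv - 2*diff of its mass, advects `adv` downwind, and diffuses `diff`
    to each neighbour; walls reflect. Coefficients are illustrative only."""
    n = len(conc)
    out = [0.0] * n
    for i, c in enumerate(conc):
        out[i] += (1.0 - adv - 2.0 * diff) * c
        out[min(i + 1, n - 1)] += adv * c    # advection toward the far wall
        out[max(i - 1, 0)] += diff * c       # diffusion upwind
        out[min(i + 1, n - 1)] += diff * c   # diffusion downwind
    return out

conc = [0.0] * 10
conc[0] = 1.0                  # unit release in the first cell
for _ in range(50):
    conc = transport_step(conc)
```

Because every row of the implicit transition matrix sums to one, total mass is conserved exactly at each step.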
Zilli, Eric A; Hasselmo, Michael E
2008-07-23
Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.
Asteroid mass estimation with Markov-chain Monte Carlo
Siltala, Lauri; Granvik, Mikael
2017-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
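The core idea above, sampling the posterior without assuming its shape, is random-walk Metropolis. A minimal one-parameter sketch with a made-up, non-Gaussian toy posterior (the authors' actual inverse problem is 13-dimensional or larger):

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=1):
    """Random-walk Metropolis: samples a posterior without assuming any
    parametric shape for it (cf. the linearised least-squares alternative)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):  # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Hypothetical non-Gaussian "mass" log-posterior peaked at 3.0 (toy stand-in).
logpost = lambda m: -abs(m - 3.0) - 0.5 * (m - 3.0) ** 2
chain = metropolis(logpost, x0=0.0, step=0.8, n=20000)
post = sorted(chain[5000:])                 # discard burn-in
est = post[len(post) // 2]                  # posterior median
```

The chain's empirical median recovers the peak of the toy posterior without any Gaussian assumption.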
Lerner, Vladimir S.
2012-01-01
The impulses cutting the entropy functional (EF), measured on the trajectories of a Markov diffusion process, integrate an information path functional (IPF) composed of the discrete information Bits extracted from the observed random process. Each cut brings a memory of the cut entropy, which provides both a reduction of the process entropy and a discrete unit of the cut entropy, a Bit. Consequently, information is memorized entropy, cut from random observations of interacting processes. The origin of information ...
Influence of credit scoring on the dynamics of Markov chain
Galina, Timofeeva
2015-11-01
Markov processes are widely used to model the dynamics of a credit portfolio and to forecast portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares be described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management intended to improve loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control affecting the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
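The share dynamics described above reduce to propagating a row vector of group shares through a transition matrix each period; credit scoring acts as a control that reshapes the matrix rows. A sketch with a hypothetical three-group portfolio (numbers invented for illustration):

```python
def step(shares, P):
    """One period of portfolio dynamics: new shares x' = x P."""
    n = len(P)
    return [sum(shares[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical transition matrix for three groups: current, delinquent,
# defaulted (absorbing). A tighter credit-scoring control would reshape row 0.
P = [[0.90, 0.08, 0.02],
     [0.30, 0.50, 0.20],
     [0.00, 0.00, 1.00]]

x = [1.0, 0.0, 0.0]        # all loans start current
for _ in range(12):        # one year of monthly transitions
    x = step(x, P)
```

After twelve periods `x` gives the forecast quality mix of the portfolio; the default share grows monotonically since that state is absorbing.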
Markov chain solution of photon multiple scattering through turbid slabs.
Lin, Ying; Northrop, William F; Li, Xuesong
2016-11-14
This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with commonly used Monte Carlo simulation for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in fields such as medical diagnosis, spray analysis, and atmospheric science.
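The matrix-multiplication idea can be sketched directly: discretize the scattering angle into bins, row-normalize a phase function into a transition matrix, and multiply to obtain the angular distribution after a given number of scattering orders. The kernel below is a simple forward-peaked stand-in, not an actual Mie phase function:

```python
def scatter_distribution(kernel, nbins, orders):
    """Angular distribution after `orders` scattering events, via repeated
    multiplication with a row-normalised discretised phase function.
    kernel(i, j) is the unnormalised weight for bin i -> bin j."""
    M = []
    for i in range(nbins):
        row = [kernel(i, j) for j in range(nbins)]
        s = sum(row)
        M.append([r / s for r in row])          # row-stochastic matrix
    p = [1.0 if i == 0 else 0.0 for i in range(nbins)]  # collimated input
    for _ in range(orders):
        p = [sum(p[i] * M[i][j] for i in range(nbins)) for j in range(nbins)]
    return p

# A simple forward-peaked kernel standing in for a Mie phase function:
p = scatter_distribution(lambda i, j: 1.0 / (1 + abs(i - j)), nbins=18, orders=5)
```

After five scattering orders the distribution has spread but remains skewed toward the initial bin, as expected for anisotropic scattering.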
The spectral method and ergodic theorems for general Markov chains
International Nuclear Information System (INIS)
Nagaev, S V
2015-01-01
We study the ergodic properties of Markov chains with an arbitrary state space and prove a geometric ergodic theorem. The method of the proof is new: it may be described as an operator method. Our main result is an ergodic theorem for Harris-Markov chains in the case when the return time to some fixed set has finite expectation. Our conditions for the transition function are more general than those used by Athreya-Ney and Nummelin. Unlike them, we impose restrictions not on the original transition function but on the transition function of an embedded Markov chain constructed from the return times to the fixed set mentioned above. The proof uses the spectral theory of linear operators on a Banach space
An Application of Graph Theory in Markov Chains Reliability Analysis
Directory of Open Access Journals (Sweden)
Pavel Skalny
2014-01-01
Full Text Available The paper presents a reliability analysis which was realized for an industrial company. The aim of the paper is to present the usage of discrete-time Markov chains and the flow-in-network approach. Discrete Markov chains, a well-known method of stochastic modelling, describe the issue. The method is suitable for many systems occurring in practice where we can easily distinguish various numbers of states. Markov chains are used to describe transitions between the states of the process. The industrial process is described as a graph network. The maximal flow in the network corresponds to the production. The Ford-Fulkerson algorithm is used to quantify the production for each state. The combination of both methods is utilized to quantify the expected value of the amount of manufactured products for the given time period.
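The flow computation the paper relies on can be sketched with the Edmonds-Karp form of Ford-Fulkerson on a toy production network (capacities invented for illustration; each Markov state of the plant would yield its own capacity matrix):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) on a capacity matrix."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        v, aug = t, float("inf")              # find the bottleneck capacity
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                         # push flow along the path
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug

# Toy network: source 0 feeds two machines (1, 2) which feed the sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
production = max_flow(cap, 0, 3)
```

Here the production for this state is limited by the machine-to-sink capacities, giving a maximum flow of 4.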
Prediction of pipeline corrosion rate based on grey Markov models
International Nuclear Information System (INIS)
Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin
2009-01-01
Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model with the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
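The grey half of such a grey-Markov model is typically a GM(1,1) forecast: accumulate the series, fit dx1/dt + a*x1 = b by least squares on the accumulated sequence, and extrapolate. A sketch of the standard GM(1,1) (not the paper's optimized unbiased variant, and with the Markov residual correction omitted):

```python
import math

def gm11_forecast(x0, steps=1):
    """Standard GM(1,1) grey forecast of the next `steps` values of x0.
    Fits x0(k) = -a*z(k) + b on the background values z of the accumulated
    series, then extrapolates the exponential response and differences it."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # 2-parameter least squares via normal equations
    szz = sum(v * v for v in z)
    sz = sum(z)
    sy = sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1hat(k):  # fitted accumulated series, k counted from 0
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1hat(n + s) - x1hat(n + s - 1) for s in range(steps)]
```

For smooth near-exponential data the forecast is very accurate; a Markov chain over residual states would then correct the remaining errors.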
Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model
International Nuclear Information System (INIS)
Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang
2016-01-01
A novel model, referred to as the two-dimensional continuous 3 × 3 order hidden Markov model, is put forward to avoid the disadvantages of the classical hypothesis of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of the observation relies not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are theoretically derived by exploiting the idea that the sequences of states on the rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because the structure of the proposed new model carries more statistical characteristics, it can describe some practical problems more accurately than the two-dimensional continuous hidden Markov model.
Statistical Inference for Partially Observed Diffusion Processes
DEFF Research Database (Denmark)
Jensen, Anders Christian
This thesis is concerned with parameter estimation for multivariate diffusion models. It gives a short introduction to diffusion models and related mathematical concepts. We then introduce the method of prediction-based estimating functions and describe in detail the application for a two…-Uhlenbeck process, while chapter eight describes the details of an R package that was developed in relation to the application of the estimation procedure of chapters five and six.
Logic for specifying partially observable stochastic domains
CSIR Research Space (South Africa)
Rens, G
2011-07-01
Full Text Available to place it back on the floor. In situations where the oil-can is full, the robot gets 5 units of reward for grabbing the can, and it gets 10 units for a drink action. Otherwise, the robot gets no rewards. Rewards motivate an agent to behave as desired... with notions of probability. It will be shown how stochastic domains can be specified, including new kinds of axioms dealing with perception and a frame solution for the proposed logic. 1 Introduction and Motivation In the physical real world...
On mixing of Markov measures associated with b−bistochastic QSOs
Energy Technology Data Exchange (ETDEWEB)
Mukhamedov, Farrukh; Embong, Ahmad Fadillah, E-mail: ahmadfadillah.90@gmail.com [Department of Computational Theoretical Sciences Faculty of Science, International Islamic University Malaysia P.O. Box, 141, 25200, Kuantan, Pahang (Malaysia)
2016-06-02
The new majorization has an advantage over the classical one, since it can be defined as a partial order on sequences; we call it the b-order. The defined order is then used to establish bistochasticity of nonlinear operators, restricted in this study to the simplest case of nonlinear operators, i.e. quadratic operators. The discussion in this paper is based on the bistochasticity of Quadratic Stochastic Operators (QSOs) with respect to the b-order; in short, such operators are called b-bistochastic QSOs. The main objectives of this paper are to show the construction of non-homogeneous Markov measures associated with QSOs and to show that the measures associated with the classes of b-bistochastic QSOs satisfy the mixing property.
Application of Hidden Markov Models in Biomolecular Simulations.
Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar
2017-01-01
Hidden Markov models (HMMs) provide a framework to analyze large trajectories of biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert among each other with certain rates. HMMs simplify long-timescale trajectories for human comprehension, and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
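A first step in working with any HMM is evaluating the likelihood of an observation sequence, which the forward algorithm computes by summing over all hidden paths. A minimal discrete sketch with an invented two-state model standing in for slowly interconverting conformations:

```python
from itertools import product

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(observations) under a discrete HMM with initial
    distribution pi, transition matrix A and emission matrix B."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)

# Invented two-state model: states interconvert slowly, each preferring
# one of two observable symbols.
pi = [0.6, 0.4]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.7, 0.3], [0.1, 0.9]]
lik = forward_likelihood(pi, A, B, [0, 0, 1, 1])

# Sanity check: likelihoods over all 2^4 length-4 sequences sum to one.
total = sum(forward_likelihood(pi, A, B, list(o))
            for o in product([0, 1], repeat=4))
```

Real biomolecular HMMs use many states and continuous emissions, but the recursion is the same.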
Hidden Markov processes theory and applications to biology
Vidyasagar, M
2014-01-01
This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. The book starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are t
Transportation and concentration inequalities for bifurcating Markov chains
DEFF Research Database (Denmark)
Penda, S. Valère Bitseki; Escobar-Bach, Mikael; Guillin, Arnaud
2017-01-01
We investigate transportation inequalities for bifurcating Markov chains, a class of processes indexed by a regular binary tree. Fitting well models like cell growth, where each individual gives birth to exactly two offspring, we use transportation inequalities to provide useful concentration inequalities. We also study deviation inequalities for the empirical means under relaxed assumptions on the Wasserstein contraction of the Markov kernels. Applications to bifurcating nonlinear autoregressive processes are considered for point-wise estimates of the non-linear autoregressive...
Martingales and Markov chains solved exercises and elements of theory
Baldi, Paolo; Priouret, Pierre
2002-01-01
CONDITIONAL EXPECTATIONS: Introduction; Definition and First Properties; Conditional Expectations and Conditional Laws; Exercises; Solutions. STOCHASTIC PROCESSES: General Facts; Stopping Times; Exercises; Solutions. MARTINGALES: First Definitions; First Properties; The Stopping Theorem; Maximal Inequalities; Square Integral Martingales; Convergence Theorems; Regular Martingales; Exercises; Problems; Solutions. MARKOV CHAINS: Transition Matrices, Markov Chains; Construction and Existence; Computations on the Canonical Chain; Potential Operators; Passage Problems; Recurrence, Transience; Recurrent Irreducible Chains; Periodicity; Exercises; Problems; Solutions.
Markov chain analysis of single spin flip Ising simulations
International Nuclear Information System (INIS)
Hennecke, M.
1997-01-01
The Markov processes defined by random and loop-based schemes for single spin flip attempts in Monte Carlo simulations of the 2D Ising model are investigated, by explicitly constructing their transition matrices. Their analysis reveals that loops over all lattice sites using a Metropolis-type single spin flip probability often do not define ergodic Markov chains, and have distorted dynamical properties even if they are ergodic. The transition matrices also enable a comparison of the dynamics of random versus loop spin selection and Glauber versus Metropolis probabilities
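Constructing the transition matrix explicitly, as the paper does, is feasible for tiny lattices. The sketch below builds the random-site Metropolis matrix for a 3-spin Ising ring (8 states, parameters illustrative) and exposes the Boltzmann distribution, whose stationarity under the matrix can then be verified:

```python
import math
from itertools import product

def metropolis_matrix(n=3, beta=0.5):
    """Explicit transition matrix for random-site Metropolis single-spin-flip
    dynamics on an n-spin Ising ring (J = 1); parameters are illustrative."""
    states = list(product([-1, 1], repeat=n))
    idx = {s: k for k, s in enumerate(states)}

    def energy(s):
        return -sum(s[i] * s[(i + 1) % n] for i in range(n))

    T = [[0.0] * len(states) for _ in states]
    for s in states:
        for i in range(n):
            t = list(s)
            t[i] = -t[i]                     # flip spin i
            t = tuple(t)
            # site chosen uniformly, then Metropolis acceptance
            acc = min(1.0, math.exp(-beta * (energy(t) - energy(s))))
            T[idx[s]][idx[t]] += acc / n
        T[idx[s]][idx[s]] = 1.0 - sum(T[idx[s]])   # rejected moves stay put
    return states, T

states, T = metropolis_matrix()
# Boltzmann weights at the same beta; detailed balance makes them stationary.
w = [math.exp(0.5 * sum(s[i] * s[(i + 1) % 3] for i in range(3))) for s in states]
boltz = [v / sum(w) for v in w]
```

With random site selection every configuration communicates with every other, so this small chain is ergodic; the loop-based schemes discussed in the paper would modify how the flipped site is chosen.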
Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation
Czech Academy of Sciences Publication Activity Database
Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.
2009-01-01
Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf
Quantum Markov processes and applications in many-body systems
International Nuclear Information System (INIS)
Temme, P. K.
2010-01-01
This thesis is concerned with the investigation of quantum as well as classical Markov processes and their application in the field of strongly correlated many-body systems. A Markov process is a special kind of stochastic process, which is determined by an evolution that is independent of its history and only depends on the current state of the system. The application of Markov processes has a long history in the field of statistical mechanics and classical many-body theory. Not only are Markov processes used to describe the dynamics of stochastic systems, but they predominantly also serve as a practical method that allows for the computation of fundamental properties of complex many-body systems by means of probabilistic algorithms. The aim of this thesis is to investigate the properties of quantum Markov processes, i.e. Markov processes taking place in a quantum mechanical state space, and to gain a better insight into complex many-body systems by means thereof. Moreover, we formulate a novel quantum algorithm which allows for the computation of the thermal and ground states of quantum many-body systems. After a brief introduction to quantum Markov processes we turn to an investigation of their convergence properties. We find bounds on the convergence rate of the quantum process by generalizing geometric bounds found for classical processes. We generalize a distance measure that serves as the basis for our investigations, the chi-square divergence, to non-commuting probability spaces. This divergence allows for a convenient generalization of the detailed balance condition to quantum processes. We then devise the quantum algorithm that can be seen as the natural generalization of the ubiquitous Metropolis algorithm to simulate quantum many-body Hamiltonians. By this we intend to provide further evidence, that a quantum computer can serve as a fully-fledged quantum simulator, which is not only capable of describing the dynamical evolution of quantum systems, but
Bisimulation on Markov Processes over Arbitrary Measurable Spaces
DEFF Research Database (Denmark)
Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand
2014-01-01
We introduce a notion of bisimulation on labelled Markov processes over generic measurable spaces in terms of arbitrary binary relations. Our notion of bisimulation is proven to coincide with the coalgebraic definition of Aczel and Mendler in terms of the Giry functor, which associates with a mea...
Detecting Faults By Use Of Hidden Markov Models
Smyth, Padhraic J.
1995-01-01
Frequency of false alarms reduced. Faults in complicated dynamic system (e.g., antenna-aiming system, telecommunication network, or human heart) detected automatically by method of automated, continuous monitoring. Obtains time-series data by sampling multiple sensor outputs at discrete intervals of time and processes data via algorithm determining whether system in normal or faulty state. Algorithm implements, among other things, hidden first-order temporal Markov model of states of system. Mathematical model of dynamics of system not needed. Present method is "prior" method mentioned in "Improved Hidden-Markov-Model Method of Detecting Faults" (NPO-18982).
On a Markov chain roulette-type game
International Nuclear Information System (INIS)
El-Shehawey, M A; El-Shreef, Gh A
2009-01-01
A Markov chain on the non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p_{01} = ρ, p_{Nj} = δ_{Nj}, p_{i,i+W} = q, p_{i,i-1} = p = 1 - q, for 1 ≤ W < N, 0 ≤ ρ ≤ 1, N - W < j ≤ N and i = 1, 2, ..., N - W. Using formulae for the determinant of a partitioned matrix, a closed form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models, from tumor growth and war with bargaining.
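For concreteness, a chain of this type can be set up and solved numerically. The sketch below uses our own illustrative parameters and takes W = 1 (so every interior state is covered by the stated rule), and it computes expected absorption times via the fundamental matrix rather than the paper's closed-form determinant expression; the sticky behaviour at state 0 is our reading of p_{01} = ρ:

```python
import numpy as np

# Illustrative instance of the roulette-type chain (our own parameters):
# states 0..N; p_{01} = rho; interior states move up by W w.p. q and down
# by 1 w.p. p = 1 - q; state N is absorbing.  We take W = 1 so that every
# interior state is covered by the rule i = 1, ..., N - W.
N, W, rho, q = 10, 1, 0.8, 0.45
P = np.zeros((N + 1, N + 1))
P[0, 1] = rho
P[0, 0] = 1 - rho          # our reading: with prob. 1 - rho the game stays at 0
for i in range(1, N - W + 1):
    P[i, i + W] = q
    P[i, i - 1] = 1 - q
P[N, N] = 1.0              # delta_{Nj}: absorbing barrier at N

assert np.allclose(P.sum(axis=1), 1.0)

# Expected number of steps until absorption in N, from each transient state,
# via the fundamental matrix (I - Q)^{-1} of the absorbing chain.
Q = P[:N, :N]
t = np.linalg.solve(np.eye(N) - Q, np.ones(N))
print("expected steps to reach N from 0:", t[0])
```

The fundamental-matrix route is the standard numerical alternative to the paper's partitioned-determinant formula.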
Adaptive partial volume classification of MRI data
International Nuclear Information System (INIS)
Chiverton, John P; Wells, Kevin
2008-01-01
Tomographic biomedical images are commonly affected by an imaging artefact known as the partial volume (PV) effect. The PV effect produces voxels composed of a mixture of tissues in anatomical magnetic resonance imaging (MRI) data resulting in a continuity of these tissue classes. Anatomical MRI data typically consist of a number of contiguous regions of tissues or even contiguous regions of PV voxels. Furthermore discontinuities exist between the boundaries of these contiguous image regions. The work presented here probabilistically models the PV effect using spatial regularization in the form of continuous Markov random fields (MRFs) to classify anatomical MRI brain data, simulated and real. A unique approach is used to adaptively control the amount of spatial regularization imposed by the MRF. Spatially derived image gradient magnitude is used to identify the discontinuities between image regions of contiguous tissue voxels and PV voxels, imposing variable amounts of regularization determined by simulation. Markov chain Monte Carlo (MCMC) is used to simulate the posterior distribution of the probabilistic image model. Promising quantitative results are presented for PV classification of simulated and real MRI data of the human brain.
Directory of Open Access Journals (Sweden)
Pramita Suwal
2017-03-01
Full Text Available Aim: To compare the effects of cast partial dentures with conventional all-acrylic dentures with respect to retention, stability, masticatory efficiency, comfort and periodontal health of abutments. Methods: 50 adult partially edentulous patients seeking replacement of missing teeth, having Kennedy class I and II arches with or without modification areas, were selected for the study. Group A was treated with cast partial dentures and Group B with acrylic partial dentures. Data were collected during follow-up visits at 3 months, 6 months, and 1 year by evaluating retention, stability, masticatory efficiency, comfort, and periodontal health of abutments. Results: The chi-square test was applied to find differences between the groups at the 95% confidence level, where p = 0.05. The one-year comparison shows that cast partial dentures maintained retention and stability better than acrylic partial dentures (p < 0.05). Masticatory efficiency was significantly compromised from the 3rd month to 1 year in the all-acrylic partial denture group (p < 0.05). The comfort of patients with cast partial dentures was maintained better during the observation period (p < 0.05). Periodontal health of abutments gradually deteriorated in the all-acrylic denture group (p
International Nuclear Information System (INIS)
Lee, Youn Myoung
1995-02-01
As a new modeling approach, a stochastic model using a continuous time Markov process for nuclide decay chain transport of arbitrary length in a fractured porous rock medium has been proposed, by which the need for solving a set of partial differential equations corresponding to various sets of side conditions can be avoided. Once the single planar fracture in the rock matrix is represented by a series of a finite number of compartments having region-wise constant parameter values, the medium is continuous in view of the various processes associated with nuclide transport but discrete in space, and such a geologic system is assumed to have the Markov property. Since a Markov process requires only the present value of the time dependent random variable to determine its future value, nuclide transport in the medium can then be modeled as a continuous time Markov process. The processes involved in nuclide transport are advective transport due to groundwater flow, diffusion into the rock matrix, adsorption onto the wall of the fracture and within the pores of the rock matrix, and radioactive decay chains. The transition probabilities for the nuclide are obtained from the transition intensities between and out of the compartments using the Chapman-Kolmogorov equation, through which the expectation and the variance of the nuclide distribution for each compartment, or for the fractured rock medium, can be obtained. Comparisons between the Markov process model developed in this work and available analytical solutions for a one-dimensional layered porous medium, a fractured medium with rock matrix diffusion, and a porous medium with a three-member nuclide decay chain without rock matrix diffusion have been made, showing comparatively good agreement in all cases. To verify the model developed in this work, another comparative study was also made by fitting the experimental data obtained with NaLS and uranine running in the artificial fractured
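The compartment idea can be illustrated with a toy continuous-time Markov chain. The sketch below is not the thesis model: it invents a 3-compartment chain with advection and decay (our own rates and states) and computes the transient state distribution by uniformization:

```python
import numpy as np

# Toy compartment CTMC (illustrative, not the model from the record): a
# nuclide moves through 3 fracture compartments by advection (rate v) and
# can decay (rate lam) in any compartment; "out" collects the outflow and
# "decayed" the decayed mass.  Both end states are absorbing.
v, lam = 1.0, 0.1
states = ["c1", "c2", "c3", "out", "decayed"]
Q = np.zeros((5, 5))
for i in range(3):
    Q[i, i + 1] += v       # advective transfer to next compartment / outlet
    Q[i, 4] += lam         # radioactive decay
    Q[i, i] = -(v + lam)

def transient(p0, Q, t, n_terms=200):
    """p(t) = p0 e^{Qt} via uniformization: e^{Qt} = sum_k e^{-ut}(ut)^k/k! P^k."""
    u = max(-Q.diagonal()) * 1.05
    P = np.eye(len(Q)) + Q / u           # uniformized DTMC
    term, out = p0.copy(), np.zeros_like(p0)
    w = np.exp(-u * t)                   # Poisson weight for k = 0
    for k in range(n_terms):
        out += w * term
        term = term @ P
        w *= u * t / (k + 1)
    return out

p0 = np.array([1.0, 0, 0, 0, 0])         # all mass starts in compartment 1
pt = transient(p0, Q, t=5.0)
print(dict(zip(states, np.round(pt, 4))))
```

Uniformization avoids a general matrix exponential routine and keeps every intermediate quantity a probability.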
Study on the Evolution of Weights on the Market of Competitive Products using Markov Chains
Directory of Open Access Journals (Sweden)
Daniel Mihai Amariei
2016-10-01
Full Text Available This paper aims to apply a Markov chain, through the Markov Process module of the WinQSB software package, to model the evolution of the market shares of five brands of athletic shoes.
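The kind of brand-share computation WinQSB performs can be reproduced in a few lines. The transition matrix and initial shares below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical brand-switching matrix for five brands (illustrative numbers):
# row i gives where brand i's customers go in the next period.
P = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.15, 0.60, 0.10, 0.10, 0.05],
    [0.05, 0.10, 0.75, 0.05, 0.05],
    [0.10, 0.10, 0.10, 0.60, 0.10],
    [0.05, 0.05, 0.10, 0.10, 0.70],
])
shares = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # current market shares

for period in range(50):          # evolve shares period by period
    shares = shares @ P

print(np.round(shares, 4))        # close to the long-run (limiting) shares
```

After enough periods the share vector approaches the chain's stationary distribution, which is the long-run market structure such a study reports.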
Hidden Markov Model of atomic quantum jump dynamics in an optically probed cavity
DEFF Research Database (Denmark)
Gammelmark, S.; Molmer, K.; Alt, W.
2014-01-01
We analyze the quantum jumps of an atom interacting with a cavity field. The strong atom-field interaction makes the cavity transmission depend on the time dependent atomic state, and we present a Hidden Markov Model description of the atomic state dynamics which is conditioned in a Bayesian manner on the detected signal. We suggest that small variations in the observed signal may be due to spatial motion of the atom within the cavity, and we represent the atomic system by a number of hidden states to account for both the small variations and the internal state jump dynamics. In our theory...
A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations
Berg, Bernd A.
2017-03-01
The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, often lead to much faster convergence. In particular, they have been used for simulations of first order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
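The review's starting point, the canonical-ensemble Metropolis algorithm, can be sketched for a 1D Ising chain. Lattice size, temperature and sweep counts below are our own illustrative choices; a generalized ensemble would replace the Boltzmann factor exp(-beta*dE) in the acceptance step with non-canonical weights:

```python
import math, random

# Textbook Metropolis sketch for a 1D Ising chain with periodic boundary
# conditions (illustrative parameters; this is the canonical-ensemble
# algorithm, not Berg's generalized-ensemble weights).
random.seed(1)
L, beta, J = 32, 0.5, 1.0
spins = [random.choice((-1, 1)) for _ in range(L)]

def energy(s):
    return -J * sum(s[i] * s[(i + 1) % L] for i in range(L))

def sweep(s):
    for _ in range(L):
        i = random.randrange(L)
        # Energy change of flipping spin i (only its two bonds contribute).
        dE = 2 * J * s[i] * (s[i - 1] + s[(i + 1) % L])
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[i] = -s[i]          # accept the single-spin flip

for _ in range(200):               # equilibration sweeps
    sweep(spins)
samples = []
for _ in range(2000):              # measurement sweeps
    sweep(spins)
    samples.append(energy(spins) / L)
e_mean = sum(samples) / len(samples)

# Exact 1D Ising energy per spin (thermodynamic limit): -J tanh(beta J).
print(e_mean, -J * math.tanh(beta * J))
```

The sampled energy per spin should agree with the exact 1D result up to finite-size and statistical error, a standard sanity check for any Markov chain Monte Carlo code.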
Beyond Markov: Accounting for independence violations in causal reasoning.
Rehder, Bob
2018-06-01
Although many theories of causal cognition are based on causal graphical models, a key property of such models, the independence relations stipulated by the Markov condition, is routinely violated by human reasoners. This article presents three new accounts of those independence violations, accounts that share the assumption that people's understanding of the correlational structure of data generated from a causal graph differs from that stipulated by the causal graphical model framework. To distinguish these models, experiments assessed how people reason with causal graphs that are larger than those tested in previous studies. A traditional common cause network (Y1 ← X → Y2) was extended so that the effects themselves had effects (Z1 ← Y1 ← X → Y2 → Z2). A traditional common effect network (Y1 → X ← Y2) was extended so that the causes themselves had causes (Z1 → Y1 → X ← Y2 ← Z2). Subjects' inferences were most consistent with the beta-Q model, in which consistent states of the world (those in which variables are either mostly all present or mostly all absent) are viewed as more probable than stipulated by the causal graphical model framework. Substantial variability in subjects' inferences was also observed, with the result that substantial minorities of subjects were best fit by one of the other models (the dual-prototype or leaky-gate models). The discrepancy between normative and human causal cognition stipulated by these models is foundational in the sense that they locate the error not in people's causal reasoning but rather in their causal representations. As a result, they are applicable to any cognitive theory grounded in causal graphical models, including theories of analogy, learning, explanation, categorization, decision-making, and counterfactual reasoning. Preliminary evidence that independence violations indeed generalize to other judgment types is presented. Copyright © 2018 Elsevier Inc. All rights reserved.
The explicit form of the rate function for semi-Markov processes and its contractions
Sughiyama, Yuki; Kobayashi, Tetsuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
Ensemble Learning Method for Hidden Markov Models
2014-12-01
$\frac{\partial L(\Lambda)}{\partial \tilde{a}^{(c)}_{ij}} = \sum_{r=1}^{R}\sum_{m=1}^{C} \frac{\partial l_m(O_r)}{\partial d_m(O_r)}\frac{\partial d_m(O_r)}{\partial g_{O_r c}}\frac{\partial g_{O_r c}}{\partial a^{(c)}_{ij}}\frac{\partial a^{(c)}_{ij}}{\partial \tilde{a}^{(c)}_{ij}}$, and $\frac{\partial L(\Lambda)}{\partial \tilde{b}^{(c)}_{ij}} = \sum_{r=1}^{R}\sum_{m=1}^{C} \frac{\partial l_m(O_r)}{\partial d_m(O_r)}\frac{\partial d_m(O_r)}{\partial g_{O_r c}}\frac{\partial g_{O_r c}}{\partial b^{(c)}_{ij}}\frac{\partial b^{(c)}_{ij}}{\partial \tilde{b}^{(c)}_{ij}}$. Substituting the partial derivatives, we get the gradient direction of...
Berman-Konsowa principle for reversible Markov jump processes
Hollander, den W.Th.F.; Jansen, S.
2013-01-01
In this paper we prove a version of the Berman-Konsowa principle for reversible Markov jump processes on Polish spaces. The Berman-Konsowa principle provides a variational formula for the capacity of a pair of disjoint measurable sets. There are two versions, one involving a class of probability
Evaluating The Markov Assumption For Web Usage Mining
DEFF Research Database (Denmark)
Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.
2003-01-01
) model~\\cite{borges99data}. These techniques typically rely on the \\textit{Markov assumption with history depth} $n$, i.e., it is assumed that the next requested page is only dependent on the last $n$ pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...
Testing the Adequacy of a Semi-Markov Process
2015-09-17
classical Brownian motion are two common examples of martingales. Submartingales and supermartingales are two extended classes of martingales. They...
Portfolio Optimization in a Semi-Markov Modulated Market
International Nuclear Information System (INIS)
Ghosh, Mrinal K.; Goswami, Anindya; Kumar, Suresh K.
2009-01-01
We address a portfolio optimization problem in a semi-Markov modulated market. We study both the terminal expected utility optimization on a finite time horizon and the risk-sensitive portfolio optimization on finite and infinite time horizons. We obtain optimal portfolios in the relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for the finite horizon problem.
Harmonic spectral components in time sequences of Markov correlated events
Mazzetti, Piero; Carbone, Anna
2017-07-01
The paper concerns the analysis of the conditions under which time sequences of Markov correlated events give rise to a line power spectrum of relevant physical interest. It is found that by specializing the Markov matrix to represent closed loop sequences of events with arbitrary distribution, generated in a steady physical condition, a large set of line spectra, covering all possible frequency values, is obtained. The amplitude of the spectral lines is given by a matrix equation based on a generalized Markov matrix involving the Fourier transform of the distribution functions representing the time intervals between successive events of the sequence. The paper complements a previous work where a general expression for the continuous power spectrum was given. In that case the Markov matrix was left in a more general form, thus preventing the possibility of finding line spectra of physical interest. The present extension is also suggested by the interest of explaining the emergence of a broad set of waves found in the electro- and magneto-encephalograms, whose frequencies range from 0.5 to about 40 Hz, in terms of the effects produced by chains of firing neurons within the complex neural network of the brain. An original model based on synchronized closed loop sequences of firing neurons is proposed, and a few numerical simulations are reported as an application of the above cited equation.
Elements of the theory of Markov processes and their applications
Bharucha-Reid, A T
2010-01-01
This graduate-level text and reference in probability, with numerous applications to several fields of science, presents a nonmeasure-theoretic introduction to the theory of Markov processes. The work also covers mathematical models based on the theory, employed in various applied fields. Prerequisites are a knowledge of elementary probability theory, mathematical statistics, and analysis. Appendixes. Bibliographies. 1960 edition.
A Parallel Solver for Large-Scale Markov Chains
Czech Academy of Sciences Publication Activity Database
Benzi, M.; Tůma, Miroslav
2002-01-01
Roč. 41, - (2002), s. 135-153 ISSN 0168-9274 R&D Projects: GA AV ČR IAA2030801; GA ČR GA101/00/1035 Keywords : parallel preconditioning * iterative methods * discrete Markov chains * generalized inverses * singular matrices * graph partitioning * AINV * Bi-CGSTAB Subject RIV: BA - General Mathematics Impact factor: 0.504, year: 2002
Markov chain model for demersal fish catch analysis in Indonesia
Firdaniza; Gusriani, N.
2018-03-01
As an archipelagic country, Indonesia has considerable potential fishery resources. One fish resource with high economic value is demersal fish, which live in habitats on the muddy seabed and are scattered throughout the Indonesian seas. Demersal fish production in each of Indonesia's Fisheries Management Areas (FMA) varies each year. In this paper we discuss a Markov chain model for demersal fish yield analysis across all of Indonesia's Fisheries Management Areas. Data on the demersal fish catch in every FMA in 2005-2014 were obtained from the Directorate of Capture Fisheries. From these data a transition probability matrix is determined by counting the transitions between catches lying below and above the median. The Markov chain model of the demersal fish catch data is an ergodic Markov chain, so the limiting probabilities of the model can be determined. The predicted value of the demersal fishing yield is obtained by combining the limiting probabilities with the average catches below and above the median. The results show that for 2018, and in the long term, demersal fishing yields in most FMAs are below the median value.
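The paper's recipe (a median-based two-state chain, its limiting probabilities, and a limiting-probability-weighted prediction) can be sketched with made-up numbers; none of the values below come from the Indonesian data:

```python
# Minimal sketch of the two-state prediction recipe with invented numbers:
# each year's catch is classified Below (B) or Above (A) the median, a
# 2-state transition matrix is estimated, and the prediction is the
# limiting-probability-weighted mix of the two state averages.
p_bb, p_ba = 0.6, 0.4     # P(B->B), P(B->A)  (illustrative estimates)
p_ab, p_aa = 0.3, 0.7     # P(A->B), P(A->A)

# Limiting (stationary) probabilities of an ergodic 2-state chain, solved
# in closed form from pi = pi P:  pi_B = p_ab / (p_ba + p_ab).
pi_b = p_ab / (p_ba + p_ab)
pi_a = 1 - pi_b

mean_below, mean_above = 800.0, 1400.0   # invented average catches (tonnes)
prediction = pi_b * mean_below + pi_a * mean_above
print(round(pi_b, 4), round(pi_a, 4), round(prediction, 2))
```

The closed form follows directly from solving pi = pi P with pi_B + pi_A = 1 for two states; for more states one would solve the linear system or take a left eigenvector.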
Shape Modelling Using Markov Random Field Restoration of Point Correspondences
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen
2003-01-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaption of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...
On a saddlepoint approximation to the Markov binomial distribution
DEFF Research Database (Denmark)
Jensen, Jens Ledet
A nonstandard saddlepoint approximation to the distribution of a sum of Markov dependent trials is introduced. The relative error of the approximation is studied, not only for the number of summands tending to infinity, but also for the parameter approaching the boundary of its definition range...
Envelopes of Sets of Measures, Tightness, and Markov Control Processes
International Nuclear Information System (INIS)
Gonzalez-Hernandez, J.; Hernandez-Lerma, O.
1999-01-01
We introduce upper and lower envelopes for sets of measures on an arbitrary topological space, which are then used to give a tightness criterion. These concepts are applied to show the existence of optimal policies for a class of Markov control processes
A Constraint Model for Constrained Hidden Markov Models
DEFF Research Database (Denmark)
Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp
2009-01-01
A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
The How and Why of Interactive Markov Chains
Hermanns, H.; Katoen, Joost P.; de Boer, F.S; Bonsangue, S.H.; Leuschel, M
2010-01-01
This paper reviews the model of interactive Markov chains (IMCs, for short), an extension of labelled transition systems with exponentially delayed transitions. We show that IMCs are closed under parallel composition and hiding, and show how IMCs can be compositionally aggregated prior to analysis
Performance criteria for graph clustering and Markov cluster experiments
S. van Dongen
2000-01-01
In [1] a cluster algorithm for graphs was introduced, called the Markov cluster algorithm or MCL algorithm. The algorithm is based on simulation of (stochastic) flow in graphs by means of alternation of two operators, expansion and inflation. The results in [2] establish an intrinsic
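The expansion/inflation alternation is easy to sketch. Below is a bare-bones MCL iteration on an invented 6-node graph with two obvious cliques joined by a single bridge (inflation exponent r = 2; all choices, including the iteration count, are illustrative):

```python
import numpy as np

# Bare-bones MCL sketch on an invented 6-node graph: two triangles joined
# by the single edge 2-3; column-stochastic convention, self-loops included.
A = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 1]], dtype=float)
M = A / A.sum(axis=0)                 # normalize columns: initial flow matrix

for _ in range(20):
    M = np.linalg.matrix_power(M, 2)  # expansion: flow along longer paths
    M = M ** 2                        # inflation (r = 2): boost strong flows
    M /= M.sum(axis=0)                # re-normalize columns

# In the limit each column concentrates on its cluster's attractor(s);
# reading off the argmax per column recovers the clustering.
clusters = M.argmax(axis=0)
print(clusters)
```

Expansion spreads flow through the graph while inflation sharpens it, and their alternation lets flow evaporate across sparse boundaries, which is exactly the intuition behind MCL.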
Estimation of the workload correlation in a Markov fluid queue
Kaynar, B.; Mandjes, M.R.H.
2013-01-01
This paper considers a Markov fluid queue, focusing on the correlation function of the stationary workload process. A simulation-based computation technique is proposed, which relies on a coupling idea. Then an upper bound on the variance of the resulting estimator is given, which reveals how the
Cascade probabilistic function and the Markov's processes. Chapter 1
International Nuclear Information System (INIS)
2002-01-01
In Chapter 1 the physical and mathematical descriptions of radiation processes are carried out. The relation of the cascade probabilistic functions (CPF) for electrons, protons, alpha-particles and ions to Markov chains is shown. The algorithms for CPF calculation accounting for energy losses are given.
Indefinite metric, quantum axiomatics, and the Markov property
International Nuclear Information System (INIS)
Brownell, F.H.
1978-01-01
In answer to a remark of Jauch, a set of axioms for an 'indefinite metric' formulation of quantum electro-dynamics is presented, and the connection with orthocomplementation noted. Here a strict version of the Markov property apparently fails, leading to a novel interpretation. (Auth.)
Optimisation of Hidden Markov Model using Baum–Welch algorithm
Indian Academy of Sciences (India)
Journal of Earth System Science, Volume 126, Issue 1, February 2017. Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. ...
Hidden Markov Model for quantitative prediction of snowfall
Indian Academy of Sciences (India)
A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...
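A minimal forward-algorithm sketch in the spirit of such an HMM predictor, with a two-state weather chain and a single discretized observation; every state, symbol and probability below is invented for illustration, not taken from the paper:

```python
# Toy 2-state weather HMM ("snow day" / "dry day") scored with the forward
# algorithm; all probabilities are invented for illustration.
states = ("snow", "dry")
start = {"snow": 0.3, "dry": 0.7}
trans = {"snow": {"snow": 0.6, "dry": 0.4},
         "dry":  {"snow": 0.2, "dry": 0.8}}
# One discretized meteorological observation: cloud cover (low / high).
emit = {"snow": {"low": 0.2, "high": 0.8},
        "dry":  {"low": 0.7, "high": 0.3}}

def forward(obs):
    """Likelihood of the observation sequence under the HMM."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["high", "high", "low"]))
```

A real predictor like the paper's would use many observation variables and fit the probabilities with Baum–Welch; the forward recursion itself is unchanged.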
Social security as Markov equilibrium in OLG models: A note
DEFF Research Database (Denmark)
Gonzalez Eiras, Martin
2011-01-01
I refine and extend the Markov perfect equilibrium of the social security policy game in Forni (2005) for the special case of logarithmic utility. Under the restriction that the policy function be continuous, instead of differentiable, the equilibrium is globally well defined and its dynamics...
Efficient Approximation of Optimal Control for Markov Games
DEFF Research Database (Denmark)
Fearnley, John; Rabe, Markus; Schewe, Sven
2011-01-01
We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately...
The deviation matrix of a continuous-time Markov chain
Coolen-Schrijner, P.; van Doorn, E.A.
2001-01-01
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
The deviation matrix of a continuous-time Markov chain
Coolen-Schrijner, Pauline; van Doorn, Erik A.
2002-01-01
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\Pi$ is the matrix $D \equiv \int_0^{\infty} (P(t)-\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
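The integral defining the deviation matrix in the two records above can be checked numerically for a two-state chain, for which both $P(t)$ and $D$ have closed forms; the identity $D = (\Pi - Q)^{-1} - \Pi$ used below is a standard consequence of the definition, and the rates are our own illustrative choices:

```python
import numpy as np

# Two-state ergodic CTMC with generator Q and ergodic matrix Pi.
a, b = 2.0, 3.0
Q = np.array([[-a, a], [b, -b]])
pi = np.array([b, a]) / (a + b)          # stationary distribution
Pi = np.vstack([pi, pi])                 # ergodic matrix: both rows equal pi

# Standard closed form for the deviation matrix of an ergodic chain.
D = np.linalg.inv(Pi - Q) - Pi

# For 2 states, P(t) = Pi + exp(-(a+b)t) (I - Pi), so the defining integral
# D = int_0^inf (P(t) - Pi) dt = (I - Pi) / (a + b) can be done by hand.
D_exact = (np.eye(2) - Pi) / (a + b)
print(np.allclose(D, D_exact))   # True
```

The agreement confirms the closed form; note also that the rows of $D$ sum to zero, since $P(t)$ and $\Pi$ are both stochastic.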
Performability assessment by model checking of Markov reward models
Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.
2010-01-01
This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility
A Markov Model for Common-Cause Failures
DEFF Research Database (Denmark)
Platz, Ole
1984-01-01
A continuous time four-state Markov chain is shown to cover several of the models that have been used for describing dependencies between failures of components in redundant systems. Among these are the models derived by Marshall and Olkin and by Freund and models for one-out-of-three and two...
Exact goodness-of-fit tests for Markov chains.
Besag, J; Mondal, D
2013-06-01
Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
Some remarks about the thermodynamics of discrete finite Markov chains
Energy Technology Data Exchange (ETDEWEB)
Siboni, S. [Trento Univ. (Italy). Facoltà di Ingegneria, Dip. di Ingegneria dei Materiali]
1998-08-01
The author proposes a simple way to define a Hamiltonian for aperiodic Markov chains and to apply these chains in a thermodynamical context. The basic thermodynamic functions are calculated accordingly. A quite intriguing and nontrivial application to stochastic automata is also pointed out.