Abdulla, Parosh Aziz; Henda, Noomene Ben; Mayr, Richard
2007-01-01
We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In part...
Markov decision processes in artificial intelligence
Sigaud, Olivier
2013-01-01
Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustr
Markov Decision Processes in Practice
Boucherie, Richardus J.; van Dijk, N.M.
2017-01-01
It is over 30 years since D.J. White started his series of surveys on practical applications of Markov decision processes (MDP), over 20 years since the phenomenal book by Martin Puterman on the theory of MDP, and over 10 years since Eugene A. Feinberg and Adam Shwartz published their Handbook
Markov Decision Process Measurement Model.
LaMar, Michelle M
2018-03-01
Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
The Markov moment problem and extremal problems
Kreĭn, M G; Louvish, D
1977-01-01
In this book, an extensive circle of questions originating in the classical work of P. L. Chebyshev and A. A. Markov is considered from the more modern point of view. It is shown how results and methods of the generalized moment problem are interlaced with various questions of the geometry of convex bodies, algebra, and function theory. From this standpoint, the structure of convex and conical hulls of curves is studied in detail and isoperimetric inequalities for convex hulls are established; a theory of orthogonal and quasiorthogonal polynomials is constructed; problems on limiting values of integrals and on least deviating functions (in various metrics) are generalized and solved; problems in approximation theory and interpolation and extrapolation in various function classes (analytic, absolutely monotone, almost periodic, etc.) are solved, as well as certain problems in optimal control of linear objects.
Second Order Optimality in Markov Decision Chains
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2017-01-01
Roč. 53, č. 6 (2017), s. 1086-1099 ISSN 0023-5954 R&D Projects: GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : Markov decision chains * second order optimality * optimality conditions for transient, discounted and average models * policy and value iterations Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/E/sladky-0485146.pdf
Markov decision processes: a tool for sequential decision making under uncertainty.
Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S
2010-01-01
We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
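The tutorial above centers on solving an MDP by iterating Bellman backups over states and actions. As a concrete illustration, here is a minimal value-iteration sketch for a hypothetical three-state "wait vs. transplant" toy model; the states, transition probabilities, and rewards are invented for illustration and are not taken from the paper.

```python
# Minimal value-iteration sketch for a finite MDP, in the spirit of the
# tutorial above. The 3-state "wait vs. transplant" chain and all numbers
# are hypothetical illustrations, not the paper's model.

GAMMA = 0.97          # discount factor per decision epoch
STATES = ["good", "fair", "poor"]
ACTIONS = ["wait", "transplant"]

# P[a][s] -> list of (next_state, probability); R[a][s] -> immediate reward
P = {
    "wait": {
        "good": [("good", 0.8), ("fair", 0.2)],
        "fair": [("fair", 0.7), ("poor", 0.3)],
        "poor": [("poor", 1.0)],
    },
    # transplant is modelled as absorbing: the process stops afterwards
    "transplant": {s: [(s, 0.0)] for s in STATES},
}
R = {
    "wait": {"good": 1.0, "fair": 0.8, "poor": 0.3},
    "transplant": {"good": 20.0, "fair": 15.0, "poor": 8.0},
}

def value_iteration(tol=1e-8):
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {
            s: max(R[a][s] + GAMMA * sum(p * V[t] for t, p in P[a][s])
                   for a in ACTIONS)
            for s in STATES
        }
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

V = value_iteration()
# the optimal policy is the greedy action with respect to V
policy = {
    s: max(ACTIONS,
           key=lambda a: R[a][s] + GAMMA * sum(p * V[t] for t, p in P[a][s]))
    for s in STATES
}
print(policy)
```

Note how the answer is state-dependent: with these (made-up) numbers the transplant payoff in the "poor" state is too low to beat waiting, which is exactly the kind of sequential trade-off the abstract contrasts with one-shot decision analysis.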
A Partially Observed Markov Decision Process for Dynamic Pricing
Yossi Aviv; Amit Pazgal
2005-01-01
In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items during a finite sales season. The objective of the retailer is to dynamically price the product in a way that maximizes expected revenues. Our model brings together various types of uncertainties about the demand, some of which are resolvable through sales ob...
Active Learning of Markov Decision Processes for System Verification
DEFF Research Database (Denmark)
Chen, Yingke; Nielsen, Thomas Dyhre
2012-01-01
Constructing a system model manually can be a demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently we seek to minimize the amount of data required. In this paper we propose an algorithm for learning deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and it is demonstrated that the proposed active learning procedure can significantly reduce the amount of data required.
A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs
DEFF Research Database (Denmark)
Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard
2016-01-01
Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...
A Markov decision model for optimising economic production lot size ...
African Journals Online (AJOL)
Adopting such a Markov decision process approach, the states of a Markov chain represent possible states of demand. The decision of whether or not to produce additional inventory units is made using dynamic programming. This approach demonstrates the existence of an optimal state-dependent EPL size, and produces ...
Learning Markov Decision Processes for Model Checking
DEFF Research Database (Denmark)
Mao, Hua; Chen, Yingke; Jaeger, Manfred
2012-01-01
Constructing an accurate system model for formal model verification can be both resource demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend the algorithm on learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation...
A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes
Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong
2013-01-01
We present a technique for speeding up the convergence of value iteration for partially observable Markov decision processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithm. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...
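The modified-policy-iteration idea the abstract alludes to, in the fully observable case, interleaves one greedy improvement step with a small fixed number of backups under the current greedy policy. A sketch on a made-up two-state MDP (all numbers are illustrative assumptions, not from the paper):

```python
# Sketch of modified policy iteration for a fully observable MDP -- the
# fully observable analogue of the POMDP speed-up described above.
# Toy 2-state, 2-action chain; all numbers are illustrative.
import numpy as np

GAMMA = 0.9
# P[a] is the transition matrix under action a; R[a, s] the reward.
P = np.array([[[0.9, 0.1], [0.4, 0.6]],    # action 0
              [[0.2, 0.8], [0.1, 0.9]]])   # action 1
R = np.array([[1.0, 0.0],                  # action 0
              [0.0, 2.0]])                 # action 1

def modified_policy_iteration(k=10, iters=200):
    V = np.zeros(2)
    for _ in range(iters):
        # greedy improvement step
        Q = R + GAMMA * P @ V              # shape (actions, states)
        policy = Q.argmax(axis=0)
        # partial evaluation: only k backups under the fixed greedy policy
        for _ in range(k):
            V = np.array([R[policy[s], s] + GAMMA * P[policy[s], s] @ V
                          for s in range(2)])
    return policy, V

policy, V = modified_policy_iteration()
print(policy, V)
```

With `k = 0` this degenerates to plain value iteration, and with `k` large it approaches exact policy iteration; the speed-up comes from choosing a small `k` in between.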
Simulation-based algorithms for Markov decision processes
Chang, Hyeong Soo; Fu, Michael C; Marcus, Steven I
2013-01-01
Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, giving an opening to the curse of dimensionality and so making practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel ...
van Otterlo, M.
2008-01-01
Learning and reasoning in large, structured, probabilistic worlds is at the heart of artificial intelligence. Markov decision processes have become the de facto standard in modeling and solving sequential decision making problems under uncertainty. Many efficient reinforcement learning and dynamic
Continuous-time Markov decision processes theory and applications
Guo, Xianping
2009-01-01
This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.
The application of Markov decision process in restaurant delivery robot
Wang, Yong; Hu, Zhen; Wang, Ying
2017-05-01
The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going, so traditional path planning algorithms perform poorly. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, based on the traditional Markov decision process. The algorithm first uses MDR to plan a global path and then navigates along it. When the sensor detects no obstruction in the state ahead, the algorithm increases that state's immediate reward value; when the sensor detects an obstacle ahead, it plans a new obstacle-avoiding global path with the current position as the starting point and reduces that state's immediate reward value. This continues until the target is reached. After the robot has learned for a period of time, it can avoid places where obstacles are frequently present when planning a path. Simulation experiments show that the algorithm achieves good results for global path planning in a dynamic environment.
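The replan-on-obstacle loop described above can be sketched in a few lines: plan greedily on state values, and when a cell turns out to be blocked, lower its reward and replan from the current position. The grid, rewards, and update rule below are illustrative assumptions, not the paper's exact MDR algorithm.

```python
# Grid-world sketch of the replan-on-obstacle idea: value-iterate on the
# current reward map, walk greedily, and when an obstacle is detected,
# penalise that cell's reward and replan. All numbers are made up.

SIZE = 5
GOAL = (4, 4)
reward = {(x, y): -1.0 for x in range(SIZE) for y in range(SIZE)}
reward[GOAL] = 100.0

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield (nx, ny)

def plan(start, obstacles):
    """Value-iterate on the current reward map, then walk greedily."""
    V = {c: 0.0 for c in reward}
    for _ in range(100):
        for c in reward:
            if c == GOAL:
                V[c] = reward[c]
            else:
                V[c] = reward[c] + 0.9 * max(V[n] for n in neighbors(c))
    path, cell = [start], start
    while cell != GOAL and len(path) < SIZE * SIZE:
        cell = max((n for n in neighbors(cell) if n not in obstacles),
                   key=lambda n: V[n])
        path.append(cell)
    return path

# A blocked cell is detected mid-route: penalise it and replan from there.
obstacles = {(2, 2)}
first = plan((0, 0), obstacles)
reward[(2, 2)] -= 50.0            # lower the blocked state's reward
replanned = plan(first[1], obstacles)
print(replanned[-1])
```

Because the penalty persists in the reward map, repeated replans steer future paths away from cells where obstacles keep appearing, which is the "learning" effect the abstract describes.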
Embedding a State Space Model Into a Markov Decision Process
DEFF Research Database (Denmark)
Nielsen, Lars Relund; Jørgensen, Erik; Højsgaard, Søren
2011-01-01
In agriculture Markov decision processes (MDPs) with finite state and action space are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal and transition probabilities are based on biological models...
Accelerated decomposition techniques for large discounted Markov decision processes
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-12-01
Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs), which can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions of discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
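The decomposition described above can be sketched in two steps: find the SCCs of the transition graph with Tarjan's algorithm, then assign each SCC a level via longest paths in the condensation. (The paper's variant computes both in a single pass; this plain two-step version and its example graph are illustrative.)

```python
# Two-step sketch of the SCC decomposition: Tarjan's algorithm for the
# SCCs, then a level per SCC (level 0 = terminal SCCs, solved first).
# The example graph is a made-up illustration, not from the paper.
from functools import lru_cache

def tarjan_scc(graph):
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def scc_levels(graph):
    sccs = tarjan_scc(graph)
    comp_of = {v: i for i, comp in enumerate(sccs) for v in comp}

    @lru_cache(None)
    def level(i):                       # longest path to a terminal SCC
        succ = {comp_of[w] for v in sccs[i] for w in graph.get(v, [])} - {i}
        return 0 if not succ else 1 + max(level(j) for j in succ)

    return sccs, [level(i) for i in range(len(sccs))]

graph = {1: [2], 2: [1, 3], 3: [4], 4: [3], 5: [1, 4]}
sccs, levels = scc_levels(graph)
print(sccs, levels)
```

Restricted MDPs at level 0 can then be solved first, and each higher level solved against the already-computed values of the levels below it.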
The application of Markov decision process with penalty function in restaurant delivery robot
Wang, Yong; Hu, Zhen; Wang, Ying
2017-05-01
The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going. The traditional Markov decision process path planning algorithm is not safe: the robot passes very close to tables and chairs. To solve this problem, this paper proposes the Markov decision process with a penalty term, called the MDPPT path planning algorithm, which extends the traditional Markov decision process (MDP). Under MDP, if the restaurant delivery robot bumps into an obstacle, the reward it receives is just part of the current state's reward. Under MDPPT, the reward it receives includes not only part of the current state's reward but also a negative constant term. Simulation results show that the MDPPT algorithm can plan a more secure path.
Strategy Complexity of Finite-Horizon Markov Decision Processes and Simple Stochastic Games
DEFF Research Database (Denmark)
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu
2012-01-01
Markov decision processes (MDPs) and simple stochastic games (SSGs) provide a rich mathematical framework to study many important problems related to probabilistic systems. MDPs and SSGs with finite-horizon objectives, where the goal is to maximize the probability to reach a target state in a given...
Identification of Optimal Policies in Markov Decision Processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
Roč. 46, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf
Mean-Variance Optimization in Markov Decision Processes
Mannor, Shie; Tsitsiklis, John N.
2011-01-01
We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.
Monitoring as a partially observable decision problem
Paul L. Fackler; Robert G. Haight
2014-01-01
Monitoring is an important and costly activity in resource management problems such as containing invasive species, protecting endangered species, preventing soil erosion, and regulating contracts for environmental services. Recent studies have viewed optimal monitoring as a Partially Observable Markov Decision Process (POMDP), which provides a framework for...
Hamiltonian cycle problem and Markov chains
Borkar, Vivek S; Filar, Jerzy A; Nguyen, Giang T
2014-01-01
This book summarizes a line of research that maps certain classical problems of discrete mathematics and operations research - such as the Hamiltonian cycle and the Travelling Salesman problems - into convex domains where continuum analysis can be carried out.
On Characterisation of Markov Processes Via Martingale Problems
Indian Academy of Sciences (India)
This extension is used to improve on a criterion for a probability measure to be invariant for the semigroup associated with the Markov process. We also give examples of martingale problems that are well-posed in the class of solutions which are continuous in probability but for which no r.c.l.l. solution exists.
Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach
Charng, T.
1982-01-01
Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. Taking into account the joint effect of the probabilistic transitions between states and the sequence of decisions, the optimal policy is sought such that the set of decisions optimizes the long-run expected average cost (or profit) per unit time. An alternative measure, the expected long-run total discounted cost, is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability, since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate actions on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate near-optimal solutions in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
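The on-demand, sample-based planning idea described above can be sketched with a simple one-step Monte Carlo lookahead: instead of enumerating the transition model, the planner draws samples from a generative model and only ever touches states reachable from the current one. The tiny queue-admission generative model and all its numbers below are made-up stand-ins, not the paper's hospital model.

```python
# Sample-based planning sketch: estimate Q(state, a) by Monte Carlo
# rollouts from a generative simulator, and pick the best action for the
# current state only. The admission model is a made-up illustration.
import random

random.seed(0)
GAMMA, CAPACITY = 0.95, 10

def simulate(state, action):
    """Generative model: one step of (reward, next_state)."""
    occupied = state
    admitted = min(action, CAPACITY - occupied)       # admit up to capacity
    occupied += admitted
    discharged = sum(random.random() < 0.3 for _ in range(occupied))
    reward = 2.0 * admitted - 1.0 * max(occupied - 8, 0)  # overload penalty
    return reward, occupied - discharged

def rollout(state, depth):
    """Discounted return of a random policy, used to value leaves."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        r, state = simulate(state, random.randint(0, 3))
        total += discount * r
        discount *= GAMMA
    return total

def plan_action(state, n_samples=200, depth=15):
    """Only states reachable from `state` are ever sampled -- the full
    model is never enumerated."""
    best_a, best_q = None, float("-inf")
    for a in range(4):                                # admit 0..3 patients
        q = 0.0
        for _ in range(n_samples):
            r, s2 = simulate(state, a)
            q += (r + GAMMA * rollout(s2, depth)) / n_samples
        if q > best_q:
            best_a, best_q = a, q
    return best_a

best = plan_action(state=2)
print(best)
```

The per-call cost depends on the number of samples and the rollout depth, not on the size of the state space, which is why this style of planner scales to models whose explicit enumeration is intractable.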
2017-03-23
POLICIES THESIS Presented to the Faculty Department of Operational Sciences Graduate School of Engineering and Management Air Force Institute of Technology...dispatching policy and three practitioner-friendly myopic baseline policies. Two computational experiments, a two-level, five-factor screening design and a...over, an open question exists concerning the best exact solution approach for solving Markov decision problems due to recent advances in performance by
Directory of Open Access Journals (Sweden)
Rajesh P N Rao
2010-11-01
Full Text Available A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception, but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single optimal estimate of state but on the posterior distribution over states (the belief state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize
Control Design for Untimed Petri Nets Using Markov Decision Processes
Directory of Open Access Journals (Sweden)
Cherki Daoui
2017-01-01
Full Text Available This paper presents the design of control sequences for discrete event systems (DESs) modelled by untimed Petri nets (PNs). PNs are well-known mathematical and graphical models that are widely used to describe distributed DESs, including choices, synchronizations and parallelisms. The domains of application include, but are not restricted to, manufacturing systems, computer science and transportation networks. We are motivated by the observation that such systems need to plan their production or services. The paper is more particularly concerned with control issues in uncertain environments, when unexpected events occur or when control errors disturb the behaviour of the system. To deal with such uncertainties, a new approach based on discrete-time Markov decision processes (MDPs) is proposed that combines the modelling power of PNs with the planning power of MDPs. Finally, simulation results illustrate the benefit of our method from the computational point of view. (original abstract)
Pavement maintenance optimization model using Markov Decision Processes
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Several characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. The paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model, and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
Learning to maximize reward rate: a model based on semi-Markov decision processes.
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finally finds the optimal values.
Decision-Making in Critical Limb Ischemia: A Markov Simulation.
Deutsch, Aaron J; Jain, C Charles; Blumenthal, Kimberly G; Dickinson, Mark W; Neilan, Anne M
2017-11-01
Critical limb ischemia (CLI) is a feared complication of peripheral vascular disease that often requires surgical management and may require amputation of the affected limb. We developed a decision model to inform clinical management for a 63-year-old woman with CLI and multiple medical comorbidities, including advanced heart failure and diabetes. We developed a Markov decision model to evaluate 4 strategies: amputation, surgical bypass, endovascular therapy (e.g. stent or revascularization), and medical management. We measured the impact of parameter uncertainty using 1-way, 2-way, and multiway sensitivity analyses. In the base case, endovascular therapy yielded similar discounted quality-adjusted life months (26.50 QALMs) compared with surgical bypass (26.34 QALMs). Both endovascular and surgical therapies were superior to amputation (18.83 QALMs) and medical management (11.08 QALMs). This finding was robust to a wide range of periprocedural mortality weights and was most sensitive to long-term mortality associated with endovascular and surgical therapies. Utility weights were not stratified by patient comorbidities; nonetheless, our conclusion was robust to a range of utility weight values. For a patient with CLI, endovascular therapy and surgical bypass provided comparable clinical outcomes. However, this finding was sensitive to long-term mortality rates associated with each procedure. Both endovascular and surgical therapies were superior to amputation or medical management in a range of scenarios. Copyright © 2017 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)
2016-08-15
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
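The linear-programming formulation that such results build on minimizes \(\sum_s V(s)\) subject to \(V(s) \ge r(s,a) + \gamma \sum_{s'} P(s'|s,a)\,V(s')\) for every state-action pair. A sketch for an unconstrained two-state toy example follows; the paper adds cost constraints and Borel state/action spaces on top of this basic LP, and the numbers here are invented.

```python
# LP formulation of a discounted MDP: minimise sum_s V(s) subject to
# V(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) V(s') for every (s, a).
# Toy 2-state example with made-up numbers.
import numpy as np
from scipy.optimize import linprog

GAMMA = 0.9
P = np.array([[[0.9, 0.1], [0.4, 0.6]],   # P[a, s, s']
              [[0.2, 0.8], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],                 # R[a, s]
              [0.0, 2.0]])

n_a, n_s, _ = P.shape
# One inequality per (s, a): (gamma * P[a, s, :] - e_s) @ V <= -R[a, s]
A_ub = np.vstack([GAMMA * P[a, s] - np.eye(n_s)[s]
                  for a in range(n_a) for s in range(n_s)])
b_ub = np.array([-R[a, s] for a in range(n_a) for s in range(n_s)])

res = linprog(c=np.ones(n_s), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_s, method="highs")
V = res.x
print(V)   # the optimal discounted value function, one entry per state
```

The LP optimum equals the optimal value function because minimizing the objective pushes each \(V(s)\) down onto the tightest Bellman constraint; constrained MDPs as in the paper add further linear constraints on occupation measures in the dual.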
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
Migawa, Klaudiusz
2012-12-01
This paper addresses problems connected with controlling the exploitation (operation and maintenance) process of technical objects in complex exploitation systems. It presents a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, formulated as a semi-Markov decision process. The method consists of building the semi-Markov decision model of the exploitation process and then selecting the best control strategy (the optimal strategy) from among the possible decision variants, according to an adopted criterion (or criteria) for evaluating the operation of the exploitation system. Determining the optimal availability-control strategy means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to find the optimal control strategy. The method is illustrated with the exploitation process of means of transport in a real municipal bus transport system. The model of the exploitation process was built on the basis of data collected from that transport system, under the assumption that the process is a homogeneous semi-Markov process.
Modeling treatment of ischemic heart disease with partially observable Markov decision processes.
Hauskrecht, M; Fraser, H
1998-01-01
Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of Partially observable Markov decision processes (POMDPs) developed and used in operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.
Planning treatment of ischemic heart disease with partially observable Markov decision processes.
Hauskrecht, M; Fraser, H
2000-03-01
Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time. This is mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease (IHD), and demonstrate the modeling advantages of the framework over standard decision formalisms.
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1975-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1974-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
Bennett, Casey C; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal
The exit-time problem for a Markov jump process
Burch, N.; D'Elia, M.; Lehoucq, R. B.
2014-12-01
The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
Dynamic Request Routing for Online Video-on-Demand Service: A Markov Decision Process Approach
Directory of Open Access Journals (Sweden)
Jianxiong Wan
2014-01-01
We investigate the request routing problem in a CDN-based Video-on-Demand system. We model the system as a controlled queueing system comprising a dispatcher and several edge servers, and formulate it as a Markov decision process (MDP). Since the MDP formulation suffers from the so-called "curse of dimensionality" problem, we develop a greedy heuristic algorithm, which is simple and can be implemented online, to approximately solve the MDP model. However, we do not know how far it deviates from the optimal solution. To address this problem, we further aggregate the state space of the original MDP model and use a bounded-parameter MDP (BMDP) to reformulate the system. This allows us to obtain a suboptimal solution with a known performance bound. The effectiveness of the two approaches is evaluated in a simulation study.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Energy Technology Data Exchange (ETDEWEB)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
The Consensus String Problem and the Complexity of Comparing Hidden Markov Models
DEFF Research Database (Denmark)
Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm
2002-01-01
The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing the probability of generating a given string, or computing the most likely path generating a given string. In this paper we consider the problem of computing the most likely string, or consensus string, generated by a given model, and its implications on the complexity of comparing hidden Markov models. We show that computing the consensus string, and approximating its probability within any constant factor, is NP-hard, and that the same holds for the closely related labeling problem for class hidden Markov models. Furthermore, we establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1-norms.
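The "most likely path" computation contrasted with the NP-hard consensus-string problem above is the classical Viterbi algorithm, which is efficiently solvable by dynamic programming. A minimal sketch in Python, using a hypothetical two-state HMM whose parameters are invented purely for illustration:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s]: log-probability of the best state path ending in state s at time t
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Hypothetical two-state model over the alphabet {'a', 'b'}
states = ("X", "Y")
start = {"X": 0.6, "Y": 0.4}
trans = {"X": {"X": 0.7, "Y": 0.3}, "Y": {"X": 0.4, "Y": 0.6}}
emit = {"X": {"a": 0.9, "b": 0.1}, "Y": {"a": 0.2, "b": 0.8}}
print(viterbi("aabb", states, start, trans, emit))  # → ['X', 'X', 'Y', 'Y']
```

Working in log-space avoids the numerical underflow that plagues products of many small probabilities on long observation sequences.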
Moolenaar, Lobke M.; Broekmans, Frank J. M.; van Disseldorp, Jeroen; Fauser, Bart C. J. M.; Eijkemans, Marinus J. C.; Hompes, Peter G. A.; van der Veen, Fulco; Mol, Ben Willem J.
2011-01-01
To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). A Markov decision model based on data from the literature and original patient data. Decision analytic framework. Computer-simulated cohort of subfertile women aged 20 to 45 years who are eligible for IVF.
Moolenaar, Lobke M.; Broekmans, Frank J. M.; van Disseldorp, Jeroen; Fauser, Bart C. J. M.; Eijkemans, Marinus J. C.; Hompes, Peter G. A.; van der Veen, Fulco; Mol, Ben Willem J.
2011-01-01
Objective: To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). Design: A Markov decision model based on data from the literature and original patient data. Setting: Decision analytic framework. Patient(s): Computer-simulated cohort of subfertile women aged
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Mo Zhou; Joseph Buongiorno
2011-01-01
Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
The Consensus String Problem and the Complexity of Comparing Hidden Markov Models
DEFF Research Database (Denmark)
Lyngsø, Rune Bang; Pedersen, Christian Nørgaard Storm
2002-01-01
The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since then been applied to numerous problems, e.g. biological sequence analysis. Most applications of hidden Markov models are based on efficient algorithms for computing......-norms. We discuss the applicability of the technique used for proving the hardness of comparing two hidden Markov models under the L1-norm to other measures of distance between probability distributions. In particular, we show that it cannot be used for proving NP-hardness of determining the Kullback...
Markov LIMID processes for representing and solving renewal problems
DEFF Research Database (Denmark)
Jørgensen, Erik; Kristensen, Anders Ringgaard; Nilsson, Dennis
2014-01-01
to model a Markov Limid Process, where each TemLimid represents a macro action. Algorithms are presented to find optimal plans for a sequence of such macro actions. Use of algorithms is illustrated based on an extended version of an example from pig production originally used to introduce the Limid concept...
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
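Policy iteration, reported above as the most iteration-efficient of the three methods, alternates exact policy evaluation with greedy improvement. A minimal sketch on a toy two-state MDP; the states, actions, rewards, and discount factor below are illustrative assumptions, not the paper's WSC model:

```python
def policy_iteration(states, actions, P, R, gamma=0.9, tol=1e-9):
    """P[s][a]: list of (next_state, probability); R[s][a]: expected reward."""
    policy = {s: actions[0] for s in states}
    V = {s: 0.0 for s in states}
    while True:
        # Policy evaluation: Gauss-Seidel sweeps of the Bellman expectation backup
        while True:
            delta = 0.0
            for s in states:
                v = R[s][policy[s]] + gamma * sum(p * V[s2] for s2, p in P[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement: greedy one-step lookahead against the evaluated V
        stable = True
        for s in states:
            best = max(actions,
                       key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
            if best != policy[s]:
                policy[s], stable = best, False
        if stable:
            return policy, V

# Toy two-state, two-action MDP (illustrative numbers only)
states = ("s0", "s1")
actions = ("slow", "fast")
R = {"s0": {"slow": 1.0, "fast": 2.0}, "s1": {"slow": 3.0, "fast": 0.0}}
P = {"s0": {"slow": [("s0", 1.0)], "fast": [("s1", 1.0)]},
     "s1": {"slow": [("s1", 1.0)], "fast": [("s0", 1.0)]}}
policy, V = policy_iteration(states, actions, P, R)
print(policy)  # → {'s0': 'fast', 's1': 'slow'}
```

Because each improvement step is greedy against an exactly evaluated value function, the outer loop typically terminates in very few iterations, which matches the iteration-count advantage the abstract reports.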
Decision problems for groups and semigroups
International Nuclear Information System (INIS)
Adian, S I; Durnev, V G
2000-01-01
The paper presents a detailed survey of results concerning the main decision problems of group theory and semigroup theory, including the word problem, the isomorphism problem, recognition problems, and other algorithmic questions related to them. The well-known theorems of Markov-Post, P.S. Novikov, Adian-Rabin, Higman, Magnus, and Lyndon are given with complete proofs. As a rule, the proofs presented in this survey are substantially simpler than those given in the original papers. For the sake of completeness, we first prove the insolubility of the halting problem for Turing machines, on which the insolubility of the word problem for semigroups is based. Specific attention is also paid to the simplest examples of semigroups with insoluble word problem. We give a detailed proof of the significant result of Lyndon that, in the class of groups presented by a system of defining relations for which the maximum mutual overlapping of any two relators is strictly less than one fifth of their lengths, the word problem is soluble, while insoluble word problems can occur when non-strict inequality is allowed. A proof of the corresponding result for finitely presented semigroups is also given, when the corresponding fraction is one half
An integrated Markov decision process and nested logit consumer response model of air ticket pricing
Lu, J.; Feng, T.; Timmermans, H.P.J.; Yang, Z.
2017-01-01
The paper proposes an optimal air ticket pricing model over the booking horizon that takes into account passengers' ticket-purchasing behavior. A Markov decision process incorporating a nested logit consumer response model is established to model the dynamic pricing process.
Road maintenance optimization through a discrete-time semi-Markov decision process
International Nuclear Information System (INIS)
Zhang Xueqing; Gao Hui
2012-01-01
Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on the renewal theory. However, the discrete-time Markov process cannot capture the real time at which the state transits while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies to minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that they are convenient to incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and the randomness in selecting actions occurs only for a few states.
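The discrete Weibull holding times assumed in this semi-Markov model can be sampled by inverting the survival function P(T > k) = q^(k^β) (the Nakagawa-Osaki form). A minimal sketch; the parameter values in the usage lines are illustrative only, not taken from the paper:

```python
import math
import random

def discrete_weibull_holding(q, beta, rng=random):
    """Sample a holding time T in {1, 2, ...} with survival function
    P(T > k) = q**(k**beta)  (Nakagawa-Osaki discrete Weibull).
    Inversion: solve q**(t**beta) = u for continuous t, then round up."""
    u = 1.0 - rng.random()          # u uniform on (0, 1]
    t = (math.log(u) / math.log(q)) ** (1.0 / beta)
    return max(1, math.ceil(t))

# With beta = 1 the distribution is geometric: P(T > k) = q**k, mean 1/(1-q)
rng = random.Random(0)
samples = [discrete_weibull_holding(0.8, 1.0, rng) for _ in range(2000)]
print(sum(samples) / len(samples))  # close to 5 for q = 0.8
```

The shape parameter β lets the holding-time hazard increase (β > 1) or decrease (β < 1) with age, which is what distinguishes this deterioration model from the memoryless discrete-time Markov process the abstract contrasts it with.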
Zhang, Yuanhui; Wu, Haipeng; Denton, Brian T; Wilson, James R; Lobo, Jennifer M
2017-10-27
Markov models are commonly used for decision-making studies in many application domains; however, there are no widely adopted methods for performing sensitivity analysis on such models with uncertain transition probability matrices (TPMs). This article describes two simulation-based approaches for conducting probabilistic sensitivity analysis on a given discrete-time, finite-horizon, finite-state Markov model using TPMs that are sampled over a specified uncertainty set according to a relevant probability distribution. The first approach assumes no prior knowledge of the probability distribution, and each row of a TPM is independently sampled from the uniform distribution on the row's uncertainty set. The second approach involves random sampling from the (truncated) multivariate normal distribution of the TPM's maximum likelihood estimators for its rows subject to the condition that each row has nonnegative elements and sums to one. The two sampling methods are easily implemented and have reasonable computation times. A case study illustrates the application of these methods to a medical decision-making problem involving the evaluation of treatment guidelines for glycemic control of patients with type 2 diabetes, where natural variation in a patient's glycated hemoglobin (HbA1c) is modeled as a Markov chain, and the associated TPMs are subject to uncertainty.
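The first sampling approach described above, drawing each TPM row independently and uniformly, can be sketched as follows. Uniform sampling on the probability simplex is a Dirichlet(1, ..., 1) draw, obtained by normalizing exponential variates; the mixing-with-the-nominal-row step used here to keep samples near the point estimate is an illustrative stand-in for the paper's uncertainty set, not its exact construction:

```python
import random

def sample_tpm_rows(nominal, radius=0.1, rng=random):
    """Draw one sampled TPM: each row is a point drawn uniformly from the
    probability simplex (Dirichlet(1, ..., 1), via normalized exponential
    variates), then mixed with the nominal row so the sample stays close
    to the point estimate."""
    sampled = []
    for row in nominal:
        draws = [rng.expovariate(1.0) for _ in row]
        total = sum(draws)
        uniform_point = [d / total for d in draws]
        sampled.append([(1.0 - radius) * p + radius * u
                        for p, u in zip(row, uniform_point)])
    return sampled

# Nominal 2-state TPM (illustrative numbers only)
nominal = [[0.7, 0.3],
           [0.2, 0.8]]
tpm = sample_tpm_rows(nominal, radius=0.1, rng=random.Random(42))
assert all(abs(sum(row) - 1.0) < 1e-9 for row in tpm)  # rows still sum to one
```

Because the sampled row is a convex combination of two points on the simplex, every sample is automatically a valid probability vector, so no rejection or renormalization step is needed.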
Directory of Open Access Journals (Sweden)
Eric A Zilli
2008-12-01
Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previously shown that when tasks are written mathematically as a form of partially-observable Markov decision processes, the structure of the tasks provides information regarding the possible utility of certain memory systems. These previous analyses dealt with the disambiguation problem: given a specific ambiguous observation of the environment, is there information provided by a given memory strategy that can disambiguate that observation to allow a correct decision? Here we extend this approach to cases where multiple memory systems can be strategically combined in different ways. Specifically, we analyze the disambiguation arising from three ways by which episodic-like memory retrieval might be cued (by another episodic-like memory, by a semantic association, or by working memory for some earlier observation). We also consider the disambiguation arising from holding earlier working memories, episodic-like memories or semantic associations in working memory. From these analyses we can begin to develop a quantitative hierarchy among memory systems in which stimulus-response memories and semantic associations provide no disambiguation while the episodic memory system provides the most flexible
A markov decision process model for the optimal dispatch of military medical evacuation assets.
Keneally, Sean K; Robbins, Matthew J; Lunday, Brian J
2016-06-01
We develop a Markov decision process (MDP) model to examine aerial military medical evacuation (MEDEVAC) dispatch policies in a combat environment. The problem of deciding which aeromedical asset to dispatch to each service request is complicated by the threat conditions at the service locations and the priority class of each casualty event. We assume requests for MEDEVAC support arrive sequentially, with the location and the priority of each casualty known upon initiation of the request. The United States military uses a 9-line MEDEVAC request system to classify casualties as being one of three priority levels: urgent, priority, and routine. Multiple casualties can be present at a single casualty event, with the highest priority casualty determining the priority level for the casualty event. Moreover, an armed escort may be required depending on the threat level indicated by the 9-line MEDEVAC request. The proposed MDP model indicates how to optimally dispatch MEDEVAC helicopters to casualty events in order to maximize steady-state system utility. The utility gained from servicing a specific request depends on the number of casualties, the priority class for each of the casualties, and the locations of both the servicing ambulatory helicopter and casualty event. Instances of the dispatching problem are solved using a relative value iteration dynamic programming algorithm. Computational examples are used to investigate optimal dispatch policies under different threat situations and armed escort delays; the examples are based on combat scenarios in which United States Army MEDEVAC units support ground operations in Afghanistan.
Solution of the Markov chain for the dead time problem
International Nuclear Information System (INIS)
Degweker, S.B.
1997-01-01
A method for solving the equation for the Markov chain, describing the effect of a non-extendible dead time on the statistics of time correlated pulses, is discussed. The equation, which was derived in an earlier paper, describes a non-linear process and is not amenable to exact solution. The present method consists of representing the probability generating function as a factorial cumulant expansion and neglecting factorial cumulants beyond the second. This results in a closed set of non-linear equations for the factorial moments. Stationary solutions of these equations, which are of interest for calculating the count rate, are obtained iteratively. The method is applied to the variable dead time counter technique for estimation of system parameters in passive neutron assay of Pu and reactor noise analysis. Comparisons of results by this method with Monte Carlo calculations are presented. (author)
Robust path planning for flexible needle insertion using Markov decision processes.
Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong
2018-05-11
Flexible needles have the potential to navigate accurately to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. It then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer flexible needles within soft phantom tissues and achieves high adaptability in computer simulation.
Balancing Long Lifetime and Satisfying Fairness in WBAN Using a Constrained Markov Decision Process
Directory of Open Access Journals (Sweden)
Yingqi Yin
2015-01-01
As an important part of the Internet of Things (IoT) and a special case of device-to-device (D2D) communication, the wireless body area network (WBAN) has gradually become a focus of attention. Since a WBAN is a body-centered network, the energy of its sensor nodes is strictly limited, as they are supplied by batteries with limited power. In each data collection, only one sensor node is scheduled to transmit its measurements directly to the access point (AP) through the fading channel. We formulate the problem of dynamically choosing which sensor should communicate with the AP to maximize network lifetime under a fairness constraint as a constrained Markov decision process (CMDP). The optimal lifetime and optimal policy are obtained via the Bellman equation in dynamic programming. The proposed algorithm defines the limiting performance of WBAN lifetime under different degrees of fairness constraints. Because acquiring global channel state information (CSI) entails large implementation overhead, we put forward a distributed scheduling algorithm that uses local CSI, which saves network overhead and simplifies the algorithm. It was demonstrated via simulation that this scheduling algorithm can allocate time slots reasonably under different channel conditions to balance network lifetime and fairness.
Belief Bisimulation for Hidden Markov Models Logical Characterisation and Decision Algorithm
DEFF Research Database (Denmark)
Jansen, David N.; Nielson, Flemming; Zhang, Lijun
2012-01-01
This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual partition refinement algorithm is enough. Belief bisimilarity, being a relation on the continuous space of belief states, cannot be described directly. Instead, we show how to generate a linear equation system in time cubic in the number of states.
Assistive system for people with Apraxia using a Markov decision process.
Jean-Baptiste, Emilie M D; Russell, Martin; Rothstein, Pia
2014-01-01
CogWatch is an assistive system to re-train stroke survivors suffering from Apraxia or Action Disorganization Syndrome (AADS) to complete activities of daily living (ADLs). This paper describes an approach to real-time planning based on a Markov decision process (MDP), and demonstrates its ability to improve task performance via user simulation. The paper concludes with a discussion of the remaining challenges and future enhancements.
Markov Modeling with Soft Aggregation for Safety and Decision Analysis; TOPICAL
International Nuclear Information System (INIS)
COOPER, J. ARLIN
1999-01-01
The methodology in this report improves on some of the limitations of many conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems and for expressing imprecise individual metrics as possibilistic or fuzzy numbers. A "Markov-like" model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, while also portraying the inherent uncertainty. A major goal of Markov modeling is to help convey the top-down system perspective. One of the constituent methodologies allows metrics to be weighted according to the significance of the attribute and aggregated nonlinearly as to contribution. This aggregation is performed using exponential combination of the metrics, since the accumulating effect of such factors responds less and less to additional factors. This is termed "soft" mathematical aggregation. Dependence among the contributing factors is accounted for by incorporating subjective metrics on "overlap" of the factors, as well as by correspondingly reducing the overall contribution of these combinations to the overall aggregation. Decisions corresponding to the meaningfulness of the results are facilitated in several ways. First, the results are compared to a soft threshold provided by a sigmoid function. Second, information is provided on input "Importance" and "Sensitivity," in order to know where to place emphasis on considering new controls that may be necessary. Third, trends in inputs and outputs are tracked in order to obtain significant information, including cyclic information, for the decision process. A practical example from the air transportation industry is used to demonstrate application of the methodology. Illustrations are given for developing a structure (along with recommended inputs and weights) for air transportation oversight at three different levels, for developing and using cycle information, and for developing Importance and
Detection of Text Lines of Handwritten Arabic Manuscripts using Markov Decision Processes
Directory of Open Access Journals (Sweden)
Youssef Boulid
2016-09-01
In a character recognition system, the segmentation phase is critical, since the accuracy of the recognition depends strongly on it. In this paper we present an approach based on Markov decision processes to extract text lines from binary images of Arabic handwritten documents. The proposed approach detects the connected components belonging to the same line by making use of knowledge about the features and arrangement of those components. Initial results show that the system is promising for extracting Arabic handwritten lines.
Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2013-01-01
Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others:AVČR a CONACyT(CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf
Data-Driven Markov Decision Process Approximations for Personalized Hypertension Treatment Planning
Directory of Open Access Journals (Sweden)
Greggory J. Schell PhD
2016-10-01
Background: Markov decision process (MDP) models are powerful tools. They enable the derivation of optimal treatment policies but may incur long computational times and generate decision rules that are challenging to interpret by physicians. Methods: In an effort to improve usability and interpretability, we examined whether Poisson regression can approximate optimal hypertension treatment policies derived by an MDP for maximizing a patient's expected discounted quality-adjusted life years. Results: We found that our Poisson approximation to the optimal treatment policy matched the optimal policy in 99% of cases. This high accuracy translates to nearly identical health outcomes for patients. Furthermore, the Poisson approximation results in 104 additional quality-adjusted life years per 1000 patients compared to the Seventh Joint National Committee's treatment guidelines for hypertension. The comparative health performance of the Poisson approximation was robust to the cardiovascular disease risk calculator used and calculator calibration error. Limitations: Our results are based on Markov chain modeling. Conclusions: Poisson model approximation for blood pressure treatment planning has high fidelity to optimal MDP treatment policies, which can improve usability and enhance transparency of more personalized treatment policies.
Basic problems solving for two-dimensional discrete 3 × 4 order hidden markov model
International Nuclear Information System (INIS)
Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang
2016-01-01
A novel model is proposed to overcome the shortcomings of the classical hypothesis of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model, and studies its three basic problems: probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, algorithms solving these three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Because the structure of the proposed model captures more statistical characteristics, it can theoretically describe some practical problems more accurately than the two-dimensional discrete hidden Markov model.
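The reduction the authors exploit (treating rows or columns as one-dimensional HMM state sequences) ultimately rests on the standard 1-D forward algorithm for probability calculation. A minimal version on an invented two-state, two-symbol HMM:

```python
# Forward algorithm for a discrete 1-D HMM: the probability-calculation
# building block that the 2-D model's row/column reduction reuses.
# The two-state example parameters below are invented.

def forward(obs, pi, A, B):
    """Return P(observation sequence) for a discrete HMM.
    pi[i]: initial probability, A[i][j]: transition, B[i][o]: emission."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]   # initialization
    for o in obs[1:]:                                  # induction step
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)                                  # termination

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward([0, 1, 0], pi, A, B))
```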
Kirkwood, James R
2015-01-01
Review of Probability: Short History; Review of Basic Probability Definitions; Some Common Probability Distributions; Properties of a Probability Distribution; Properties of the Expected Value; Expected Value of a Random Variable with Common Distributions; Generating Functions; Moment Generating Functions; Exercises. Discrete-Time, Finite-State Markov Chains: Introduction; Notation; Transition Matrices; Directed Graphs: Examples of Markov Chains; Random Walk with Reflecting Boundaries; Gambler's Ruin; Ehrenfest Model; Central Problem of Markov Chains; Condition to Ensure a Unique Equilibrium State; Finding the Equilibrium State; Transient and Recurrent States; Indicator Functions; Perron-Frobenius Theorem; Absorbing Markov Chains; Mean First Passage Time; Mean Recurrence Time and the Equilibrium State; Fundamental Matrix for Regular Markov Chains; Dividing a Markov Chain into Equivalence Classes; Periodic Markov Chains; Reducible Markov Chains; Summary; Exercises. Discrete-Time, Infinite-State Markov Chains: Renewal Processes; Delayed Renewal Processes; Equilibrium State f...
Karmarkar, Taruja D; Maurer, Anne; Parks, Michael L; Mason, Thomas; Bejinez-Eastman, Ana; Harrington, Melvyn; Morgan, Randall; O'Connor, Mary I; Wood, James E; Gaskin, Darrell J
2017-12-01
Disparities in the presentation of knee osteoarthritis (OA) and in the utilization of treatment across sex, racial, and ethnic groups in the United States are well documented. We used a Markov model to calculate lifetime costs of knee OA treatment. We then used the model results to compute costs of disparities in treatment by race, ethnicity, sex, and socioeconomic status. We used the literature to construct a Markov Model of knee OA and publicly available data to create the model parameters and patient populations of interest. An expert panel of physicians, who treated a large number of patients with knee OA, constructed treatment pathways. Direct costs were based on the literature and indirect costs were derived from the Medical Expenditure Panel Survey. We found that failing to obtain effective treatment increased costs and limited benefits for all groups. Delaying treatment imposed a greater cost across all groups and decreased benefits. Lost income because of lower labor market productivity comprised a substantial proportion of the lifetime costs of knee OA. Population simulations demonstrated that as the diversity of the US population increases, the societal costs of racial and ethnic disparities in treatment utilization for knee OA will increase. Our results show that disparities in treatment of knee OA are costly. All stakeholders involved in treatment decisions for knee OA patients should consider costs associated with delaying and forgoing treatment, especially for disadvantaged populations. Such decisions may lead to higher costs and worse health outcomes.
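The core of such a lifetime-cost calculation is a discounted cohort simulation over disease states. The sketch below uses an invented four-state knee-OA progression, invented annual costs, and a 3% discount rate, not the parameters of the study:

```python
# Markov cohort model sketch: annual transition probabilities and
# per-state costs turned into an expected discounted lifetime cost.
# All states and numbers below are invented for illustration.

P = [  # annual transitions: mild -> moderate -> severe -> death (absorbing)
    [0.85, 0.12, 0.02, 0.01],
    [0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
]
cost = [1000.0, 4000.0, 9000.0, 0.0]    # annual cost per state (USD)
d = 1 / 1.03                            # 3% annual discount factor

dist = [1.0, 0.0, 0.0, 0.0]             # cohort starts in 'mild'
total = 0.0
for year in range(100):                 # run the cohort for 100 cycles
    total += d ** year * sum(p * c for p, c in zip(dist, cost))
    dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]
print(round(total))
```

Disparity costs would then be computed by re-running the cohort with group-specific treatment-utilization parameters and differencing the totals.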
The Markov Latent Effects Approach to Safety and Decision-Making; TOPICAL
International Nuclear Information System (INIS)
COOPER, J. ARLIN
2001-01-01
The methodology in this report addresses the safety effects of organizational and operational factors that can be measured through "inspection." The investigation grew out of a preponderance of evidence that the safety "culture" (attitude of employees and management toward safety) was frequently one of the major root causes behind accidents or safety-relevant failures. The approach is called "Markov latent effects" analysis. Since safety also depends on a multitude of factors that are best measured through well-known risk analysis methods (e.g., fault trees, event trees, FMECA, physical response modeling, etc.), the Markov latent effects approach supplements conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems, for determining the most appropriate items to be measured, and for expressing the measurements as imprecise subjective metrics through possibilistic or fuzzy numbers. A mathematical model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, also portraying the inherent uncertainty. A major goal of the modeling is to help convey the top-down system perspective. Metrics are weighted according to significance of the attribute with respect to subsystems and are aggregated nonlinearly. Since the accumulating effect responds less and less to additional contributions, it is termed "soft" mathematical aggregation, which is analogous to how humans frequently make decisions. Dependence among the contributing factors is accounted for by incorporating subjective metrics on commonality and by reducing the overall contribution of these combinations to the overall aggregation. Decisions derived from the results are facilitated in several ways. First, information is provided on input "Importance" and "Sensitivity" (both Primary and Secondary) in order to know where to place emphasis on investigation of root causes and in considering new
Portfolio allocation under the vendor managed inventory: A Markov ...
African Journals Online (AJOL)
Portfolio allocation under the vendor managed inventory: A Markov decision process. ... Journal of Applied Sciences and Environmental Management ... This study provides a review of Markov decision processes and investigates its suitability for solutions to portfolio allocation problems under vendor managed inventory in ...
Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems
Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming
2018-06-01
Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to maintain the high accuracy of coarse-level corrections. Its fast convergence rate is therefore well guaranteed, but a large proportion of the runtime is often consumed by the aggregation processes. In this paper, we show that the aggregates on each level in this method can be used to transform the probability equation of that level into a block linear system. We then propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new ways of making aggregation processes more cost-effective for aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.
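For scale, the baseline these multigrid methods compete against is plain power iteration on the teleporting random-walk matrix. A minimal PageRank power iteration on an invented four-page link graph:

```python
# Power-iteration PageRank: the simple baseline that aggregation
# multigrid methods aim to beat on hard instances. The four-page link
# graph and damping factor below are invented/standard defaults.

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> outgoing links
N, damp = 4, 0.85

x = [1.0 / N] * N                             # uniform starting vector
for _ in range(200):
    nxt = [(1 - damp) / N] * N                # teleportation mass
    for i, outs in links.items():
        for j in outs:                        # spread rank along links
            nxt[j] += damp * x[i] / len(outs)
    x = nxt
print([round(v, 3) for v in x])
```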
Bennett, Casey C.; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This serves two potential functions: 1) a simulation environment for expl...
Kana, A.A.; Harrison, B.M.
2017-01-01
A Monte Carlo approach to the ship-centric Markov decision process (SC-MDP) is presented for analyzing whether a container ship should convert to LNG power in the face of evolving Emission Control Area regulations. The SC-MDP model was originally developed as a means to analyze uncertain,
Gedik, Ridvan; Zhang, Shengfan; Rainwater, Chase
2017-06-01
A relatively new consideration in proton therapy planning is the requirement that the mix of patients treated from different categories satisfy desired mix percentages. Deviations from these percentages and their impacts on operational capabilities are of particular interest to healthcare planners. In this study, we investigate intelligent ways of admitting patients to a proton therapy facility that maximize the total expected number of treatment sessions (fractions) delivered to patients in a planning period with stochastic patient arrivals, while penalizing deviation from the patient mix restrictions. We propose a Markov Decision Process (MDP) model that provides very useful insights into determining the best patient admission policies in the case of an unexpected opening in the facility (i.e., no-shows, appointment cancellations, etc.). In order to overcome the curse of dimensionality for larger and more realistic instances, we propose an aggregate MDP model that is able to approximate optimal patient admission policies using the worded weight aggregation technique. Our models are applicable to healthcare treatment facilities throughout the United States, but are motivated by collaboration with the University of Florida Proton Therapy Institute (UFPTI).
Simari, Gerardo I
2011-01-01
In this work, we provide a treatment of the relationship between two models that have been widely used in the implementation of autonomous agents: the Belief-Desire-Intention (BDI) model and Markov Decision Processes (MDPs). We start with an informal description of the relationship, identifying the common features of the two approaches and the differences between them. Then we hone our understanding of these differences through an empirical analysis of the performance of both models on the TileWorld testbed. This allows us to show that even though the MDP model displays consistently better behavior than the BDI model for small worlds, this is not the case when the world becomes large and the MDP model cannot be solved exactly. Finally we present a theoretical analysis of the relationship between the two approaches, identifying mappings that allow us to extract a set of intentions from a policy (a solution to an MDP), and to extract a policy from a set of intentions.
Moolenaar, Lobke M; Broekmans, Frank J M; van Disseldorp, Jeroen; Fauser, Bart C J M; Eijkemans, Marinus J C; Hompes, Peter G A; van der Veen, Fulco; Mol, Ben Willem J
2011-10-01
To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). A Markov decision model based on data from the literature and original patient data. Decision analytic framework. Computer-simulated cohort of subfertile women aged 20 to 45 years who are eligible for IVF. [1] No treatment, [2] up to three cycles of IVF limited to women under 41 years and no ovarian reserve testing, [3] up to three cycles of IVF with dose individualization of gonadotropins according to ovarian reserve, and [4] up to three cycles of IVF with ovarian reserve testing and exclusion of expected poor responders after the first cycle, with no treatment scenario as the reference scenario. Cumulative live birth over 1 year, total costs, and incremental cost-effectiveness ratios. The cumulative live birth was 9.0% in the no treatment scenario, 54.8% for scenario 2, 70.6% for scenario 3 and 51.9% for scenario 4. Absolute costs per woman for these scenarios were €0, €6,917, €6,678, and €5,892 for scenarios 1, 2, 3, and 4, respectively. Incremental cost-effectiveness ratios (ICER) for scenarios 2, 3, and 4 were €15,166, €10,837, and €13,743 per additional live birth. Sensitivity analysis showed the model to be robust over a wide range of values. Individualization of the follicle-stimulating hormone dose according to ovarian reserve is likely to be cost effective in women who are eligible for IVF, but this effectiveness needs to be confirmed in randomized clinical trials. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
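The incremental cost-effectiveness ratios follow directly from the quoted costs and live-birth rates: ICER = (cost − cost_ref) / (effect − effect_ref) against the no-treatment reference. Recomputing them from the rounded figures above (so the results differ slightly from the published €15,166, €10,837 and €13,743):

```python
# ICER bookkeeping for the four scenarios, using the cost and
# live-birth figures quoted in the abstract. The inputs are rounded
# published values, so small differences from the reported ratios
# are expected.

scenarios = {   # name: (cost per woman in EUR, cumulative live birth rate)
    "no treatment": (0.0, 0.090),
    "IVF without ovarian reserve testing": (6917.0, 0.548),
    "IVF with dose individualization": (6678.0, 0.706),
    "IVF with poor-responder exclusion": (5892.0, 0.519),
}
ref_cost, ref_eff = scenarios["no treatment"]
for name, (cost, eff) in scenarios.items():
    if name == "no treatment":
        continue            # reference scenario: no ICER against itself
    icer = (cost - ref_cost) / (eff - ref_eff)
    print(f"{name}: EUR {icer:,.0f} per additional live birth")
```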
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently with the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
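The discrete-state computation the LMDP framework relies on fits in a few lines: the exponentiated Bellman equation z = exp(−q) ∘ Pz is solved for the desirability z by fixed-point iteration, and the value function is recovered as v = −log z. The chain, state costs, and passive dynamics below are invented:

```python
# First-exit LMDP on a toy 1-D chain: fixed-point iteration on the
# desirability z under the passive dynamics P, with the goal state
# pinned. All numbers are invented for illustration.
import math

N = 5                              # states 0..4; state 4 is the goal
q = [1.0, 1.0, 1.0, 1.0, 0.0]      # per-step state costs; goal is free

# passive (uncontrolled) dynamics: random walk, reflecting left wall
P = [[0.0] * N for _ in range(N)]
P[0][0], P[0][1] = 0.5, 0.5
for s in (1, 2, 3):
    P[s][s - 1], P[s][s + 1] = 0.5, 0.5

z = [1.0] * N
for _ in range(1000):              # z_i <- exp(-q_i) * sum_j P_ij z_j
    z = [math.exp(-q[s]) * sum(P[s][j] * z[j] for j in range(N))
         for s in range(4)] + [1.0]          # goal desirability pinned at 1

v = [-math.log(zi) for zi in z]    # optimal value function v = -log z
print([round(vi, 2) for vi in v])
```

The optimal controlled transition probabilities then follow as u*(j|s) ∝ P[s][j]·z[j], which is what makes the problem linear rather than a max over actions.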
Markov processes and controlled Markov chains
Filar, Jerzy; Chen, Anyue
2002-01-01
The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have been, for a long time, aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters that appear in this book reflect both the maturity and the vitality of modern day Markov processes and controlled Markov chains. They also will provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by the European, US, Central and South Ameri...
A competitive Markov decision process model for the energy–water–climate change nexus
International Nuclear Information System (INIS)
Nanduri, Vishnu; Saavedra-Antolínez, Ivan
2013-01-01
Highlights: • Developed a CMDP model for the energy–water–climate change nexus. • Solved the model using a reinforcement learning algorithm. • Study demonstrated on a 30-bus IEEE electric power network using a DCOPF formulation. • Sixty percent drop in CO2 and 40% drop in H2O use when coal is replaced by wind (over 10 years). • Higher profits for nuclear and wind as well as higher LMPs under CO2 and H2O taxes. - Abstract: Drought-like conditions in some parts of the US and around the world are causing water shortages that lead to power failures, becoming a source of concern to independent system operators. Water shortages can cause significant challenges in electricity production and thereby a direct socioeconomic impact on the surrounding region. Our paper presents a new, comprehensive quantitative model that examines the electricity–water–climate change nexus. We investigate the impact of a joint water and carbon tax proposal on the operation of a transmission-constrained power network operating in a wholesale power market setting. We develop a competitive Markov decision process (CMDP) model for the dynamic competition in wholesale electricity markets, and solve the model using reinforcement learning. Several cases, including the impact of different tax schemes, integration of stochastic wind energy resources, and capacity disruptions due to droughts, are investigated. Results from the analysis on the sample power network show that electricity prices increased with the adoption of water and carbon taxes compared with locational marginal prices without taxes. As expected, wind energy integration reduced both CO2 emissions and water usage. Capacity disruptions also caused locational marginal prices to increase. Other detailed analyses and results obtained using the 30-bus IEEE network are discussed in detail.
A Markov decision process for managing habitat for Florida scrub-jays
Johnson, Fred A.; Breininger, David R.; Duncan, Brean W.; Nichols, James D.; Runge, Michael C.; Williams, B. Ken
2011-01-01
Florida scrub-jays Aphelocoma coerulescens are listed as threatened under the Endangered Species Act due to loss and degradation of scrub habitat. This study concerned the development of an optimal strategy for the restoration and management of scrub habitat at Merritt Island National Wildlife Refuge, which contains one of the few remaining large populations of scrub-jays in Florida. There are documented differences in the reproductive and survival rates of scrub-jays among discrete classes of scrub height, so we sought a management strategy that would maximize the long-term growth rate of the resident scrub-jay population. We used aerial imagery with multistate Markov models to estimate annual transition probabilities among the four scrub-height classes under three possible management actions: scrub restoration (mechanical cutting followed by burning), a prescribed burn, or no intervention. A strategy prescribing the optimal management action for management units exhibiting different proportions of scrub-height classes was derived using dynamic programming. Scrub restoration was the optimal management action only in units dominated by mixed and tall scrub, and burning tended to be the optimal action for intermediate levels of short scrub. The optimal action was to do nothing when the amount of short scrub was greater than 30%, because short scrub mostly transitions to optimal height scrub (i.e., that state with the highest demographic success of scrub-jays) in the absence of intervention. Monte Carlo simulation of the optimal policy suggested that some form of management would be required every year. We note, however, that estimates of scrub-height transition probabilities were subject to several sources of uncertainty, and so we explored the management implications of alternative sets of transition probabilities. Generally, our analysis demonstrated the difficulty of managing for a species that requires midsuccessional habitat, and suggests that innovative management tools may be needed to help ensure the persistence of scrub-jays at Merritt Island National Wildlife Refuge. The development of a tailored monitoring
Limits of performance for the model reduction problem of hidden Markov models
Kotsalis, Georgios; Shamma, Jeff S.
2015-12-15
We introduce system theoretic notions of a Hankel operator, and Hankel norm for hidden Markov models. We show how the related Hankel singular values provide lower bounds on the norm of the difference between a hidden Markov model of order n and any lower order approximant of order n̂ < n.
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV under the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified for several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final formula for CLV obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main purpose of obtaining the transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
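The forward CLV computation that this inverse problem inverts can be sketched for the simplest two-state (active/lapsed) retention chain; the retention, reacquisition, margin, and discount numbers below are invented:

```python
# Forward CLV for a two-state (active/lapsed) customer chain in the
# spirit of Pfeifer and Carraway (2000). Retention, reacquisition,
# margin, and discount figures are invented.

r, a = 0.70, 0.20             # retention and reacquisition probabilities
margins = [100.0, -5.0]       # net margin per period: active, lapsed
d = 1 / 1.10                  # per-period discount factor (10% rate)

P = [[r, 1 - r], [a, 1 - a]]  # transition matrix over (active, lapsed)

clv = [0.0, 0.0]
for _ in range(500):          # solve CLV = m + d * P * CLV by iteration
    clv = [margins[i] + d * sum(P[i][j] * clv[j] for j in (0, 1))
           for i in (0, 1)]
print([round(c, 2) for c in clv])
```

The inverse problem the paper tackles runs the other way: given target CLV values, search (here, with the Flower Pollination Algorithm) for the r and a that reproduce them.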
Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model
International Nuclear Information System (INIS)
Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang
2016-01-01
A novel model referred to as the two-dimensional continuous 3 × 3 order hidden Markov model is put forward to avoid the disadvantages of the classical hypothesis of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of the observation relies not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are theoretically derived, by exploiting the idea that the sequences of states on rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because there are more statistical characteristics in the structure of the proposed new model, it can describe some practical problems more accurately than the two-dimensional continuous hidden Markov model.
Bayesian decision theory : A simple toy problem
van Erp, H.R.N.; Linger, R.O.; van Gelder, P.H.A.J.M.
2016-01-01
We give here a comparison of the expected outcome theory, the expected utility theory, and the Bayesian decision theory, by way of a simple numerical toy problem in which we look at the investment willingness to avert a high impact low probability event. It will be found that for this toy problem
Numerical research of the optimal control problem in the semi-Markov inventory model
Energy Technology Data Exchange (ETDEWEB)
Gorshenin, Andrey K. [Institute of Informatics Problems, Russian Academy of Sciences, Vavilova str., 44/2, Moscow, Russia MIREA, Faculty of Information Technology (Russian Federation); Belousov, Vasily V. [Institute of Informatics Problems, Russian Academy of Sciences, Vavilova str., 44/2, Moscow (Russian Federation); Shnourkoff, Peter V.; Ivanov, Alexey V. [National research university Higher school of economics, Moscow (Russian Federation)
2015-03-10
This paper is devoted to the numerical simulation of a stochastic inventory management system using a controlled semi-Markov process. Results obtained with special-purpose software for studying the system and finding the optimal control are presented.
Learning classifier systems with memory condition to solve non-Markov problems
Zang, Zhaoxiang; Li, Dehua; Wang, Junying
2012-01-01
In the family of Learning Classifier Systems, the classifier system XCS has been successfully used for many applications. However, the standard XCS has no memory mechanism and can only learn an optimal policy in Markov environments, where the optimal action is determined solely by the current sensory input. In practice, most environments are only partially observable through the agent's sensors; these are known as non-Markov environments. Within these environments, XCS either fail...
Haesaert, S.; Cauchi, N.; Abate, A.
2017-01-01
In this paper, we present an industrial application of new approximate similarity relations for Markov models, and show that they are key for the synthesis of control strategies. Typically, modern engineering systems are modelled using complex and high-order models which make the correct-by-design
International Nuclear Information System (INIS)
Olivieri, E.; Scoppola, E.
1996-01-01
In this paper we consider aperiodic ergodic Markov chains with transition probabilities exponentially small in a large parameter β. We extend to the general, not necessarily reversible case the analysis, started in part I of this work, of the first exit problem from a general domain Q containing many stable equilibria (attracting equilibrium points for the β = ∞ dynamics). In particular we describe the tube of typical trajectories during the first excursion outside Q
Multilevel markov chain monte carlo method for high-contrast single-phase flow problems
Efendiev, Yalchin R.
2014-12-19
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
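The MLMC part of the framework rests on the telescoping identity E[Q_L] = E[Q_0] + Σ_l E[Q_l − Q_{l−1}], estimated with many cheap coarse samples and few expensive fine ones. A toy version with a synthetic quantity of interest standing in for the flow solve:

```python
# Multilevel Monte Carlo sketch. Q(level, sample) is a synthetic stand-in
# for a level-l approximation of the quantity of interest (a real use
# would call a GMsFEM/flow solver); its level bias decays like 2^-level.
# Sample counts and noise scale are invented.
import random

random.seed(1)

def Q(level, sample):
    """Level-`level` approximation of the QoI for random input `sample`;
    finer levels are more accurate (perturbation shrinks as 2^-level)."""
    return sample ** 2 + 2.0 ** (-level) * random.gauss(0, 0.1)

L = 3
samples_per_level = [4000, 1000, 250, 60]   # many coarse, few fine

est = 0.0
for l in range(L + 1):
    n = samples_per_level[l]
    acc = 0.0
    for _ in range(n):
        s = random.gauss(0, 1)              # random coefficient sample
        # level 0: plain estimate; levels > 0: coupled correction term
        acc += Q(l, s) if l == 0 else Q(l, s) - Q(l - 1, s)
    est += acc / n                          # telescoping sum of means
print(round(est, 3))
```

The key cost saving is visible in the sample counts: the variance of each correction term Q_l − Q_{l−1} shrinks with level, so few fine-grid solves are needed.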
Cost-effectiveness of seven IVF strategies: results of a Markov decision-analytic model.
Fiddelers, Audrey A A; Dirksen, Carmen D; Dumoulin, John C M; van Montfoort, Aafke P A; Land, Jolande A; Janssen, J Marij; Evers, Johannes L H; Severens, Johan L
2009-07-01
A selective switch to elective single embryo transfer (eSET) in IVF has been suggested to prevent complications of fertility treatment for both mother and infants. We compared seven IVF strategies with respect to their cost-effectiveness using a Markov model. The model was based on a three-IVF-attempt time horizon and a societal perspective, using real-world strategies and data to compare the seven strategies in terms of costs, live births and incremental cost-effectiveness ratios (ICERs). In order to increase pregnancy probability, one cycle of eSET + one cycle of standard treatment policy [STP, i.e. eSET in patients IVF treatment, combining several transfer policies was not cost-effective. A choice has to be made between three cycles of eSET, STP or DET. Which strategy is to be preferred from a cost-effectiveness point of view depends, however, on society's willingness to pay.
Timing of bariatric surgery for severely obese adolescents: a Markov decision-analysis.
Stroud, Andrea M; Parker, Devin; Croitoru, Daniel P
2016-05-01
Although controversial, bariatric surgery is increasingly being performed in adolescents. We developed a model to simulate the effect of the timing of gastric bypass in obese adolescents on quantity and quality of life. A Markov state-transition model was constructed comparing two treatment strategies: gastric bypass surgery at age 16 versus delayed surgery in adulthood. The model simulated a hypothetical cohort of adolescents with a body mass index of 45 kg/m². Model inputs were derived from the current literature. The main outcome measure was quality and quantity of life, measured using quality-adjusted life-years (QALYs). For females, early gastric bypass surgery was favored by 2.02 QALYs compared to delaying surgery until age 35 (48.91 vs. 46.89 QALYs). The benefit was even greater for males, where early surgery was favored by 2.9 QALYs (48.30 vs. 45.40 QALYs). The absolute benefit of surgery at age 16 increased the later surgery was delayed into adulthood. Sensitivity analyses demonstrated that adult surgery was favored only when the values for adverse events were unrealistically high. In our model, early gastric bypass in obese adolescents improved both quality and quantity of life. These findings are useful for surgeons and pediatricians when counseling adolescents considering weight loss surgery. Copyright © 2016 Elsevier Inc. All rights reserved.
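A Markov state-transition (cohort) model of the kind used in this study can be sketched generically. The three states, transition probabilities, and utility weights below are hypothetical placeholders, not the study's inputs:

```python
def cohort_qalys(transition, utilities, cycles, start=(1.0, 0.0, 0.0)):
    """Run a discrete-time Markov cohort model and accumulate QALYs.
    States: 0 = well, 1 = sick, 2 = dead (illustrative labels only)."""
    dist = list(start)
    total = 0.0
    for _ in range(cycles):
        # QALYs gained this cycle: utility weighted by state occupancy.
        total += sum(p * u for p, u in zip(dist, utilities))
        # Advance the cohort one cycle: row vector times transition matrix.
        dist = [sum(dist[i] * transition[i][j] for i in range(3))
                for j in range(3)]
    return total

# Hypothetical annual transition probabilities (each row sums to 1).
T = [[0.90, 0.08, 0.02],
     [0.10, 0.80, 0.10],
     [0.00, 0.00, 1.00]]
qalys = cohort_qalys(T, utilities=(1.0, 0.6, 0.0), cycles=40)
```

Comparing two strategies then amounts to running the same machinery with two different transition matrices (or utilities) and taking the difference in accumulated QALYs.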
[Utilities: a solution of a decision problem?].
Koller, Michael; Ohmann, Christian; Lorenz, Wilfried
2008-01-01
Utility is a concept that originates from utilitarianism, a highly influential philosophical school in the Anglo-American world. The cornerstone of utilitarianism is the principle of maximum happiness or utility. In the medical sciences, this utility approach has been adopted and developed within the field of medical decision making. On an operational level, utility is the evaluation of a health state or an outcome on a one-dimensional scale ranging from 0 (death) to 1 (perfect health). By adding the concept of expectancy, the graphic representation of both concepts in a decision tree results in the specification of expected utilities and helps to resolve complex medical decision problems. Criticism of the utility approach relates to the rational perspective on humans (which is rejected by a considerable fraction of research in psychology) and to the artificial methods used in the evaluation of utility, such as Standard Gamble or Time Trade Off. These may well be the reason why the utility approach has never been accepted in Germany. Nevertheless, innovative concepts for defining goals in health care are urgently required, as the current debate in Germany on "Nutzen" (interestingly translated as 'benefit' instead of as 'utility') and integrated outcome models indicates. It remains to be seen whether this discussion will lead to a re-evaluation of the utility approach.
Jain, Madhu; Meena, Rakesh Kumar
2018-03-01
A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and a mixed type of standby support has been studied. The repair of broken-down machines is carried out according to a bi-level threshold policy for the activation of the servers: a server returns to render repair service only when a pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs. The Runge-Kutta method is implemented to solve the set of governing equations of the Markov model. Various system metrics, including the mean queue length, machine availability and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS), and the numerical results obtained by the Runge-Kutta approach are validated against the computational results generated by ANFIS.
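The governing equations of such a Markov model form a linear ODE system (the Kolmogorov forward equations), which the authors solve with the Runge-Kutta method. A minimal sketch on a hypothetical two-state failure/repair model, with rates invented for illustration:

```python
def rk4_step(p, Q, dt):
    """One classical fourth-order Runge-Kutta step for the Kolmogorov
    forward equations dp/dt = p * Q of a continuous-time Markov chain."""
    n = len(p)
    def deriv(state):
        return [sum(state[i] * Q[i][j] for i in range(n)) for j in range(n)]
    k1 = deriv(p)
    k2 = deriv([p[i] + 0.5 * dt * k1[i] for i in range(n)])
    k3 = deriv([p[i] + 0.5 * dt * k2[i] for i in range(n)])
    k4 = deriv([p[i] + dt * k3[i] for i in range(n)])
    return [p[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
            for i in range(n)]

# Toy 2-state up/down model: failure rate 1.0, repair rate 2.0
# (hypothetical rates, not the paper's threshold policy).
Q = [[-1.0,  1.0],
     [ 2.0, -2.0]]
p = [1.0, 0.0]          # start with the machine up
for _ in range(1000):   # integrate to t = 10
    p = rk4_step(p, Q, 0.01)
# p approaches the stationary distribution (2/3 up, 1/3 down),
# from which availability and throughput metrics follow.
```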
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
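The distinction the authors draw can be illustrated with a small simulation in which the exit hazard depends on the time already spent in the current state, something a standard Markov model cannot represent without extra states. The two-state process and hazard below are invented for illustration and are unrelated to the heart-failure application:

```python
import random

def simulate_semi_markov(steps, seed=1):
    """Simulate a two-state semi-Markov process whose per-cycle exit
    probability grows with the sojourn time in the current state: a
    dependence on the past that violates the plain Markov assumption.
    (Illustrative hazard, not the article's model.)"""
    rng = random.Random(seed)
    state, sojourn, path = 0, 0, []
    for _ in range(steps):
        path.append(state)
        p_leave = min(0.05 * (sojourn + 1), 0.9)  # hazard rises with sojourn
        if rng.random() < p_leave:
            state, sojourn = 1 - state, 0          # jump and reset the clock
        else:
            sojourn += 1
    return path

path = simulate_semi_markov(1000)
```

In a standard Markov chain `p_leave` would be a constant per state; here the sojourn-time clock is exactly the extra ingredient the semi-Markov formulation adds.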
Directory of Open Access Journals (Sweden)
Jian Jiao
2017-09-01
The Ka-band and higher Q/V-band channels can provide appealing capacity for future deep-space communications and Space Information Networks (SIN), which are viewed as a primary solution to satisfy the increasing demand for high-data-rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distances and long propagation delays in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation in Ka-band channels is modeled as a two-state Gilbert–Elliott channel, capturing a channel capacity that ranges randomly between a good and a bad state. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDPs) is proposed, and the key thresholds for selecting the optimal transmission method in SIN communications are derived. Simulation results show that the proposed scheme can effectively improve the throughput.
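With delayed CSI, the transmitter can only propagate its belief about the channel state forward through the Gilbert–Elliott transition probabilities and then apply a threshold rule. A sketch under assumed transition probabilities and an assumed threshold (neither taken from the paper):

```python
def propagate_belief(belief_good, p_gb, p_bg, delay):
    """Propagate the probability that a Gilbert-Elliott channel is in the
    good state forward over `delay` slots of outdated feedback.
    p_gb = P(good -> bad), p_bg = P(bad -> good) per slot (assumed known)."""
    b = belief_good
    for _ in range(delay):
        b = b * (1.0 - p_gb) + (1.0 - b) * p_bg
    return b

def choose_transmission(belief_good, threshold=0.6):
    # Threshold rule in the spirit of a POMDP policy: use the high-rate
    # mode only when the channel is likely good (threshold is assumed).
    return "high-rate" if belief_good >= threshold else "protected"

# Feedback said "good" five slots ago; what should we transmit now?
b = propagate_belief(1.0, p_gb=0.1, p_bg=0.3, delay=5)
mode = choose_transmission(b)
```

As the delay grows, the belief contracts toward the chain's stationary probability of the good state (here 0.3 / 0.4 = 0.75), so stale feedback carries progressively less information.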
Zilli, Eric A; Hasselmo, Michael E
2008-07-23
Researchers use a variety of behavioral tasks to analyze the effect of biological manipulations on memory function. This research will benefit from a systematic mathematical method for analyzing memory demands in behavioral tasks. In the framework of reinforcement learning theory, these tasks can be mathematically described as partially-observable Markov decision processes. While a wealth of evidence collected over the past 15 years relates the basal ganglia to the reinforcement learning framework, only recently has much attention been paid to including psychological concepts such as working memory or episodic memory in these models. This paper presents an analysis that provides a quantitative description of memory states sufficient for correct choices at specific decision points. Using information from the mathematical structure of the task descriptions, we derive measures that indicate whether working memory (for one or more cues) or episodic memory can provide strategically useful information to an agent. In particular, the analysis determines which observed states must be maintained in or retrieved from memory to perform these specific tasks. We demonstrate the analysis on three simplified tasks as well as eight more complex memory tasks drawn from the animal and human literature (two alternation tasks, two sequence disambiguation tasks, two non-matching tasks, the 2-back task, and the 1-2-AX task). The results of these analyses agree with results from quantitative simulations of the task reported in previous publications and provide simple indications of the memory demands of the tasks which can require far less computation than a full simulation of the task. This may provide a basis for a quantitative behavioral stoichiometry of memory tasks.
A Tentative Organizational Schema for Decision-Making Problems.
Osborn, William C.; Goodman, Barbara Ettinger
This report presents the results of research that examined widely diverse decision problems and attempted to specify their common behavior elements. To take into account the psychological complexity of most real-life decision problems, and to develop a tentative organization of decision behavior that will embrace the many, highly diverse types of…
Structuring and assessing large and complex decision problems using MCDA
DEFF Research Database (Denmark)
Barfod, Michael Bruhn
This paper presents an approach for the structuring and assessing of large and complex decision problems using multi-criteria decision analysis (MCDA). The MCDA problem is structured in a decision tree and assessed using the REMBRANDT technique featuring a procedure for limiting the number of pair...
The two-model problem in rational decision making
Boumans, Marcel
2011-01-01
A model of a decision problem frames that problem in three dimensions: sample space, target probability and information structure. Each specific model imposes a specific rational decision. As a result, different models may impose different, even contradictory, rational decisions, creating choice
Efficient Approximation of Optimal Control for Markov Games
DEFF Research Database (Denmark)
Fearnley, John; Rabe, Markus; Schewe, Sven
2011-01-01
We study the time-bounded reachability problem for continuous-time Markov decision processes (CTMDPs) and games (CTMGs). Existing techniques for this problem use discretisation techniques to break time into discrete intervals, and optimal control is approximated for each interval separately...
Common ground, complex problems and decision making
Beers, P.J.; Boshuizen, H.P.A.; Kirschner, P.A.; Gijselaers, W.H.
2006-01-01
Organisations increasingly have to deal with complex problems. They often use multidisciplinary teams to cope with such problems where different team members have different perspectives on the problem, different individual knowledge and skills, and different approaches on how to solve the problem.
Decision problems in management of construction projects
Szafranko, E.
2017-10-01
In a construction business, one must oftentimes make decisions during all stages of a building process, from planning a new construction project through its execution to the stage of using a ready structure. As a rule, the decision making process is made more complicated due to certain conditions specific for civil engineering. With such diverse decision situations, it is recommended to apply various decision making support methods. Both, literature and hands-on experience suggest several methods based on analytical and computational procedures, some less and some more complex. This article presents the methods which can be helpful in supporting decision making processes in the management of civil engineering projects. These are multi-criteria methods, such as MCE, AHP or indicator methods. Because the methods have different advantages and disadvantages, whereas decision situations have their own specific nature, a brief summary of the methods alongside some recommendations regarding their practical applications has been given at the end of the paper. The main aim of this article is to review the methods of decision support and their analysis for possible use in the construction industry.
Markov Chains and Markov Processes
Ogunbayo, Segun
2016-01-01
A Markov chain, named after Andrey Markov, is a mathematical system that transitions from one state to another. Many real-world systems contain uncertainty. This study helps us to understand the basic idea of a Markov chain and how it is useful in our daily lives. At times there has been suspense about distinct predictions and future outcomes, and in different games there are different expectations or results involved. That is why we need Markov chains to predict o...
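The basic predictive use of a Markov chain described here can be illustrated by computing a stationary distribution via power iteration; the two-state "weather" chain below is a standard textbook example, not one from this thesis:

```python
def stationary_distribution(P, iters=200):
    """Approximate the stationary distribution of a finite Markov chain
    by repeatedly applying the transition matrix (power iteration)."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# Two-state example: P(rain tomorrow | rain today) = 0.6,
#                    P(rain tomorrow | dry today)  = 0.2.
P = [[0.6, 0.4],
     [0.2, 0.8]]
pi = stationary_distribution(P)
# pi approaches [1/3, 2/3]: in the long run it rains one day in three,
# regardless of today's weather.
```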
Decision-Making Styles and Problem-Solving Appraisal.
Phillips, Susan D.; And Others
1984-01-01
Compared decision-making style and problem-solving appraisal in 243 undergraduates. Results suggested that individuals who employ rational decision-making strategies approach problematic situations with confidence in their problem-solving abilities, while individuals who endorse dependent decisional strategies approach problematic situations without such confidence.…
The Students Decision Making in Solving Discount Problem
Abdillah; Nusantara, Toto; Subanji; Susanto, Hery; Abadyo
2016-01-01
This research reviews students' processes of making decisions intuitively, analytically, and interactively. The research was done using a discount problem specially created to explore students' intuitive, analytical, and interactive decision making. In solving discount problems, the researchers explored students' decisions in determining their attitude which…
Modal and Mixed Specifications: Key Decision Problems and their Complexities
DEFF Research Database (Denmark)
Antonik, Adam; Huth, Michael; Larsen, Kim Guldstrand
2010-01-01
Modal and mixed transition systems are specification formalisms that allow mixing of over- and under-approximation. We discuss three fundamental decision problems for such specifications: whether a set of specifications has a common implementation, whether a sole specification has an implementation, and whether all implementations of one specification are implementations of another one. For each of these decision problems we investigate the worst-case computational complexity for the modal and mixed case. We show that the first decision problem is EXPTIME-complete for modal as well as for mixed specifications. We prove that the second decision problem is EXPTIME-complete for mixed specifications (while it is known to be trivial for modal ones). The third decision problem is furthermore demonstrated to be EXPTIME-complete for mixed specifications.
Code Calibration as a Decision Problem
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro
1993-01-01
Calibration of partial coefficients for a class of structures where no code exists is considered. The partial coefficients are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is discussed. Results from code calibration for rubble mound breakwater designs are shown.
Integrating routing decisions in public transportation problems
Schmidt, Marie E
2014-01-01
This book treats three planning problems arising in public railway transportation planning: line planning, timetabling, and delay management, with the objective to minimize passengers’ travel time. While many optimization approaches simplify these problems by assuming that passengers’ route choice is independent of the solution, this book focuses on models which take into account that passengers will adapt their travel route to the implemented planning solution. That is, a planning solution and passengers’ routes are determined and evaluated simultaneously. This work is technically deep, with insightful findings regarding complexity and algorithmic approaches to public transportation problems with integrated passenger routing. It is intended for researchers in the fields of mathematics, computer science, or operations research, working in the field of public transportation from an optimization standpoint. It is also ideal for students who want to gain intuition and experience in doing complexity proofs ...
Effective decision making 10 steps to better decision making and problem solving
Kourdi, Jeremy
2011-01-01
Decisions and problems can often leave people with a dilemma: knowing that a decision is required, but uncertain how to ensure that it is the best one and that it will be successfully executed. The paradox is that the very pressure for a decision often breeds indecisiveness.
Haijema, R.
2008-01-01
Published data revealed that Tagetes spp. suppress polyphagous endoparasitic root nematodes, that the effect varies, perhaps between Tagetes spp. and cultivars, certainly between nematode genera and perhaps between species and strains. The effect is sometimes striking but the picture in general is far from complete and not clear. This situation determined the three objectives of our investigation: occurrence and significance of Tagetes effect, interpretation, and possibilities of application ...
Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy
Abler, Daniel; Kanellopoulos, Vassiliki; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken
2013-01-01
Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of ‘general Markov models’, providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. PMID:23824126
International Nuclear Information System (INIS)
Gorbachev, D V; Ivanov, V I
2015-01-01
Gauss and Markov quadrature formulae with nodes at zeros of eigenfunctions of a Sturm-Liouville problem, which are exact for entire functions of exponential type, are established. They generalize quadrature formulae involving zeros of Bessel functions, which were first designed by Frappier and Olivier. Bessel quadratures correspond to the Fourier-Hankel integral transform. Some other examples, connected with the Jacobi integral transform, Fourier series in Jacobi orthogonal polynomials and the general Sturm-Liouville problem with regular weight are also given. Bibliography: 39 titles
Clarification process: Resolution of decision-problem conditions
Dieterly, D. L.
1980-01-01
A model of a general process which occurs in both decision-making and problem-solving tasks is presented. It is called the clarification model and is highly dependent on information flow. The model addresses the possible constraints of individual differences and experience in achieving success in resolving decision-problem conditions. As indicated, the application of the clarification process model is only necessary for certain classes of the basic decision-problem condition. With less complex decision-problem conditions, certain phases of the model may be omitted. The model may be applied across a wide range of decision-problem conditions. The model consists of two major components: (1) the five-phase prescriptive sequence (based on previous approaches to both concepts) and (2) the information manipulation function (which draws upon current ideas in the areas of information processing, computer programming, memory, and thinking). The two components are linked together to provide a structure that assists in understanding the process of resolving problems and making decisions.
Indian Academy of Sciences (India)
be obtained as a limiting value of a sample path of a suitable ... makes a mathematical model of chance and deals with the problem by ... Is the Markov chain aperiodic? It is! Here is how you can see it. Suppose that after you do the cut, you hold the top half in your right hand, and the bottom half in your left. Then there
[Patient expectations about decision-making for various health problems].
Delgado, Ana; López-Fernández, Luis Andrés; de Dios Luna, Juan; Saletti Cuesta, Lorena; Gil Garrido, Natalia; Puga González, Almudena
2010-01-01
To identify patient expectations of clinical decision-making at consultations with their general practitioners for distinct health problems, and to determine the patient and general practitioner characteristics related to these expectations, with special focus on gender. We performed a multicenter cross-sectional study in 360 patients who were interviewed at home. Data on patients' sociodemographic and clinical characteristics and satisfaction were gathered. General practitioners supplied information on their gender and postgraduate training in family medicine. A questionnaire was used to collect data on patients' expectations that their general practitioner take account of their opinion, and on expectations of clinical decision-making at consultations with their general practitioner for five problems or hypothetical clinical scenarios (strong chest pain / cold with fever / abnormal discharge / depression or sadness / severe family problem). Patients were asked to indicate their preference that decisions on diagnosis and treatment be taken by: a) the general practitioner alone; b) the general practitioner, taking account of the patient's opinion; c) the patient, taking account of the general practitioner's opinion; and d) the patient alone. A logistic regression was performed for clinical decision-making. The response rate was 90%. The mean age was 47.3 ± 16.5 years and 51% were female. Patients' expectations that their general practitioner listen, explain and take account of their opinions were higher than their expectations of participating in decision-making, which depended on the problem in question: 32% wished to participate in decisions about chest pain and 49% in decisions about family problems. Women had lower expectations of participating in decisions about depression and family problems. Patients with female general practitioners had higher expectations of participating in decisions about family problems and colds. Most patients wished to be listened to, informed and taken into account by their general practitioners and, to a lesser
Directory of Open Access Journals (Sweden)
Olariu E
2017-09-01
Elena Olariu,1 Kevin K Cadwell,1 Elizabeth Hancock,1 David Trueman,1 Helene Chevrou-Severac2 1PHMR Ltd, London, UK; 2Takeda Pharmaceuticals International AG, Zurich, Switzerland Objective: Although Markov cohort models represent one of the most common forms of decision-analytic model used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. Methods: A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Results: Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) taskforce guidance, which provided limited guidance on the use of rates and probabilities. Conclusions: There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost
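The rate/probability distinction mentioned in the conclusions corresponds to the standard conversion below, which assumes a constant (exponential) event rate; in particular, a probability reported for one cycle length must be rescaled through the underlying rate rather than by naive division:

```python
import math

def rate_to_probability(rate, cycle_length):
    """Convert a constant event rate into a per-cycle transition
    probability, assuming exponentially distributed event times:
    p = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate * cycle_length)

def probability_to_rate(prob, cycle_length):
    """Inverse transform, used to rescale a probability reported for one
    cycle length to another."""
    return -math.log(1.0 - prob) / cycle_length

# A 5-year risk of 0.30 rescaled to an annual transition probability
# (note: not 0.30 / 5, which would overstate the annual risk).
r = probability_to_rate(0.30, 5.0)
p_annual = rate_to_probability(r, 1.0)
```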
El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar
2014-11-01
Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
Transformative decision rules, permutability, and non-sequential framing of decision problems
Peterson, M.B.
2004-01-01
The concept of transformative decision rules provides a useful tool for analyzing what is often referred to as the `framing', or `problem specification', or `editing' phase of decision making. In the present study we analyze a fundamental aspect of transformative decision rules, viz. permutability. A
The database search problem: a question of rational decision making.
Gittelson, S; Biedermann, A; Bozza, S; Taroni, F
2012-10-10
This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Assessing ethical problem solving by reasoning rather than decision making.
Tsai, Tsuen-Chiuan; Harasym, Peter H; Coderre, Sylvain; McLaughlin, Kevin; Donnon, Tyrone
2009-12-01
The assessment of ethical problem solving in medicine has been controversial and challenging. The purposes of this study were: (i) to create a new instrument to measure doctors' decisions on and reasoning approach towards resolving ethical problems; (ii) to evaluate the scores generated by the new instrument for their reliability and validity, and (iii) to compare doctors' ethical reasoning abilities between countries and among medical students, residents and experts. This study used 15 clinical vignettes and the think-aloud method to identify the processes and components involved in ethical problem solving. Subjects included volunteer ethics experts, postgraduate Year 2 residents and pre-clerkship medical students. The interview data were coded using the instruments of the decision score and Ethical Reasoning Inventory (ERI). The ERI assessed the quality of ethical reasoning for a particular case (Part I) and for an individual globally across all the vignettes (Part II). There were 17 Canadian and 32 Taiwanese subjects. Based on the Canadian standard, the decision scores between Taiwanese and Canadian subjects differed significantly, but made no discrimination among the three levels of expertise. Scores on the ERI Parts I and II, which reflect doctors' reasoning quality, differed between countries and among different levels of expertise in Taiwan, providing evidence of construct validity. In addition, experts had a greater organised knowledge structure and considered more relevant variables in the process of arriving at ethical decisions than did residents or students. The reliability of ERI scores was 0.70-0.99 on Part I and 0.75-0.80 on Part II. Expertise in solving ethical problems could not be differentiated by the decisions made, but could be differentiated according to the reasoning used to make those decisions. The difference between Taiwanese and Canadian experts suggests that cultural considerations come into play in the decisions that are made in the
Information and Intertemporal Choices in Multi-Agent Decision Problems
Mariagrazia Olivieri; Massimo Squillante; Viviana Ventre
2016-01-01
Psychological evidence of impulsivity and the false consensus effect leads to results far from rationality. It is shown that impulsivity modifies the discount function of each individual, and that the false consensus effect increases the degree of consensus in a multi-agent decision problem. Analyzing them together, we note that in strategic interactions these two human factors induce choices that change the equilibria expected by rational individuals.
PROBLEM DESCRIPTIONS FOR THE DECISION SUPPORT SOFTWARE DEMONSTRATION
Energy Technology Data Exchange (ETDEWEB)
SULLIVAN,T.; ARMSTRONG,A.; OSLEEB,J.
1998-09-14
This demonstration is focused on evaluating the utility of decision support software in addressing environmental problems. Three endpoints have been selected for evaluation: (1) Visualization, (2) Sample Optimization, and (3) Cost/Benefit Analysis. The definitions for these three areas in this program are listed.
The Markov chain method for solving dead time problems in the space dependent model of reactor noise
International Nuclear Information System (INIS)
Degweker, S.B.
1997-01-01
The discrete time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of space energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point model equations with suitably defined parameters. (author)
Hosaka, Hiromi; Aoyagi, Kakuro; Kaga, Yoshimi; Kanemura, Hideaki; Sugita, Kanji; Aihara, Masao
2017-08-01
Autonomic nervous system activity is recognized as a major component of emotional responses. Future reward/punishment expectations depend upon the process of decision making in the frontal lobe, which is considered to play an important role in executive function. The aim of this study was to investigate the relationship between autonomic responses and decision making during reinforcement tasks using sympathetic skin responses (SSRs). Nine adult and nine juvenile (mean age, 10.2 years) volunteers were enrolled in this study. SSRs were measured during the Markov decision task (MDT), which is a reinforcement task. In this task, subjects must endure a small immediate loss to ultimately get a large reward. The subjects had to undergo three sets of tests, and their scores in these tests were assessed and evaluated. All adults showed gradually increasing scores for the MDT from the first to the third set. As the trial progressed from the first to the second set in adults, SSR appearance ratios remarkably increased for both punishment and reward expectations. In comparison with adults, children showed decreasing scores from the first to the second set. There were no significant inter-target differences in the SSR appearance ratio in the first and second sets in children. In the third set, the SSR appearance ratio for reward expectations was higher than that in the neutral condition. In reinforcement tasks such as the MDT, autonomic responses play an important role in decision making. We assume that SSRs are elicited during efficient decision-making tasks associated with future reward/punishment expectations, which demonstrates the importance of autonomic function. In contrast, in children around the age of 10 years, the autonomic system does not react as an organized response specific to reward/punishment expectations. This suggests the immaturity of the future reward/punishment expectation process in children. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B
Transfer of learning in binary decision making problems.
Robotti, O. P.
2007-01-01
Transfer, the use of acquired knowledge, skills and abilities across tasks and contexts, is a key and elusive goal of learning. Most evidence available in literature is based on a limited number of tasks, predominantly open-ended problems, game-like problems and taught school subjects (e.g. maths, physics, algebra). It is not obvious that findings from this work can be extended to the domain of decision making problems. This thesis, which aims to broaden the understanding of enhancing and lim...
Markov chains theory and applications
Sericola, Bruno
2013-01-01
Markov chains are a fundamental class of stochastic processes. They are widely used to solve problems in a large number of domains such as operational research, computer science, communication networks and manufacturing systems. The success of Markov chains is mainly due to their simplicity of use, the large number of available theoretical results and the quality of algorithms developed for the numerical evaluation of many metrics of interest. The author presents the theory of both discrete-time and continuous-time homogeneous Markov chains. He carefully examines the explosion phenomenon, the
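As a concrete instance of the numerical evaluation the book covers, the stationary distribution of a discrete-time chain can be approximated by power iteration: repeatedly propagate a distribution through the transition matrix until it stops changing. The two-state chain below is invented for illustration.

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of a discrete-time
    Markov chain by power iteration (pi <- pi P)."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Rows are the current state, columns the next state; rows sum to one.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)   # converges to [5/6, 1/6]
```

The convergence rate is governed by the second-largest eigenvalue of P (here 0.4), so 200 iterations are far more than enough.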
Heuristic Method for Decision-Making in Common Scheduling Problems
Directory of Open Access Journals (Sweden)
Edyta Kucharska
2017-10-01
Full Text Available The aim of the paper is to present a heuristic method for decision-making regarding an NP-hard scheduling problem with limitations related to tasks and to resources dependent on the current state of the process. The presented approach is based on the algebraic-logical meta-model (ALMM), which enables making collective decisions in successive process stages, not separately for individual objects or executors. Moreover, taking into account the limitations of the problem, it constructs only an acceptable solution and significantly reduces the amount of calculation. A general algorithm based on the presented method is composed of the following elements: preliminary analysis of the problem, techniques for the choice of decision at a given state, pruning of non-perspective trajectories, selection of the initial state for the final part of the trajectory, and modification of the trajectory generation parameters. The paper includes applications of the presented approach to scheduling problems on unrelated parallel machines with a deadline and machine setup times dependent on the process state, where the relationship between tasks is defined by a graph. The article also presents the results of computational experiments.
Distinguishing Hidden Markov Chains
Kiefer, Stefan; Sistla, A. Prasad
2015-01-01
Hidden Markov Chains (HMCs) are commonly used mathematical models of probabilistic systems. They are employed in various fields such as speech recognition, signal processing, and biological sequence analysis. We consider the problem of distinguishing two given HMCs based on an observation sequence that one of the HMCs generates. More precisely, given two HMCs and an observation sequence, a distinguishing algorithm is expected to identify the HMC that generates the observation sequence. Two HM...
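The distinguishing task described above can be sketched with the standard scaled forward algorithm: compute the likelihood of the observation sequence under each HMC and pick the larger. The two toy models below are invented for illustration and are not the paper's construction.

```python
import math

def forward_loglik(init, trans, emit, obs):
    """Log-likelihood of an observation sequence under an HMC,
    computed with the scaled forward algorithm."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    ll = 0.0
    for o in obs[1:]:
        scale = sum(alpha)          # rescale to avoid underflow
        ll += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[s] * trans[s][s2] for s in range(n)) * emit[s2][o]
                 for s2 in range(n)]
    return ll + math.log(sum(alpha))

# Two hypothetical HMCs sharing sticky dynamics but differing in emissions.
trans = [[0.9, 0.1], [0.1, 0.9]]
A = ([0.5, 0.5], trans, [[0.8, 0.2], [0.2, 0.8]])   # state-dependent symbols
B = ([0.5, 0.5], trans, [[0.5, 0.5], [0.5, 0.5]])   # uninformative symbols
obs = [0, 0, 0, 0, 1, 1, 1, 1]

# Identify the model that is more likely to have generated the sequence.
source = "A" if forward_loglik(*A, obs) > forward_loglik(*B, obs) else "B"
```

The long runs of identical symbols fit the sticky, state-dependent model A better than the uninformative model B.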
Compromise decision support problems for hierarchical design involving uncertainty
Vadde, S.; Allen, J. K.; Mistree, F.
1994-08-01
In this paper an extension to the traditional compromise Decision Support Problem (DSP) formulation is presented. Bayesian statistics is used in the formulation to model uncertainties associated with the information being used. In an earlier paper a compromise DSP that accounts for uncertainty using fuzzy set theory was introduced. The Bayesian Decision Support Problem is described in this paper. The method for hierarchical design is demonstrated by using this formulation to design a portal frame. The results are discussed and comparisons are made with those obtained using the fuzzy DSP. Finally, the efficacy of incorporating Bayesian statistics into the traditional compromise DSP formulation is discussed and some pending research issues are described. Our emphasis in this paper is on the method rather than the results per se.
COUGH IN CHILDREN: NEW DECISION OF OLD PROBLEM
Directory of Open Access Journals (Sweden)
T.V. Spichak
2008-01-01
Full Text Available The mechanism of the development of cough, the classification of its types, and its main causes are described in this article. Special attention is given to the problem of diagnosing chronic cough, to the features of modern instrumental diagnostic methods, and to the principles of therapeutic tactics. The results of treatment with the anti-inflammatory medication fenspiride (Eurespal) are presented. Russian and foreign literature data, as well as information from the American guideline on cough treatment, were used in this article. Key words: cough, children, fenspiride.
Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.
Ifrim, Sandra
2015-12-01
The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and to fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. Model output consists of a historical path of mental states and the cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals by casino clientele facing gambling problems, as well as to misjudgments made by staff regarding the clients' mental states. Only if the latter source of error is very pronounced, i.e., judgment is extremely faulty, might cumulated social costs be distorted.
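The decoding step this model relies on is the standard Viterbi algorithm, which recovers the most likely hidden-state path from observed signals. Below is a minimal log-space sketch with an invented two-state client model; the states, probabilities, and observation coding are illustrative only, not Delfabbro et al.'s indicators.

```python
import math

def viterbi(init, trans, emit, obs):
    """Most likely hidden-state path, computed in log space."""
    n = len(init)
    V = [math.log(init[s]) + math.log(emit[s][obs[0]]) for s in range(n)]
    back = []
    for o in obs[1:]:
        ptr, nv = [], []
        for s2 in range(n):
            best = max(range(n), key=lambda s: V[s] + math.log(trans[s][s2]))
            ptr.append(best)
            nv.append(V[best] + math.log(trans[best][s2]) + math.log(emit[s2][o]))
        back.append(ptr)
        V = nv
    path = [max(range(n), key=lambda s: V[s])]
    for ptr in reversed(back):       # follow back-pointers to the start
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# Hypothetical two-state client model: 0 = "recreational", 1 = "problem";
# observations: 0 = no warning sign, 1 = warning sign observed.
init  = [0.8, 0.2]
trans = [[0.9, 0.1], [0.2, 0.8]]
emit  = [[0.9, 0.1], [0.3, 0.7]]
states = viterbi(init, trans, emit, [0, 0, 1, 1, 1])
```

With these numbers, the run of warning signs flips the decoded path into the "problem" state partway through the sequence.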
Team decision problems with classical and quantum signals.
Brandenburger, Adam; La Mura, Pierfrancesco
2016-01-13
We study team decision problems where communication is not possible, but coordination among team members can be realized via signals in a shared environment. We consider a variety of decision problems that differ in what team members know about one another's actions and knowledge. For each type of decision problem, we investigate how different assumptions on the available signals affect team performance. Specifically, we consider the cases of perfectly correlated, i.i.d., and exchangeable classical signals, as well as the case of quantum signals. We find that, whereas in perfect-recall trees (Kuhn 1950 Proc. Natl Acad. Sci. USA 36, 570-576; Kuhn 1953 In Contributions to the theory of games, vol. II (eds H Kuhn, A Tucker), pp. 193-216) no type of signal improves performance, in imperfect-recall trees quantum signals may bring an improvement. Isbell (Isbell 1957 In Contributions to the theory of games, vol. III (eds M Drescher, A Tucker, P Wolfe), pp. 79-96) proved that, in non-Kuhn trees, classical i.i.d. signals may improve performance. We show that further improvement may be possible by use of classical exchangeable or quantum signals. We include an example of the effect of quantum signals in the context of high-frequency trading. © 2015 The Authors.
Continuous-Variable Quantum Computation of Oracle Decision Problems
Adcock, Mark R. A.
Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying
Probabilistic Reachability for Parametric Markov Models
DEFF Research Database (Denmark)
Hahn, Ernst Moritz; Hermanns, Holger; Zhang, Lijun
2011-01-01
Given a parametric Markov model, we consider the problem of computing the rational function expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested to first convert the Markov chain into a finite automaton, from which a regular expression...
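As background for the reachability problem above: for a fixed parameter valuation, the probability of reaching a target set is the least solution of a linear fixed-point system, which a simple value iteration recovers. The sketch below is that elementary method on an invented three-state chain, not the regular-expression construction the abstract refers to; in the parametric setting, the entries of P would be rational functions of the parameters instead of numbers.

```python
def reach_prob(P, target, iters=500):
    """Probability of eventually reaching the target set from each
    state. Iterating x <- P x with x pinned to 1 on the target,
    starting from 0, converges to the least fixed point, i.e. the
    true reachability probabilities."""
    n = len(P)
    x = [0.0] * n
    for _ in range(iters):
        x = [1.0 if s in target else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# From state 0: loop with prob 0.3, reach target state 1 with prob 0.5,
# or get absorbed in the non-target state 2 with prob 0.2.
P = [[0.3, 0.5, 0.2],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
x = reach_prob(P, target={1})
# x[0] converges to 0.5 / (1 - 0.3) = 5/7
```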
Evolving neural networks for strategic decision-making problems.
Kohl, Nate; Miikkulainen, Risto
2009-04-01
Evolution of neural networks, or neuroevolution, has been a successful approach to many low-level control problems such as pole balancing, vehicle control, and collision warning. However, certain types of problems, such as those involving strategic decision-making, have remained difficult for neuroevolution to solve. This paper evaluates the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. A method for measuring fracture using the concept of function variation is proposed and, based on this concept, two methods for dealing with fracture are examined: neurons with local receptive fields, and refinement based on a cascaded network architecture. Experiments in several benchmark domains are performed to evaluate how different levels of fracture affect the performance of neuroevolution methods, demonstrating that these two modifications improve performance significantly. These results form a promising starting point for expanding neuroevolution to strategic tasks.
Decision support system for the operating room rescheduling problem.
van Essen, J Theresia; Hurink, Johann L; Hartholt, Woutske; van den Akker, Bernd J
2012-12-01
Due to surgery duration variability and arrivals of emergency surgeries, the planned Operating Room (OR) schedule is disrupted throughout the day, which may lead to a change in the start time of the elective surgeries. These changes may result in undesirable situations for patients, wards or other involved departments, and therefore the OR schedule has to be adjusted. In this paper, we develop a decision support system (DSS) which assists the OR manager in this decision by providing the three best adjusted OR schedules. The system considers the preferences of all involved stakeholders and only evaluates the OR schedules that satisfy the imposed resource constraints. The decision rules used for this system are based on a thorough analysis of the OR rescheduling problem. We model this problem as an Integer Linear Program (ILP) whose objective is to minimize the deviation from the preferences of the considered stakeholders. By applying this ILP to instances from practice, we determined that the given preferences mainly lead to (i) shifting a surgery and (ii) scheduling a break between two surgeries. By using these changes in the DSS, the performed simulation study shows that fewer surgeries are canceled and patients and wards are more satisfied, but also that the perceived workload of several departments increases to compensate for this. The system can also be used to judge the acceptability of a proposed initial OR schedule.
Decision criteria under uncertainty and the climate problem
Energy Technology Data Exchange (ETDEWEB)
Bretteville, Camilla
1999-11-01
This working paper examines some of the decision criteria suggested by theories of decision making under uncertainty, by applying the criteria to the problem of global warming. It is shown that even if there were a benevolent planner who is both supranational and supra-generational, and even if he had a well-defined intergenerational welfare function, problems would still remain. The question asked is: if there were a benevolent planner, would he know the best climate policy for the world today? The main discussion abstracts from all other complications and focuses on the lack of certainty regarding the impacts of greenhouse gas emissions and the effectiveness of policy. A very simplified example of a game against nature is constructed. It has two possible policy choices: one can either try to prevent global warming, or one can choose to do nothing. The future state of the world is uncertain and the chosen policy might affect the outcome in each state. The framing of the example is such that one should expect a policy of action to be preferred to a no-action policy; however, this is not always the case. It is shown that the preferred policy choice depends heavily on the choice of decision criterion, the magnitude of costs, and the framing. 3 tabs., 23 refs
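The criterion-dependence the paper points out can be made concrete with a toy game against nature: two policies, two states of nature, and three classical criteria. All payoffs and probabilities below are invented for illustration; they are not the paper's numbers.

```python
# Payoffs (negative costs) of two policies under two states of nature.
payoff = {
    "act":       {"severe": -2.0, "mild": -2.0},   # abatement cost either way
    "no_action": {"severe": -10.0, "mild":  0.0},  # damages only if severe
}
states = ["severe", "mild"]

def maximin(payoff):
    """Wald criterion: pick the policy with the best worst-case payoff."""
    return max(payoff, key=lambda a: min(payoff[a][s] for s in states))

def minimax_regret(payoff):
    """Savage criterion: pick the policy minimizing the largest regret."""
    best = {s: max(payoff[a][s] for a in payoff) for s in states}
    return min(payoff, key=lambda a: max(best[s] - payoff[a][s] for s in states))

def expected_value(payoff, probs):
    """Bayes criterion: pick the policy with the best expected payoff."""
    return max(payoff, key=lambda a: sum(probs[s] * payoff[a][s] for s in states))
```

With these numbers, the cautious criteria (maximin, minimax regret) favor acting, while expected value under a low probability of severe warming (e.g. 0.1) favors doing nothing; the chosen criterion, not just the payoffs, drives the decision.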
An Elite Decision Making Harmony Search Algorithm for Optimization Problem
Directory of Open Access Journals (Sweden)
Lipu Zhang
2012-01-01
Full Text Available This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following some probability rule. The generated new solution vector replaces the worst solution in the solution set, but only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including minimization problems with continuous design variables and with integer variables from the literature. The computational results show that the proposed new algorithm is competitive in finding solutions compared with state-of-the-art harmony search variants.
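The generate-and-replace loop described above can be sketched as follows. This is a loose interpretation of the "elite" idea (draw new components around the best or second-best member, replace the worst on improvement), not the paper's exact update rule; all parameter names and values are illustrative.

```python
import random
random.seed(0)

def elite_harmony_search(f, lb, ub, dim, hms=10, iters=2000, p_elite=0.6):
    """Sketch: with probability p_elite a component is perturbed from an
    elite solution (best or second best); otherwise it is sampled
    uniformly. The worst member is replaced when the newcomer beats it."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(hms)]
    pop.sort(key=f)
    for _ in range(iters):
        elite = pop[0] if random.random() < 0.5 else pop[1]
        new = [x + random.gauss(0.0, 0.1 * (ub - lb)) if random.random() < p_elite
               else random.uniform(lb, ub) for x in elite]
        new = [min(max(v, lb), ub) for v in new]    # keep within bounds
        if f(new) < f(pop[-1]):
            pop[-1] = new
            pop.sort(key=f)
    return pop[0]

# Minimize the sphere function on [-5, 5]^2
best = elite_harmony_search(lambda x: sum(v * v for v in x), lb=-5.0, ub=5.0, dim=2)
```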
Applications of decision analysis and related techniques to industrial engineering problems at KSC
Evans, Gerald W.
1995-01-01
This report provides: (1) a discussion of the origination of decision analysis problems (well-structured problems) from ill-structured problems; (2) a review of the various methodologies and software packages for decision analysis and related problem areas; (3) a discussion of how the characteristics of a decision analysis problem affect the choice of modeling methodologies, thus providing a guide as to when to choose a particular methodology; and (4) examples of applications of decision analysis to particular problems encountered by the IE Group at KSC. With respect to the specific applications at KSC, particular emphasis is placed on the use of the Demos software package (Lumina Decision Systems, 1993).
Markov Networks in Evolutionary Computation
Shakya, Siddhartha
2012-01-01
Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods to optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov-network-based EDAs are reviewed in the book. Hot current researc...
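To situate Markov-network EDAs, the simplest member of the EDA family is a univariate model (UMDA), which fits independent bitwise marginals to the selected individuals and resamples from them; Markov-network EDAs generalize this by also modeling dependencies between variables. A minimal UMDA sketch on the OneMax benchmark (all parameter values illustrative):

```python
import random
random.seed(1)

def umda_onemax(n=20, pop_size=50, n_sel=10, gens=40):
    """Univariate EDA on OneMax: estimate per-bit marginals from the
    selected individuals, then sample a fresh population from them."""
    p = [0.5] * n                      # initial bitwise probabilities
    pop = []
    for _ in range(gens):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        sel = sorted(pop, key=sum, reverse=True)[:n_sel]   # truncation selection
        # Refit the marginals, lightly bounded to preserve diversity.
        p = [min(0.95, max(0.05, sum(ind[i] for ind in sel) / n_sel))
             for i in range(n)]
    return max(pop, key=sum)

best = umda_onemax()
```

On OneMax the marginals converge toward 1 for every bit, so the model quickly concentrates on near-optimal strings.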
Utility Function for modeling Group Multicriteria Decision Making problems as games
Alexandre Bevilacqua Leoneti
2016-01-01
To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...
Markov Models for Handwriting Recognition
Plotz, Thomas
2011-01-01
Since their first inception, automatic reading systems have evolved substantially, yet the recognition of handwriting remains an open research problem due to its substantial variation in appearance. With the introduction of Markovian models to the field, a promising modeling and recognition paradigm was established for automatic handwriting recognition. However, no standard procedures for building Markov model-based recognizers have yet been established. This text provides a comprehensive overview of the application of Markov models in the field of handwriting recognition, covering both hidden
Tourists' mental representations of complex travel decision problems
Dellaert, B.G.C.; Arentze, T.A.; Horeni, O.
2014-01-01
Tourism research has long recognized the complexity of many decisions that tourists make and proposed models to describe and analyze tourist decision processes. This article complements this previous research by proposing a view that moves away from the process of making a decision and instead
International Nuclear Information System (INIS)
Lucka, Felix
2012-01-01
Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, the use of similar sparsity constraints in the Bayesian framework for inverse problems, encoded in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis–Hastings (MH) sampling schemes: we demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference. (paper)
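The target density in question can be written as a Gaussian likelihood combined with an L1 (Laplace) prior. The sketch below samples such a posterior with a generic Metropolis-within-Gibbs scheme, not the paper's exact single-component Gibbs sampler, and assumes an identity forward operator for simplicity; all numbers are illustrative.

```python
import math, random
random.seed(0)

def neg_log_post(x, y, lam, sigma=1.0):
    """Unnormalized negative log-posterior: Gaussian likelihood with an
    identity forward operator (a simplification) plus an L1 prior."""
    return sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / (2 * sigma ** 2) \
        + lam * sum(abs(xi) for xi in x)

def mwg_sampler(y, lam, n_sweeps=4000, step=0.5):
    """Metropolis-within-Gibbs: propose a Gaussian move for one
    component at a time and accept with the Metropolis rule."""
    x = [0.0] * len(y)
    e = neg_log_post(x, y, lam)
    samples = []
    for _ in range(n_sweeps):
        for i in range(len(x)):
            old = x[i]
            x[i] = old + random.gauss(0.0, step)
            e_new = neg_log_post(x, y, lam)
            if random.random() < math.exp(min(0.0, e - e_new)):
                e = e_new            # accept the move
            else:
                x[i] = old           # reject, restore the component
        samples.append(list(x))
    return samples

# One strongly and one weakly observed component; with lam = 1 the
# posterior for the first concentrates near 3 - lam = 2 (soft threshold).
y = [3.0, 0.1]
samples = mwg_sampler(y, lam=1.0)
mean0 = sum(s[0] for s in samples) / len(samples)
```

As the abstract notes, such MH-style componentwise schemes degrade badly as sparsity and dimension grow, which is precisely the motivation for the exact single-component Gibbs sampler the paper develops.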
Wollmer, Richard D.; Bond, Nicholas A.
Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchical learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…
Triage: Making a Political Decision to Solve an Environmental Science Problem Through Research
Ridolfi, Thomas
1974-01-01
A description is given of a class project concerned with examining a population problem and making some political decisions to solve it. A list of topics for the students to research as a basis for their decisions is provided. (DT)
Markov stochasticity coordinates
International Nuclear Information System (INIS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
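The first-passage times mentioned at the end can be estimated by plain simulation; the sketch below illustrates the quantity being gauged, not the paper's coordinate method, and the chain and parameters are invented.

```python
import random
random.seed(2)

def first_passage_times(P, start, target, n_runs=20000, cap=10000):
    """Simulate first-passage times of a discrete-time Markov chain
    from `start` to `target` (capped at `cap` steps per run)."""
    times = []
    for _ in range(n_runs):
        s, t = start, 0
        while s != target and t < cap:
            r, acc = random.random(), 0.0
            for nxt, p in enumerate(P[s]):   # sample the next state
                acc += p
                if r < acc:
                    s = nxt
                    break
            t += 1
        times.append(t)
    return times

P = [[0.5, 0.5],
     [0.5, 0.5]]
times = first_passage_times(P, start=0, target=1)
mean_fpt = sum(times) / len(times)   # geometric with p = 0.5, so mean 2
```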
Markov stochasticity coordinates
Energy Technology Data Exchange (ETDEWEB)
Eliazar, Iddo, E-mail: iddo.eliazar@intel.com
2017-01-15
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Quantum Markov Chain Mixing and Dissipative Engineering
DEFF Research Database (Denmark)
Kastoryano, Michael James
2012-01-01
This thesis is the fruit of investigations on the extension of ideas of Markov chain mixing to the quantum setting, and its application to problems of dissipative engineering. A Markov chain describes a statistical process where the probability of future events depends only on the state of the system at the present point in time, but not on the history of events. Very many important processes in nature are of this type, therefore a good understanding of their behaviour has turned out to be very fruitful for science. Markov chains always have a non-empty set of limiting distributions (stationary states). The aim of Markov chain mixing is to obtain (upper and/or lower) bounds on the number of steps it takes for the Markov chain to reach a stationary state. The natural quantum extensions of these notions are density matrices and quantum channels. We set out to develop a general mathematical…
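The classical notion of mixing described above can be illustrated numerically: for a small chain, count the steps until the distribution is within a given total-variation distance of the stationary one. The two-state chain and the threshold `eps` below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two probability vectors."""
    return 0.5 * np.abs(p - q).sum()

def mixing_steps(P, start, stationary, eps=0.01, max_steps=10_000):
    """Number of steps until the chain's distribution comes within
    eps (in total variation) of the stationary distribution."""
    mu = np.asarray(start, dtype=float)
    for t in range(1, max_steps + 1):
        mu = mu @ P
        if tv_distance(mu, stationary) < eps:
            return t
    return None

# A lazy two-state chain; its stationary distribution is uniform.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])
```

Bounds of exactly this kind (here found by brute force) are what mixing-time analysis derives analytically, classically and in the quantum extension.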
Scientific decision of the Chernobyl accident problems (results of 1997)
International Nuclear Information System (INIS)
Konoplya, E.F.; Rolevich, I.V.
1998-12-01
The publication summarizes the basic results of the research carried out in 1997 within the framework of the 'Scientific maintenance of the decision of problems of the Chernobyl NPP accident consequences' section of the State program of the Republic of Belarus for minimization and overcoming of the Chernobyl NPP accident consequences for 1996-2000, in the following directions: dose monitoring of the population, estimation and forecast of collective irradiation doses and of risks of radiation-induced diseases; development and justification of measures to increase the radiation protection of the population of Belarus during the recovery period after the Chernobyl accident; study of the influence of the radiological consequences of the Chernobyl accident on human health, and development of methods and means of diagnostics, treatment and prevention of diseases for various categories of the victims; optimization of the system of measures for preserving the health of the affected population and development of ways to increase its effectiveness; creation of effective prophylactic means and food additives for treatment and rehabilitation of persons who suffered from the Chernobyl accident; development of a complex system of estimation and decision-making on problems of radiation protection of the population living in contaminated territories; development and optimization of a complex of measures for effective land use and for decreasing the radioactive contamination of agricultural production in order to reduce irradiation doses of the population; development of complex technologies and means for decontamination, treatment and burial of radioactive wastes; study of the dynamics of radioisotope behaviour in the environment (air, water, soil), ecosystems and populated areas; optimization of the system of radiation ecological monitoring in the republic and of the scientific-methodical ways of its fulfilment; study of the effects of low-dose irradiation and combined influences, search
Decision Analysis on Survey and Soil Investigation Problem in Power Engineering Consultant
Setyaman, Amy Maulany; Sunitiyoso, Yos
2013-01-01
The study aims to gather and organize information for decision making on the problems arising in a Power Engineering Consultant's survey and soil investigation product due to a new policy on production cost efficiency implemented in 2012. The study was conducted using Kepner and Tregoe's analytical process, which consists of four stages: situation analysis, problem analysis, decision-making analysis and potential problem analysis. As for the decision making analy...
On categorical approach to derived preference relations in some decision making problems
Rozen, Victor V.; Zhitomirski, Grigori
2005-01-01
A structure called a decision making problem is considered. The set of outcomes (consequences) is partially ordered according to the decision maker's preferences. The problem is how these preferences affect a decision maker to prefer one of his strategies (or acts) to another, i.e. how to describe the so-called derived preference relations. This problem is formalized using a category theory approach and reduced to a purely algebraic question. An effective method is suggested to build all reaso...
About Problems of Decision Making in Social and Economic Systems
Voloshyn, Oleksiy
2006-01-01
The reasons for the restricted applicability of models of decision making in social and economic systems are examined. Three basic principles for increasing their adequacy are proposed: "localization" of solutions, direct accounting of the influence of the individual on the process of decision making ("subjectivity of objectivity"), and reduction of the influence of the individual psychosomatic characteristics of the subject ("objectivity of subjectivity"). The principles are illustrated on mathe...
Decision making under ambiguity but not under risk is related to problem gambling severity
Brevers, Damien; Cleeremans, Axel; Goudriaan, Anna E.; Bechara, Antoine; Kornreich, Charles; Verbanck, Paul; Noël, Xavier
2012-01-01
The aim of the present study was to examine the relationship between problem gambling severity and decision-making situations that vary in two degrees of uncertainty (probability of outcome is known: decision-making under risk; probability of outcome is unknown: decision-making under ambiguity). For
The problem-oriented system, problem-knowledge coupling, and clinical decision making.
Weed, L L; Zimny, N J
1989-07-01
The information tool to aid us in making the clinical decisions discussed in this presentation is called the PKC. Our goal with patients should be to couple the knowledge of the unique patient to the knowledge in the literature and get the best possible match. This approach requires combinatorial versus probabilistic thinking. In the real world, ideal matches are not found. Therefore, it is critical to exhaust the patient's uniqueness first and only then use probabilities to settle further uncertainties. It is an error to teach people how to deal with uncertainty instead of teaching them to clean up a great deal of the uncertainty first. Patients must be involved in this endeavor. In essence, they have a PhD in their own uniqueness, and it is this uniqueness that is very powerful in solving complex problems. This method of patient evaluation and management cannot be used with the unaided mind. It requires new and powerful information tools like the PKC. All information that is relevant to a problem should be included in the coupler. It should encompass differing points of view, and the rationale should be made explicit to clinician and patient alike. When complete, the coupler should represent an interdisciplinary compilation of questions and tests that are expected to be collected every time in the clinic for the type of problem the coupler represents. This method will provide a basis for quality control because the contents of the coupler now have defined what we expect to occur in every patient encounter.(ABSTRACT TRUNCATED AT 250 WORDS)
Semi-Markov Processes: Applications in System Reliability and Maintenance
Grabski
2014-01-01
Semi-Markov Processes: Applications in System Reliability and Maintenance is a modern view of discrete state space and continuous time semi-Markov processes and their applications in reliability and maintenance. The book explains how to construct semi-Markov models and discusses the different reliability parameters and characteristics that can be obtained from those models. The book is a useful resource for mathematicians, engineering practitioners, and PhD and MSc students who want to understand the basic concepts and results of semi-Markov process theory. Clearly defines the properties and
International Nuclear Information System (INIS)
Frackiewicz, Piotr
2011-01-01
We investigate implementations of the Eisert-Wilkens-Lewenstein (EWL) scheme of playing quantum games beyond strategic games. The scope of our research is decision problems, i.e. one-player extensive games. The research is based on the examination of their features when the decision problems are carried out via the EWL protocol. We prove that unitary operators can be adapted to play the role of strategies in decision problems with imperfect recall. Furthermore, we prove that unitary operators provide the decision maker with possibilities that are inaccessible for classical strategies.
Energy Technology Data Exchange (ETDEWEB)
Frackiewicz, Piotr, E-mail: P.Frackiewicz@impan.gov.pl [Institute of Mathematics of the Polish Academy of Sciences, 00-956 Warsaw (Poland)
2011-08-12
We investigate implementations of the Eisert-Wilkens-Lewenstein (EWL) scheme of playing quantum games beyond strategic games. The scope of our research is decision problems, i.e. one-player extensive games. The research is based on the examination of their features when the decision problems are carried out via the EWL protocol. We prove that unitary operators can be adapted to play the role of strategies in decision problems with imperfect recall. Furthermore, we prove that unitary operators provide the decision maker with possibilities that are inaccessible for classical strategies.
Optimization-based decision support systems for planning problems in processing industries
Claassen, G.D.H.
2014-01-01
Summary
Optimization-based decision support systems for planning problems in processing industries
Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in
A New GMRES(m) Method for Markov Chains
Directory of Open Access Journals (Sweden)
Bing-Yuan Pu
2013-01-01
Full Text Available This paper presents a class of new accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method by showing how to periodically combine the GMRES and vector extrapolation methods into a much more efficient one for improving the convergence rate in Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.
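The quantity the abstract targets, the stationary probability vector, can be obtained for small chains by a direct solve, replacing one equation of the singular system with the normalization constraint. This is only a baseline sketch of the target quantity, not the accelerated GMRES(m) scheme of the paper; the example chain is an illustrative assumption.

```python
import numpy as np

def stationary_vector(P):
    """Stationary probability vector pi of an irreducible Markov chain:
    solve pi @ P = pi with pi summing to one, by replacing one equation
    of the singular system (P.T - I) pi = 0 with the normalization."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0          # replace last equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example: a small irreducible birth-death chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
```

For the large, sparse chains the paper addresses, a dense solve like this is infeasible, which is exactly why Krylov methods such as GMRES(m) are used instead.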
Optimization-based decision support systems for planning problems in processing industries
Claassen, G.D.H.
2014-01-01
Summary Optimization-based decision support systems for planning problems in processing industries Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in practice. The tremendous progress in hard- and software of the past decades was an important gateway for developing computerized systems that are able to support decision making on different levels within enterprises. T...
International Nuclear Information System (INIS)
Atanassov, Krassimir; Szmidt, Eulalia; Kacprzyk, Janusz; Atanassova, Vassia
2017-01-01
A new multiagent multicriteria decision making procedure is proposed that considerably extends the existing methods by making it possible to intelligently reduce the set of criteria to be accounted for. The method employs elements of the novel Intercriteria Analysis method. The use of new tools, notably the intuitionistic fuzzy pairs and intuitionistic fuzzy index matrices provides additional information about the problem, addressed in the decision making procedure. Key words: decision making, multiagent systems, multicriteria decision making, intercriteria analysis, intuitionistic fuzzy estimation
Monte Carlo Tree Search for Continuous and Stochastic Sequential Decision Making Problems
International Nuclear Information System (INIS)
Couetoux, Adrien
2013-01-01
In this thesis, I studied sequential decision making problems, with a focus on the unit commitment problem. Traditionally solved by dynamic programming methods, this problem is still a challenge, due to its high dimension and to the sacrifices in model accuracy required to apply state-of-the-art methods. I investigated the applicability of Monte Carlo Tree Search methods for this problem, and for other single-player, stochastic, continuous sequential decision making problems. In doing so, I obtained a consistent and anytime algorithm that can easily be combined with existing strong heuristic solvers. (author)
Interactive operational decision making : Purchasing situations & mutual liability problems
Groote Schaarsberg, M.
2014-01-01
Three chapters of this dissertation deal with three different types of interactive purchasing situations, in which multiple buying organizations interact with similar (or possibly the same) suppliers for the procurement of the same commodity. Decisions to be made in interactive purchasing concern if
Effects of Problem Frame and Gender on Principals' Decision Making
Miller, Paul M.; Fagley, Nancy S.; Casella, Nancy E.
2009-01-01
Research indicates people's decisions can sometimes be influenced by seemingly trivial differences in the "framing" (i.e., wording) of alternative options. The tendency to prefer risk averse options when framed positively and risky options when framed negatively is known as the framing effect. The current study examined the susceptibility of…
DEFF Research Database (Denmark)
Justesen, Jørn
2005-01-01
A simple construction of two-dimensional (2-D) fields is presented. Rows and columns are outcomes of the same Markov chain. The entropy can be calculated explicitly.
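The explicit entropy calculation mentioned above reduces, for a stationary Markov chain, to the standard entropy-rate formula H = -Σ_i π_i Σ_j P_ij log P_ij. The sketch below computes this for a generic chain; it illustrates the underlying formula only, not the paper's 2-D field construction.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (nats per step) of a stationary Markov chain:
    H = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary
    distribution obtained from a direct linear solve."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0            # replace one equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    # log(P) with the convention 0 * log 0 = 0 for zero entries
    logP = np.log(np.where(P > 0, P, 1.0))
    return float(-(pi[:, None] * P * logP).sum())
```

For the unbiased two-state chain the rate is log 2 per step, the maximum for a binary alphabet.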
Problems of engineering education and their decision involving industry
R. P. Simonyants
2014-01-01
In Russia, the problems of engineering education are connected with the political and economic upheavals of the late last century. At the same time, some leading engineering universities in Russia, such as the Bauman Moscow State Technical University (BMSTU), were resistant to the damaging effects of the crisis. But the methodology and experience of their effective work are insufficiently known. The problems of international engineering school development are also known. The first UNESCO World Repo...
Problem Solving in Physics: Undergraduates' Framing, Procedures, and Decision Making
Modir, Bahar
In this dissertation I will start with the broad research question of what does problem solving in upper division physics look like? My focus in this study is on students' problem solving in physics theory courses. Some mathematical formalisms are common across all physics core courses such as using the process of separation of variables, doing Taylor series, or using the orthogonality properties of mathematical functions to set terms equal to zero. However, there are slight differences in their use of these mathematical formalisms across different courses, possibly because of how students map different physical systems to these processes. Thus, my first main research question aims to answer how students perform these recurring processes across upper division physics courses. I break this broad question into three particular research questions: What knowledge pieces do students use to make connections between physics and procedural math? How do students use their knowledge pieces coherently to provide reasoning strategies in estimation problems? How do students look ahead into the problem to read the information out of the physical scenario to align their use of math in physics? Building on the previous body of the literature, I will use the theory family of Knowledge in Pieces and provide evidence to expand this theoretical foundation. I will compare my study with previous studies and provide suggestions on how to generalize these theory expansions for future use. My experimental data mostly come from video-based classroom data. Students in groups of 2-4 students solve in-class problems in quantum mechanics and electromagnetic fields 1 courses collaboratively. In addition, I will analyze clinical interviews to demonstrate how a single case study student plays an epistemic game to estimate the total energy in a hurricane. My second research question is more focused on a particular instructional context. How do students frame problem solving in quantum mechanics? I
Systemic decision making fundamentals for addressing problems and messes
Hester, Patrick T
2017-01-01
This expanded second edition of the 2014 textbook features dedicated sections on action and observation, so that the reader can combine the use of the developed theoretical basis with practical guidelines for deployment. It also includes a focus on selection and use of a dedicated modeling paradigm – fuzzy cognitive mapping – to facilitate use of the proposed multi-methodology. The end goal of the text is a holistic, interdisciplinary approach to structuring and assessing complex problems, including a dedicated discussion of thinking, acting, and observing complex problems. The multi-methodology developed is scientifically grounded in systems theory and its accompanying principles, while the process emphasizes the nonlinear nature of all complex problem-solving endeavors. The authors’ clear and consistent chapter structure facilitates the book’s use in the classroom.
Ecology - fundamental bases and ways for the solution of problems
International Nuclear Information System (INIS)
Kolev, Bogomil Velikov
2003-01-01
The metallurgical sciences (materials science) are closest to geology. Therefore they are close to the ultimate explanation and diagnostics of the natural processes and phenomena in the Earth and in the Universe. They are called upon, together with the other sciences (industries) and society, to present general ways for the solution of global problems, including ecological ones. In solving these problems, account must always be taken of the fact that the Earth is a cosmic body on whose surface 'terrestrial' and 'space' industries and technologies are developed. (Original)
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
Institute of Scientific and Technical Information of China (English)
Feng Junwen
2006-01-01
To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve the multiple criteria decision making problem more efficiently and interactively, a new method labeled dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the satisfactory compromise solution of the decision maker. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.
Czech Academy of Sciences Publication Activity Database
Pudil, Pavel; Somol, Petr
2008-01-01
Roč. 16, č. 4 (2008), s. 37-55 ISSN 0572-3043 R&D Projects: GA MŠk 1M0572 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords: variable selection * decision making Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2008/RO/pudil-identifying%20the%20most%20informative%20variables%20for%20decision-making%20problems%20a%20survey%20of%20recent%20approaches%20and%20accompanying%20problems.pdf
Strategic and non-strategic problem gamblers differ on decision-making under risk and ambiguity.
Lorains, Felicity K; Dowling, Nicki A; Enticott, Peter G; Bradshaw, John L; Trueblood, Jennifer S; Stout, Julie C
2014-07-01
To analyse problem gamblers' decision-making under conditions of risk and ambiguity, investigate underlying psychological factors associated with their choice behaviour and examine whether decision-making differed in strategic (e.g., sports betting) and non-strategic (e.g., electronic gaming machine) problem gamblers. Cross-sectional study. Out-patient treatment centres and university testing facilities in Victoria, Australia. Thirty-nine problem gamblers and 41 age, gender and estimated IQ-matched controls. Decision-making tasks included the Iowa Gambling Task (IGT) and a loss aversion task. The Prospect Valence Learning (PVL) model was used to provide an explanation of cognitive, motivational and response style factors involved in IGT performance. Overall, problem gamblers performed more poorly than controls on both the IGT (P = 0.04) and the loss aversion task (P = 0.01), and their IGT decisions were associated with heightened attention to gains (P = 0.003) and less consistency (P = 0.002). Strategic problem gamblers did not differ from matched controls on either decision-making task, but non-strategic problem gamblers performed worse on both the IGT (P = 0.006) and the loss aversion task (P = 0.02). Furthermore, we found differences in the PVL model parameters underlying strategic and non-strategic problem gamblers' choices on the IGT. Problem gamblers demonstrated poor decision-making under conditions of risk and ambiguity. Strategic (e.g. sports betting, poker) and non-strategic (e.g. electronic gaming machines) problem gamblers differed in decision-making and the underlying psychological processes associated with their decisions. © 2014 Society for the Study of Addiction.
Analysis of decision alternatives of the deep borehole filter restoration problem
International Nuclear Information System (INIS)
Abdildin, Yerkin G.; Abbas, Ali E.
2016-01-01
The energy problem is one of the biggest challenges facing the world in the 21st century. Nuclear energy is the fastest-growing contributor to world energy, and uranium mining is the primary step in its fuel chain. One of the fundamental problems in the uranium extraction industry is the deep borehole filter restoration problem. This decision problem is very complex due to multiple objectives and various uncertainties. Besides the improvement of uranium production, the decision makers often need to meet internationally recognized standards (ISO 14001) for labor protection, safety measures, and preservation of the environment. The problem can be simplified by constructing a multiattribute utility function, but the choice of the appropriate functional form requires practical evaluation of different methods. In the present work, we evaluate the alternatives of this complex problem by two distinct approaches for analyzing decision problems. The decision maker and assessor is a Deputy Director General of a transnational corporation. - Highlights: • Analyzes 5 borehole recovery methods across the 4 most important attributes (criteria). • Considers financial, technological, environmental, and safety factors. • Compares two decision analysis approaches and the profit analysis. • Illustrates the assessments of the decision maker's preferences. • Determines that the assumption of independence of attributes yields imprecise recommendations.
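The multiattribute utility function mentioned above is, in its simplest additive form, a weighted sum of single-attribute utilities. Below is a minimal sketch with purely hypothetical scores and weights, not the values elicited in the paper; note that the additive form presumes exactly the attribute-independence assumption the paper finds can yield imprecise recommendations.

```python
import numpy as np

# Hypothetical scores of 5 borehole-restoration alternatives on 4
# attributes (profit, technology, environment, safety), scaled to [0, 1].
scores = np.array([
    [0.8, 0.6, 0.4, 0.5],
    [0.6, 0.7, 0.6, 0.6],
    [0.9, 0.5, 0.3, 0.4],
    [0.5, 0.8, 0.8, 0.7],
    [0.7, 0.6, 0.5, 0.9],
])
weights = np.array([0.4, 0.2, 0.2, 0.2])  # illustrative; must sum to 1

def additive_utility(scores, weights):
    """Additive multiattribute utility: U(a) = sum_k w_k * u_k(a),
    valid only under independence of the attributes."""
    return scores @ weights

best = int(np.argmax(additive_utility(scores, weights)))
```

The ranking is sensitive to the weights, which is why careful preference elicitation from the decision maker is central to such analyses.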
The intertemporal choice behavior: the role of emotions in a multiagent decision problem
Directory of Open Access Journals (Sweden)
Viviana Ventre
2014-12-01
Full Text Available The traditional Discounted Utility Model assumes an exponential delay discount function with a constant discount rate; this implies dynamic consistency and stationary intertemporal preferences. Contrary to this normative setting, decision neuroscience stresses a lack of rationality, i.e., inconsistency, in some intertemporal choice behaviors. Both models are dealt with in the framework of some relevant decision problems.
Use of decision trees to value investigation strategies for soil pollution problems
Okx, J.P.; Stein, A.
2000-01-01
Remediation of a contaminated site usually requires costly actions, and several clean-up and sampling strategies may have to be compared by those involved in the decision-making process. In this paper several common environmental pollution problems have been addressed by using probabilistic decision
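A decision tree for valuing an investigation strategy reduces, in its simplest form, to comparing the expected costs of the branches. The sketch below uses entirely hypothetical probabilities and costs, not those of the paper.

```python
# Hypothetical two-strategy comparison for a contaminated site: remediate
# immediately, or sample first and remediate only if pollution is confirmed.
P_POLLUTED = 0.3
COST_REMEDIATE = 100.0
COST_SAMPLING = 5.0
COST_MISSED = 400.0  # liability if pollution exists but is left untreated

def expected_cost_remediate_all():
    """Branch 1: remediate unconditionally, paying the fixed cost."""
    return COST_REMEDIATE

def expected_cost_sample_first(detection_prob=0.9):
    """Branch 2: pay for sampling; remediate on detection, pay the
    liability on a miss, pay nothing extra if the site is clean."""
    detected = P_POLLUTED * detection_prob * COST_REMEDIATE
    missed = P_POLLUTED * (1 - detection_prob) * COST_MISSED
    return COST_SAMPLING + detected + missed

best = min(("remediate_all", expected_cost_remediate_all()),
           ("sample_first", expected_cost_sample_first()),
           key=lambda kv: kv[1])
```

With these numbers sampling first is cheaper in expectation; the comparison flips as the prior probability of pollution or the miss liability grows, which is the kind of trade-off the decision-tree analysis makes explicit.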
Children's Use of Meta-Cognition in Solving Everyday Problems: Children's Monetary Decision-Making
Lee, Chwee Beng; Koh, Noi Keng; Cai, Xin Le; Quek, Choon Lang
2012-01-01
The purpose of this study was to understand how children use meta-cognition in their everyday problem-solving, particularly making monetary decisions. A particular focus was to identify components of meta-cognition, such as regulation of cognition and knowledge of cognition observed in children's monetary decision-making process, the roles of…
On a Consensus Measure in a Group Multi-Criteria Decision Making Problem.
Michele Fedrizzi
2010-01-01
A method for consensus measuring in a group decision problem is presented for the multiple criteria case. The decision process is supposed to be carried out according to Saaty's Analytic Hierarchy Process, and hence using pairwise comparison among the alternatives. Using a suitable distance between the experts' judgements, a scale transformation is proposed which allows a fuzzy interpretation of the problem and the definition of a consensus measure by means of fuzzy tools as linguistic quanti...
Solving SAT problem by heuristic polarity decision-making algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper presents a heuristic polarity decision-making algorithm for solving Boolean satisfiability (SAT). The algorithm inherits many features of the current state-of-the-art SAT solvers, such as fast BCP, clause recording, restarts, etc. In addition, a preconditioning step that calculates the polarities of variables according to the cover distribution of Karnaugh map is introduced into DPLL procedure, which greatly reduces the number of conflicts in the search process. The proposed approach is implemented as a SAT solver named DiffSat. Experiments show that DiffSat can solve many "real-life" instances in a reasonable time while the best existing SAT solvers, such as Zchaff and MiniSat, cannot. In particular, DiffSat can solve every instance of Bart benchmark suite in less than 0.03 s while Zchaff and MiniSat fail under a 900 s time limit. Furthermore, DiffSat even outperforms the outstanding incomplete algorithm DLM in some instances.
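The polarity-based decision heuristic can be illustrated on a minimal DPLL skeleton. The Karnaugh-map cover analysis of DiffSat is replaced here by a crude literal-frequency polarity choice, so this is an assumption-laden sketch of the general idea, not the DiffSat algorithm.

```python
from collections import Counter

def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver; clauses are lists of nonzero ints
    (DIMACS-style literals). The branching variable and its polarity
    are chosen by literal frequency, a crude stand-in for a
    precomputed polarity heuristic."""
    if assignment is None:
        assignment = {}
    # Simplify under the current assignment and do unit propagation.
    while True:
        simplified, unit = [], None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None  # conflict: clause falsified
            if len(rest) == 1 and unit is None:
                unit = rest[0]
            simplified.append(rest)
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
        clauses = simplified
    if not simplified:
        return assignment  # all clauses satisfied
    # Decision: branch on the most frequent literal, preferred polarity first.
    lit, _ = Counter(l for c in simplified for l in c).most_common(1)[0]
    for choice in (lit > 0, lit <= 0):
        result = dpll(simplified, {**assignment, abs(lit): choice})
        if result is not None:
            return result
    return None
```

A good initial polarity choice reduces the number of conflicts encountered during search, which is the effect the paper attributes to its Karnaugh-map preconditioning step.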
janssen, Anja; Segers, Johan
2013-01-01
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In ...
Problems of engineering education and their decision involving industry
Directory of Open Access Journals (Sweden)
R. P. Simonyants
2014-01-01
Full Text Available In Russia, the problems of engineering education are connected with the political and economic upheavals of the late last century. At the same time, some leading engineering universities in Russia, such as the Bauman Moscow State Technical University (BMSTU), were resistant to the damaging effects of the crisis. But the methodology and experience of their effective work are insufficiently known. The problems of international engineering school development are also known. The first UNESCO World Report on Engineering (2010) assesses the state of engineering education as follows: the worldwide shortage of engineers is a threat to the development of society. Based on an analysis of the current state of engineering education in the world and its development tendencies, the urgency of its modernization, with a focus on the enhancement of the practical component, has been shown. Topical problems associated with innovations and modernization in engineering education in the field of aerospace technology were discussed at the first international forum, held at Beihang University (BUAA) in Beijing on 8-9 September 2012. The author attended this forum and presents his impressions of its work. It was noted that the role of Russia in the global process of forming and developing engineering education is ignored. This opinion sounded, generally, in all speakers' reports, apart from ours. The President of BUAA, Professor Jinpeng Huai, and Professor Qiushi Li talked about the problems of building the engineering education system in China. It was emphasized that in China the study of engineering education techniques was motivated by the fact that quality assurance of engineering education at U.S. universities does not meet requirements. Attention is drawn to the report of Dr. David Wisler, a representative of the U.S. aerospace industry (General Electric Aviation corporation), who actively promotes the networking technology of the CDIO Initiative. The assessment of the engineering education
The Decision of Form for Diffractive Structures in the Problem of Scattering of Radio Waves
Directory of Open Access Journals (Sweden)
A. P. Preobrazhensky
2017-02-01
Full Text Available This paper considers the problem of scattering of electromagnetic waves by different diffraction structures. The solution of the scattering problem is based on the method of integral equations. Based on the diagrams of backscattering at various frequencies of the incident wave, the decision about the form of the object is made.
The computer-aided design of a servo system as a multiple-criteria decision problem
Udink ten Cate, A.J.
1986-01-01
This paper treats the selection of controller gains of a servo system as a multiple-criteria decision problem. In contrast to the usual optimization-based approaches to computer-aided design, inequality constraints are included in the problem as unconstrained objectives. This considerably simplifies
Directory of Open Access Journals (Sweden)
Shao-Li Wang
2015-01-01
Full Text Available Aims. The priority of Chinese herbal medicines (CHMs) plus conventional treatment over conventional treatment alone for acute coronary syndrome (ACS) after percutaneous coronary intervention (PCI) was documented in the 5C trial (chictr.org number: ChiCTR-TRC-07000021). The study was designed to evaluate the 10-year effectiveness of CHMs plus conventional treatment versus conventional treatment alone with a decision-analytic model for ACS after PCI. Methods and Results. We constructed a decision-analytic Markov model to compare 6 months of additional CHMs plus conventional treatment versus conventional treatment alone for ACS patients after PCI. Sources of data came from the 5C trial and published reports. Outcomes were expressed in terms of quality-adjusted life years (QALYs). Sensitivity analyses were performed to test the robustness of the model. The model predicted that over the 10-year horizon the survival probability was 77.49% in patients with CHMs plus conventional treatment versus 77.29% in patients with conventional treatment alone. In combination with conventional treatment, 6-month CHMs might be associated with a gained 0.20% survival probability and 0.111 accumulated QALYs, respectively. Conclusions. The model suggested that treatment with CHMs, as an adjunctive therapy, in combination with conventional treatment for 6 months might improve the long-term clinical outcome in ACS patients after PCI.
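A decision-analytic Markov model of the kind described can be sketched as a small cohort model with annual cycles accumulating QALYs. All states, transition probabilities, and utility weights below are illustrative assumptions, not the 5C-trial estimates.

```python
import numpy as np

# Hypothetical three-state Markov cohort model (alive-well,
# alive-post-event, dead) with annual cycles.
P = np.array([
    [0.92, 0.05, 0.03],   # well -> well / post-event / dead
    [0.00, 0.90, 0.10],   # post-event -> post-event / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utilities = np.array([0.85, 0.70, 0.0])  # QALY weight per year per state

def run_cohort(P, utilities, years=10):
    """Propagate a cohort starting in 'well' and accumulate QALYs."""
    state = np.array([1.0, 0.0, 0.0])
    qalys = 0.0
    for _ in range(years):
        state = state @ P           # one annual cycle
        qalys += float(state @ utilities)
    survival = float(state[:2].sum())
    return survival, qalys

survival_10y, total_qalys = run_cohort(P, utilities)
```

Comparing two such models whose transition probabilities differ by treatment arm yields exactly the survival-probability and QALY differences the abstract reports.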
Reconciliation as a tool for decision making within decision tree related to insolvency problems
Directory of Open Access Journals (Sweden)
Tomáš Poláček
2016-05-01
Full Text Available Purpose of the article: The paper draws on the results of previous studies of the recoverability of creditors' claims, researched from the debtor's point of view with respect to his/her debts on the Czech financial market. A company that has fallen into bankruptcy proceedings has several legislatively supported options for dealing with this situation and repaying creditors. Each of the options has been specified as a variant in a decision-making tree. This paper focuses on the third option of evaluation – reconciliation. The heuristic generates all missing information items. The result is then focused on the comparison and evaluation of the best ways to repay the debt, including a solution for the future continuation of a company currently in liquidation and quantification of the percentage refund of creditors' claims. A realistic case study is presented in full detail, together with an introduction to decision making under uncertainty in insolvency proceedings. Methodology/methods: Solving within a decision tree with partial ignorance of probabilities, using reconciliation. Scientific aim: Comparison and evaluation of the best ways to repay the debt, including a solution for the future continuation of a company currently in liquidation and quantification of the percentage refund of creditors' claims. Findings: Prediction of future actions in dealing with the insolvency act and bankruptcy proceedings, and quicker and more effective agreement on compromises among all creditors and the debtor. Conclusions: Finding the best way of repayment and avoiding termination, for both of the interested parties (creditor and debtor).
International Nuclear Information System (INIS)
Louie, Alexander V.; Rodrigues, George; Hannouf, Malek; Zaric, Gregory S.; Palma, David A.; Cao, Jeffrey Q.; Yaremko, Brian P.; Malthaner, Richard; Mocanu, Joseph D.
2011-01-01
Purpose: To compare the quality-adjusted life expectancy and overall survival in patients with Stage I non–small-cell lung cancer (NSCLC) treated with either stereotactic body radiation therapy (SBRT) or surgery. Methods and Materials: We constructed a Markov model to describe health states after either SBRT or lobectomy for Stage I NSCLC for a 5-year time frame. We report various treatment strategy survival outcomes stratified by age, sex, and pack-year history of smoking, and compared these with an external outcome prediction tool (Adjuvant! Online). Results: Overall survival, cancer-specific survival, and other causes of death as predicted by our model correlated closely with those predicted by the external prediction tool. Overall survival at 5 years as predicted by baseline analysis of our model is in favor of surgery, with a benefit ranging from 2.2% to 3.0% for all cohorts. Mean quality-adjusted life expectancy ranged from 3.28 to 3.78 years after surgery and from 3.35 to 3.87 years for SBRT. The utility threshold for preferring SBRT over surgery was 0.90. Outcomes were sensitive to quality of life, the proportion of local and regional recurrences treated with standard vs. palliative treatments, and the surgery- and SBRT-related mortalities. Conclusions: The role of SBRT in the medically operable patient is yet to be defined. Our model indicates that SBRT may offer comparable overall survival and quality-adjusted life expectancy as compared with surgical resection. Well-powered prospective studies comparing surgery vs. SBRT in early-stage lung cancer are warranted to further investigate the relative survival, quality of life, and cost characteristics of both treatment paradigms.
Decision-Making and Problem-Solving Approaches in Pharmacy Education.
Martin, Lindsay C; Donohoe, Krista L; Holdford, David A
2016-04-25
Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but is silent on the types of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care.
Decision trees and decision committee applied to star/galaxy separation problem
Vasconcellos, Eduardo Charles
Vasconcellos et al. [1] studied the efficiency of 13 different decision tree algorithms applied to photometric data in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7) to perform star/galaxy separation. Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. In that work we extensively explored the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. We found that the Functional Tree algorithm (FT) yields the best results by the mean completeness function (galaxy true positive rate) in two magnitude intervals, 14 ≤ r ≤ 19 and r > 19 (82.1% in the latter). We compared the FT classification to the SDSS parametric, 2DPHOT and Ball et al. (2006) classifications. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (about 2.5%). We also examined the SDSS parametric classifier (psfMag − modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We found that currently stars in close pairs are often misclassified as galaxies, and suggested a new cut to improve the classifier. We then applied our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects with r ≥ 14. In the present work we train six FT classifiers with randomly selected objects from the same 884,126 SDSS-DR7 objects with spectroscopic data used before. Both the decision committee and our previous single FT classifier will be applied to the new objects from SDSS data releases eight, nine and ten. Finally, we will compare the performances of both methods on this new data set. [1] Vasconcellos, E. C.; de Carvalho, R. R.; Gal, R. R.; La Barbera, F. L.; Capelato, H. V.; de Campos Velho, H. F.; Trevisan, M.; Ruiz, R. S. R. Decision Tree Classifiers for Star/Galaxy Separation. The Astronomical Journal, Volume 141, Issue 6, 2011.
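The two quality measures quoted above, completeness (galaxy true positive rate) and contamination, can be computed directly from matched label lists; the labels in this sketch are hypothetical, not SDSS data.

```python
# Completeness (fraction of true galaxies recovered) and contamination
# (fraction of predicted galaxies that are really stars), the two metrics
# used in the star/galaxy separation abstract. Labels are invented.

def completeness(true_labels, pred_labels, positive="galaxy"):
    tp = sum(1 for t, p in zip(true_labels, pred_labels)
             if t == positive and p == positive)
    actual_pos = sum(1 for t in true_labels if t == positive)
    return tp / actual_pos

def contamination(true_labels, pred_labels, positive="galaxy"):
    fp = sum(1 for t, p in zip(true_labels, pred_labels)
             if t != positive and p == positive)
    pred_pos = sum(1 for p in pred_labels if p == positive)
    return fp / pred_pos

true_y = ["galaxy"] * 8 + ["star"] * 4
pred_y = ["galaxy"] * 7 + ["star"] + ["galaxy"] + ["star"] * 3

print(completeness(true_y, pred_y))   # 7 of 8 true galaxies recovered
print(contamination(true_y, pred_y))  # 1 star among 8 predicted galaxies
```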
Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach
Demirer, Nazli
The control of systems with autonomous mobile agents has been a point of interest recently, with many applications such as surveillance, coverage, searching over an area with probabilistic target locations, or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems considering a hierarchical control structure, where coordination of the whole swarm is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands whose execution will lead to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms to achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain based method to control the density distribution of the whole system, where the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication with a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. Here, the desired distribution is determined based on the mission requirements; for example, in the application of area search, the desired distribution should match closely with the probabilistic target locations. The proposed method is applicable for both
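One standard way to realize such a density control law is a Metropolis-Hastings construction: every agent applies the same transition matrix independently (no inter-agent communication), and the swarm density converges to the prescribed stationary distribution. The 4-cell ring topology and target density below are invented for illustration and are not taken from the dissertation.

```python
# Sketch: build a Markov transition matrix whose stationary distribution is
# a prescribed swarm density, via Metropolis-Hastings on a 4-cell ring.

N = 4
target = [0.1, 0.2, 0.3, 0.4]          # desired steady-state density (hypothetical)
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

# Symmetric proposal: move to each neighbor with prob 1/2, accept with
# min(1, target[j] / target[i]); rejected proposals keep the agent in place.
P = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in neighbors[i]:
        P[i][j] = 0.5 * min(1.0, target[j] / target[i])
    P[i][i] = 1.0 - sum(P[i][j] for j in neighbors[i])

# Propagate an arbitrary initial density; it converges to `target`.
dist = [1.0, 0.0, 0.0, 0.0]
for _ in range(200):
    dist = [sum(dist[i] * P[i][j] for i in range(N)) for j in range(N)]

print([round(x, 3) for x in dist])
```

Detailed balance (target[i] * P[i][j] = target[j] * P[j][i]) guarantees the target density is stationary; adding transition and safety constraints on P is the harder problem the dissertation addresses.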
Using Continuous Action Spaces to Solve Discrete Problems
van Hasselt, Hado; Wiering, Marco
2009-01-01
Real-world control problems are often modeled as Markov Decision Processes (MDPs) with discrete action spaces to facilitate the use of the many reinforcement learning algorithms that exist to find solutions for such MDPs. For many of these problems an underlying continuous action space can be
Application of goal programming to decision problem on optimal allocation of radiation workers
International Nuclear Information System (INIS)
Sa, Sangduk; Narita, Masakuni
1993-01-01
This paper is concerned with optimal planning in a multiple objective decision-making problem of allocating radiation workers to workplaces associated with occupational exposure. The model problem is formulated with the application of goal programming, which effectively accommodates the diverse and conflicting factors influencing the optimal decision. The formulation is based on data simulating the typical situations encountered at operating facilities such as nuclear power plants, where exposure control is critical to the management. Multiple goals set by the decision-maker/manager who has the operational responsibilities for radiological protection are illustrated in terms of work requirements, exposure constraints of the places, desired allocation of specific personnel and so on. Test results of the model are considered to indicate that the model structure and its solution process can provide the manager with a good set of analyses of his problems in implementing the optimization review of radiation protection during normal operation. (author)
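A weighted goal-programming formulation of this kind of allocation problem can be illustrated as follows; the workplaces, dose figures, staffing goals, and weights are invented for demonstration, and a real model would solve an LP with explicit deviation variables rather than enumerate.

```python
# Illustrative weighted goal-programming sketch for allocating radiation
# workers to two workplaces. All figures are hypothetical.

import itertools

DOSE = {"A": 2.0, "B": 0.5}      # assumed mSv per worker per task
REQUIRED = {"A": 3, "B": 4}      # desired staffing per workplace (goal)
DOSE_BUDGET = 7.0                # desired collective dose ceiling (goal, mSv)
WEIGHTS = {"staff": 1.0, "dose": 2.0}   # manager's relative priorities

def goal_score(alloc):
    """Weighted sum of deviations from the staffing and dose goals."""
    staff_dev = sum(abs(alloc[w] - REQUIRED[w]) for w in alloc)
    total_dose = sum(DOSE[w] * alloc[w] for w in alloc)
    dose_dev = max(0.0, total_dose - DOSE_BUDGET)   # only overshoot penalized
    return WEIGHTS["staff"] * staff_dev + WEIGHTS["dose"] * dose_dev

# Enumerate all allocations of 0..6 workers per workplace and keep the best.
best = min(
    ({"A": a, "B": b} for a, b in itertools.product(range(7), repeat=2)),
    key=goal_score,
)
print(best, goal_score(best))
```

With these numbers, meeting the staffing goal exactly would exceed the dose goal, so the optimum under-staffs workplace A by one worker — the kind of trade-off goal programming is designed to expose.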
Direct Tests on Individual Behaviour in Small Decision-Making Problems
Directory of Open Access Journals (Sweden)
Takemi Fujikawa
2007-10-01
Full Text Available This paper provides an empirical and experimental analysis of individual decision making in small decision-making problems, based on a series of laboratory experiments. Two experimental treatments with binary small decision-making problems are implemented: (1) the search treatment, in which the payoff distribution is unknown to the decision makers, and (2) the choice treatment, in which the payoff distribution is known. A first observation is that in the search treatment the tendency to select the best reply to past performance, together with misestimation of the payoff distribution, can lead to robust deviations from expected value maximisation. A second observation concerns choice problems with two options of the same expected value: one option is riskier, with larger payoff variability; the other is moderate, with less payoff variability. Experimental results show that the more often decision makers choose the risky option, the higher the points they can achieve, ex post. Finally, I investigate the exploration tendency. Comparison of results between the search treatment and the choice treatment reveals that the additional information available to the decision makers enhances expected value maximisation.
Support for decision making and problem solving in abnormal conditions in nuclear power plants
International Nuclear Information System (INIS)
Embrey, D.; Humphreys, P.
1985-01-01
Under abnormal plant conditions effective decision support has to take into account the operator's or other decision maker's mental model of the plant, derived from operating experience. This will be different from the engineering model incorporated in Disturbance Analysis Systems. Recently developed approaches for gaining access to the structure of this mental model provided the basis for the development of an interactive computer system capable of representing and exploring expert knowledge concerning inferences about causal patterns, starting from the information available to the operator in the control room. This system has potential application as an interactive diagnostic aid in support of decision making and problem solving during abnormal conditions. (Auth.)
Optimal dividend distribution under Markov regime switching
Jiang, Z.; Pistorius, M.
2012-01-01
We investigate the problem of optimal dividend distribution for a company in the presence of regime shifts. We consider a company whose cumulative net revenues evolve as a Brownian motion with positive drift that is modulated by a finite state Markov chain, and model the discount rate as a
Markov chains with quasitoeplitz transition matrix
Directory of Open Access Journals (Sweden)
Alexander M. Dukhovny
1989-01-01
Full Text Available This paper investigates a class of Markov chains which are frequently encountered in various applications (e.g. queueing systems, dams and inventories with feedback. Generating functions of transient and steady state probabilities are found by solving a special Riemann boundary value problem on the unit circle. A criterion of ergodicity is established.
Hartfiel, Darald J
1998-01-01
In this study extending classical Markov chain theory to handle fluctuating transition matrices, the author develops a theory of Markov set-chains and provides numerous examples showing how that theory can be applied. Chapters are concluded with a discussion of related research. Readers who can benefit from this monograph are those interested in, or involved with, systems whose data is imprecise or that fluctuate with time. A background equivalent to a course in linear algebra and one in probability theory should be sufficient.
ERDOS 1.0. Emergency response decisions as problems of optimal stopping
International Nuclear Information System (INIS)
Pauwels, N.
1998-11-01
The ERDOS software is a stochastic dynamic program to support the decision problem of preventively evacuating the workers of an industrial company threatened by a nuclear accident that may take place in the near future with a particular probability. ERDOS treats this problem as one of optimal stopping: the governmental decision maker initially holds a call option enabling him to postpone the evacuation decision and observe the further evolution of the alarm situation. As such, he has to decide on the optimal point in time to exercise this option, i.e. to take the irreversible decision to evacuate the threatened workers. ERDOS allows one to calculate the expected costs of an optimal intervention strategy and to compare this outcome with the costs resulting from a myopic evacuation decision that ignores the prospect of more complete information at later stages of the decision process. Furthermore, ERDOS determines the free boundary, giving the critical severity as a function of time that will trigger immediate evacuation in case it is exceeded. Finally, the software provides useful insights into the financial implications of losing time during the initial stages of the decision process (due to the gathering of information, discussions on the intervention strategy and so on).
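The optimal-stopping logic behind such a tool can be sketched as finite-horizon backward induction; the severity states, transition probabilities, and costs below are invented for illustration and do not come from ERDOS.

```python
# Optimal-stopping sketch in the spirit of ERDOS: each period the decision
# maker either evacuates now (irreversible, known cost) or waits one period
# and observes how the alarm severity evolves. All numbers are hypothetical.

SEVERITIES = [0, 1, 2]                 # 0 = calming down, 2 = severe
TRANS = [                              # assumed severity transition matrix
    [0.8, 0.2, 0.0],
    [0.3, 0.4, 0.3],
    [0.0, 0.3, 0.7],
]
EVAC_COST = 10.0                       # cost of evacuating now
ACCIDENT_COST = [0.0, 20.0, 60.0]      # expected cost at horizon if never evacuated
HORIZON = 5

def solve():
    """Backward induction: value[t][s] and the stop/wait policy."""
    value = [[0.0] * 3 for _ in range(HORIZON + 1)]
    policy = [[None] * 3 for _ in range(HORIZON)]
    value[HORIZON] = ACCIDENT_COST[:]
    for t in range(HORIZON - 1, -1, -1):
        for s in SEVERITIES:
            wait = sum(TRANS[s][s2] * value[t + 1][s2] for s2 in SEVERITIES)
            value[t][s] = min(EVAC_COST, wait)
            policy[t][s] = "evacuate" if EVAC_COST < wait else "wait"
    return value, policy

value, policy = solve()
# Early on, waiting dominates everywhere (the option value of information);
# near the horizon, high severity triggers evacuation.
print(policy[0], policy[4])
```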
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
Confluence reduction for Markov automata
Timmer, Mark; Katoen, Joost P.; van de Pol, Jaco; Stoelinga, Mariëlle Ida Antoinette
2016-01-01
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. As expected, the state space explosion threatens the analysability of these models. We therefore introduce confluence reduction for Markov automata, a powerful reduction
Process algebra and Markov chains
Brinksma, E.; Hermanns, H.; Brinksma, E.; Hermanns, H.; Katoen, J.P.
2001-01-01
This paper surveys and relates the basic concepts of process algebra and the modelling of continuous time Markov chains. It provides basic introductions to both fields, where we also study the Markov chains from an algebraic perspective, viz. that of Markov chain algebra. We then proceed to study
Decision-making and problem-solving methods in automation technology
Hankins, W. W.; Pennington, J. E.; Barker, L. K.
1983-01-01
The state of the art in the automation of decision making and problem solving is reviewed. The information upon which the report is based was derived from literature searches, visits to university and government laboratories performing basic research in the area, and a 1980 Langley Research Center sponsored conference on the subject. It is the contention of the authors that the technology in this area is being generated by research primarily in the three disciplines of Artificial Intelligence, Control Theory, and Operations Research. Under the assumption that the state of the art in decision making and problem solving is reflected in the problems being solved, specific problems and methods of their solution are often discussed to elucidate particular aspects of the subject. Synopses of the following major topic areas comprise most of the report: (1) detection and recognition; (2) planning and scheduling; (3) learning; (4) theorem proving; (5) distributed systems; (6) knowledge bases; (7) search; (8) heuristics; and (9) evolutionary programming.
Composable Markov Building Blocks
Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.; Prade, H.; Subrahmanian, V.S.
2007-01-01
In situations where disjunct parts of the same process are described by their own first-order Markov models and only one model applies at a time (activity in one model coincides with non-activity in the other models), these models can be joined together into one. Under certain conditions, nearly all
Composable Markov Building Blocks
Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.
2007-01-01
In situations where disjunct parts of the same process are described by their own first-order Markov models, these models can be joined together under the constraint that there can only be one activity at a time, i.e. the activities of one model coincide with non-activity in the other models. Under
Solan, Eilon; Vieille, Nicolas
2015-01-01
We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
Indian Academy of Sciences (India)
Markov Chain Monte Carlo – Examples. Arnab Chakraborty. Resonance – Journal of Science Education, General Article, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034
Partially Hidden Markov Models
DEFF Research Database (Denmark)
Forchhammer, Søren Otto; Rissanen, Jorma
1996-01-01
Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...
ERCAN, Merve; YILDIRIM, Meral; OTURAK, Çiğdem; EREN, Tamer
2018-01-01
Strategy games occupy a very important place in the gaming industry. In this study, character selection is made according to the rival team, which is created by developing scenarios for the Summoner's Rift and Howling Abyss modes in the League of Legends (LoL) strategy game. In order to solve the problem, the Analytical Hierarchy Process (AHP), TOPSIS and PROMETHEE multicriteria decision-making methods are utilized in the Summoner's Rift and Howling Abyss modes. For the problem, five alternative...
Integrated assessment of the global warming problem. A decision-analytical approach
International Nuclear Information System (INIS)
Van Lenthe, J.; Hendrickx, L.; Vlek, C.A.J.
1995-01-01
The project on the title subject aims at developing a policy-oriented methodology for the integrated assessment of the global warming problem. Decision analysis in general, and influence diagrams in particular, appear to constitute an appropriate integrated assessment methodology. The influence-diagram approach is illustrated with a preliminary integrated model of the global warming problem. In later stages of the research, attention will shift from the methodology of integrated assessment to the contents of integrated models. 4 figs., 5 refs
Integrated assessment of the global warming problem: A decision-analytical approach
International Nuclear Information System (INIS)
Van Lenthe, J.; Hendrickx, L.; Vlek, C.A.J.
1994-12-01
The multi-disciplinary character of the global warming problem calls for an integrated assessment approach for ordering and combining the various physical, ecological, economic, and sociological results. The Netherlands initiated their own National Research Program (NRP) on Global Air Pollution and Climate Change. The first phase (NRP-1) identified integration as one of five central research themes. The second phase (NRP-2) shows a growing concern for integrated assessment issues. The current two-year research project 'Characterizing the risks: a comparative analysis of the risks of global warming and of relevant policy options', which started in September 1993, comes under the integrated assessment part of the Dutch NRP. The first part of the interim report describes the search for an integrated assessment methodology. It starts by emphasizing the need for integrated assessment at a relatively high level of aggregation and from a policy point of view. The conclusion is that a decision-analytical approach might fit the purpose of policy-oriented integrated modeling of the global warming problem. The discussion proceeds with an account of decision analysis and its explicit incorporation and analysis of uncertainty. Then influence diagrams, a relatively recent development in decision analysis, are introduced as a useful decision-analytical approach for integrated assessment. Finally, a software environment for creating and analyzing complex influence diagram models is discussed. The second part of the interim report provides a first, provisional integrated model of the global warming problem, emphasizing the illustration of the decision-analytical approach. Major problem elements are identified and an initial problem structure is developed. The problem structure is described in terms of hierarchical influence diagrams. In some places the qualitative structure is filled in with quantitative data.
Expert Team Decision-Making and Problem Solving: Development and Learning
Directory of Open Access Journals (Sweden)
Simona Tancig
2009-12-01
Full Text Available Traditional research on decision-making has not contributed significantly to a better understanding of professional judgment and decisions in practice. Researchers dealing with decision-making in various professions and natural settings initiated new perspectives, called naturalistic, which put the expert in the focus of research; expertise thus entered the core of research on decision-making in natural situations. An expert team is more than a group of experts. It is defined as a group of interdependent team members with a high level of task-related expertise and mastery of team processes. There have been several advances in the understanding of expertise and the team. By combining theories, models, and empirical evidence we try to explain the effectiveness and adaptation of expert teams in problem-solving and decision-making in complex and dynamic situations. Considerable research has been devoted to finding out what the characteristics of experts and expert teams are during their optimal functioning. These characteristics are discussed as input, process and output factors. As input variables, cognitive, social-affective, and motivational characteristics are presented. Process variables encompass individual and team learning, problem solving and decision-making, as presented in Kolb's cycle of learning, in the deeper structures of dialogue and discussion, and in the phenomena of collaboration, alignment, and distributed cognition. Outcome variables deal with task performance – activities.
Archbald, Doug
2010-01-01
This article offers lessons from an initiative refashioning the doctoral thesis in an education leadership program. The program serves a practitioner clientele; most are teachers and administrators. The new model for the thesis emphasizes leadership, problem solving, decision making, and organizational improvement. The former model was a…
DEFF Research Database (Denmark)
Toman, David; Weddel, Grant Edwin
2001-01-01
We present a decision procedure for the logical implication problem of a boolean complete DL dialect that includes attributes, roles, inverse roles and a new concept constructor capable of expressing a variety of equality- and order-generating dependencies. The procedure underlies a mapping o...
Student Debt, Problem-Solving, and Decision-Making of Adult Learners: A Basic Qualitative Study
Brooks, William J.
2013-01-01
A basic qualitative research study was conducted to develop insights into how adult learners employ problem-solving and decision-making (PSDM) when considering college financing, student loans, and student debt. Using the social media Website Facebook, eight qualified participants were recruited. Participants were interviewed via telephone, and…
Myopic Loss Aversion: Demystifying the Key Factors Influencing Decision Problem Framing
Hardin, Andrew M.; Looney, Clayton Arlen
2012-01-01
Advancement of myopic loss aversion theory has been hamstrung by conflicting results, methodological inconsistencies, and a piecemeal approach toward understanding the key factors influencing decision problem framing. A series of controlled experiments provides a more holistic view of the variables promoting myopia. Extending the information…
Introduction to the numerical solutions of Markov chains
Stewart, Williams J
1994-01-01
A cornerstone of applied probability, Markov chains can be used to help model how plants grow, chemicals react, and atoms diffuse - and applications are increasingly being found in such areas as engineering, computer science, economics, and education. To apply the techniques to real problems, however, it is necessary to understand how Markov chains can be solved numerically. In this book, the first to offer a systematic and detailed treatment of the numerical solution of Markov chains, William Stewart provides scientists on many levels with the power to put this theory to use in the actual world, where it has applications in areas as diverse as engineering, economics, and education. His efforts make for essential reading in a rapidly growing field. Here, Stewart explores all aspects of numerically computing solutions of Markov chains, especially when the state space is huge. He provides extensive background to both discrete-time and continuous-time Markov chains and examines many different numerical computing metho...
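The most basic computation treated in this setting, finding the stationary vector π with πP = π, can be sketched with the power method on a small chain; the 3-state transition matrix below is hypothetical.

```python
# Power-method sketch for the stationary distribution of an irreducible
# discrete-time Markov chain: iterate pi <- pi P until pi P = pi.
# The transition matrix is an invented 3-state example.

P = [
    [0.5, 0.4, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]

def stationary_power(P, iters=500):
    n = len(P)
    pi = [1.0 / n] * n                 # arbitrary starting distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary_power(P)
# Residual of the fixed-point equation pi P = pi.
residual = max(abs(sum(pi[i] * P[i][j] for i in range(3)) - pi[j])
               for j in range(3))
print([round(x, 4) for x in pi], residual < 1e-12)
```

For huge state spaces, plain power iteration is only the starting point; the book's subject is precisely the more refined iterative and decompositional methods needed there.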
Directory of Open Access Journals (Sweden)
Nikša Jajac
2013-02-01
Full Text Available The aim of this paper is to present a Decision Support Concept (DSC) for the management of construction projects. The focus of our research is the application of multicriteria methods (MCM) to decision making in the planning phase of construction projects (related to the problem of construction site selection). The problem is significant from many different aspects, such as the economic and civil engineering aspects, which indicates the necessity of evaluating multiple sites by several different criteria. Therefore, a DSC for construction site selection based on the PROMETHEE method is designed. In order to define the appropriate criteria, their weights and preference functions for the concept, three groups of stakeholders (investors, construction experts and real estate market experts) are involved in its design. The AHP method has been used for the determination of criteria weights. The model has been tested on the problem of site selection for the construction of a residential-commercial building in the four largest cities in Croatia.
Optimal Time-Abstract Schedulers for CTMDPs and Markov Games
Directory of Open Access Journals (Sweden)
Markus Rabe
2010-06-01
Full Text Available We study time-bounded reachability in continuous-time Markov decision processes for time-abstract scheduler classes. Such reachability problems play a paramount role in dependability analysis and the modelling of manufacturing and queueing systems. Consequently, their analysis has been studied intensively, and techniques for the approximation of optimal control are well understood. From a mathematical point of view, however, the question of approximation is secondary compared to the fundamental question whether or not optimal control exists. We demonstrate the existence of optimal schedulers for the time-abstract scheduler classes for all CTMDPs. Our proof is constructive: We show how to compute optimal time-abstract strategies with finite memory. It turns out that these optimal schedulers have an amazingly simple structure---they converge to an easy-to-compute memoryless scheduling policy after a finite number of steps. Finally, we show that our argument can easily be lifted to Markov games: We show that both players have a likewise simple optimal strategy in these more general structures.
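For a fixed scheduler, a CTMDP reduces to a continuous-time Markov chain, and the time-bounded reachability probabilities at issue here can be approximated by uniformization; the 3-state rate matrix below is a hypothetical example, not one from the paper.

```python
# Time-bounded reachability in a CTMC via uniformization: the probability of
# hitting the (absorbing) goal state within time t is a Poisson-weighted sum
# of goal probabilities in the uniformized discrete-time chain.

import math

Q = [
    [-2.0, 1.5, 0.5],
    [1.0, -3.0, 2.0],
    [0.0, 0.0, 0.0],   # state 2 = goal, made absorbing for reachability
]
LAMBDA = 3.0           # uniformization rate, >= max exit rate of Q

n = len(Q)
# Uniformized DTMC: U = I + Q / LAMBDA.
U = [[(1.0 if i == j else 0.0) + Q[i][j] / LAMBDA for j in range(n)]
     for i in range(n)]

def reach_within(t, start=0, goal=2, terms=80):
    """P(hit `goal` by time t) = sum_k Poisson(k; LAMBDA*t) * P_k(in goal)."""
    prob, dist = 0.0, [0.0] * n
    dist[start] = 1.0
    for k in range(terms):
        weight = math.exp(-LAMBDA * t) * (LAMBDA * t) ** k / math.factorial(k)
        prob += weight * dist[goal]
        dist = [sum(dist[i] * U[i][j] for i in range(n)) for j in range(n)]
    return prob

print(round(reach_within(1.0), 4))
```

In a CTMDP the extra difficulty is the sup over schedulers; the paper's result is that this optimum is attained by simple finite-memory time-abstract schedulers.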
The intertemporal choice behaviour. The role of emotions in a multi-agent decision problem
Directory of Open Access Journals (Sweden)
Angelarosa Longo
2015-12-01
Full Text Available Decision neuroscience has shown the positive and negative sides of emotions in intertemporal choices. Psychological evidence, indeed, points out anomalies: impulsivity modifies the discount function of each individual, and the false consensus effect increases the degree of consensus in a multi-agent decision problem. An experiment (Engelmann and Strobel 2004) demonstrates that the relevance of the false consensus effect depends on the difficulty of the information retrieval, so the underlying mechanism is an information-processing deficiency rather than egocentricity. We demonstrate that emotions cannot cause anomalies in a cooperative strategic interaction because information is explicit.
Generalized Markov branching models
Li, Junping
2005-01-01
In this thesis, we first considered a modified Markov branching process incorporating both state-independent immigration and resurrection. After establishing the criteria for regularity and uniqueness, explicit expressions for the extinction probability and mean extinction time are presented. The criteria for recurrence and ergodicity are also established. In addition, an explicit expression for the equilibrium distribution is presented. We then moved on to investigate the basic proper...
Ragain, Stephen; Ugander, Johan
2016-01-01
As datasets capturing human choices grow in richness and scale---particularly in online domains---there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansio...
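As an illustrative aside, the core construction behind the PCMC idea can be sketched in a few lines: choice probabilities are read off as the stationary distribution of a continuous-time chain whose off-diagonal entries are pairwise comparison rates. The rate matrix below is invented for illustration; this is a sketch of the idea, not the authors' code.

```python
import numpy as np

def pcmc_choice_probs(Q):
    """Stationary distribution of the continuous-time chain with rate matrix Q.

    Q[i][j] (i != j) is the transition rate from alternative i to j; the
    diagonal is set so that each row sums to zero. Illustrative sketch of
    the pairwise-choice Markov chain idea, not the authors' implementation.
    """
    Q = np.array(Q, dtype=float)
    n = Q.shape[0]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Solve pi @ Q = 0 together with sum(pi) = 1 as a bordered linear system.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Three alternatives; larger inflow rates into an alternative raise its probability.
rates = [[0, 1, 1],
         [2, 0, 1],
         [2, 2, 0]]
print(pcmc_choice_probs(rates))  # stationary distribution (0.5, 0.3, 0.2)
```

For this rate matrix the balance equations give exactly (0.5, 0.3, 0.2): alternative 0 receives the most inflow and leaves the slowest.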
Fannes, Mark; Wouters, Jeroen
2012-01-01
We study a quantum process that can be considered as a quantum analogue of the classical Markov process. We specifically construct a version of these processes for free fermions. For such free fermionic processes we calculate the entropy density. This can be done either directly using Szegő's theorem for asymptotic densities of functions of Toeplitz matrices, or through an extension of said theorem to rates of functions, which we present in this article.
Markov Switching Autoregressive Modelling
Ariyani, Fiqria Devi; Warsito, Budi; Yasin, Hasbi
2014-01-01
The transition from depreciation to appreciation of an exchange rate is one kind of regime switching that is ignored by classic time series models such as ARIMA, ARCH, or GARCH. Therefore, economic variables are modelled by the Markov Switching Autoregressive (MSAR) model, which accounts for regime switching. MLE is not directly applicable to parameter estimation because the regime is an unobservable variable, so filtering and smoothing are applied to infer the regime probabilities of the observations. Using this model, tran...
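The filtering step mentioned above (the Hamilton filter) can be sketched for a two-regime Gaussian model. All numbers below are invented, and the per-regime distributions are a zero-order stand-in for full autoregressive dynamics; an illustrative sketch, not the paper's implementation.

```python
import math

def hamilton_filter(y, P, means, sigmas, p0):
    """One-pass Hamilton filter for a 2-regime Gaussian model (sketch).

    P[i][j]            : probability of switching from regime i to regime j
    means[i], sigmas[i]: per-regime Gaussian parameters (a stand-in for the
                         AR dynamics of a full MSAR model)
    p0                 : initial regime probabilities
    Returns the filtered regime probabilities after each observation.
    """
    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    probs, p = [], list(p0)
    for x in y:
        # Predict: propagate the regime probabilities one step forward.
        pred = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]
        # Update: reweight by the likelihood of the new observation.
        lik = [pred[j] * pdf(x, means[j], sigmas[j]) for j in range(2)]
        z = sum(lik)
        p = [l / z for l in lik]
        probs.append(p)
    return probs

P = [[0.95, 0.05], [0.10, 0.90]]  # sticky regimes
f = hamilton_filter([0.1, 0.0, 3.0, 3.2], P,
                    means=[0.0, 3.0], sigmas=[1.0, 1.0], p0=[0.5, 0.5])
print(f[-1])  # the high-mean ("appreciation") regime dominates at the end
```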
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in understanding the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result, we start by explaining two techniques for dealing with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques, a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains, and it answers a question from matrix analysis that had been open since 1973, i.e., whether Lieb's triple ma...
A relation between non-Markov and Markov processes
International Nuclear Information System (INIS)
Hara, H.
1980-01-01
With the aid of a transformation technique, it is shown that some memory effects in non-Markov processes can be eliminated. In other words, some non-Markov processes can be rewritten in the form of a random walk process, i.e. a Markov process. To this end, two model processes which have some memory or correlation in the random walk process are introduced. An explanation of the memory in the processes is given. (orig.)
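The transformation idea, trading memory for extra state, can be illustrated with a toy persistent random walk: the walk is non-Markov in its position alone, but Markov in the pair (position, last step). The example below is invented, not one of the paper's two model processes.

```python
import random

def correlated_walk(steps, persist=0.8, seed=0):
    """Persistent random walk: the next step repeats the previous one with
    probability `persist`. The position x alone is not Markov, but the
    augmented state (x, last_step) is; a toy version of eliminating memory
    by enlarging the state space.
    """
    rng = random.Random(seed)
    x, last = 0, 1
    path = [x]
    for _ in range(steps):
        step = last if rng.random() < persist else -last
        x, last = x + step, step
        path.append(x)
    return path

print(correlated_walk(10))
```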
A hybrid multiple attribute decision making method for solving problems of industrial environment
Directory of Open Access Journals (Sweden)
Dinesh Singh
2011-01-01
The selection of an appropriate alternative in the industrial environment is an important but, at the same time, complex and difficult problem because of the availability of a wide range of alternatives and the similarity among them. Therefore, there is a need for simple, systematic, and logical methods or mathematical tools to guide decision makers in considering a number of selection attributes and their interrelations. In this paper, a hybrid decision making method combining the graph theory and matrix approach (GTMA) and the analytical hierarchy process (AHP) is proposed. Three examples are presented to illustrate the potential of the proposed GTMA-AHP method, and the results are compared with those obtained using other decision making methods.
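The AHP half of such a hybrid can be sketched as follows: attribute priorities are the principal eigenvector of a reciprocal pairwise comparison matrix, obtainable by power iteration. The comparison matrix below is invented, and the GTMA part of the hybrid is not reproduced; a sketch only.

```python
import numpy as np

def ahp_weights(C, iters=100):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix,
    computed by power iteration. Illustrative sketch of the AHP step only.
    """
    C = np.array(C, dtype=float)
    w = np.ones(C.shape[0])
    for _ in range(iters):
        w = C @ w
        w /= w.sum()  # renormalize so the weights sum to one
    return w

# Reciprocal matrix: attribute 0 is 3x as important as 1 and 5x as important as 2.
C = [[1, 3, 5],
     [1 / 3, 1, 5 / 3],
     [1 / 5, 3 / 5, 1]]
print(ahp_weights(C))  # proportional to (1, 1/3, 1/5)
```

Because this matrix is perfectly consistent, the weights come out exactly proportional to (1, 1/3, 1/5); for an inconsistent matrix the eigenvector smooths the judgments.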
Effects of the Information Retrieval Process on Decision Making and Problem Solving: An Empirical Study
Directory of Open Access Journals (Sweden)
Burcu Keten
2012-09-01
Individuals who are unaware of a need for information and/or who have not experienced the information retrieval process while meeting such a need cannot be part of an information society. Only those individuals who are aware that information is essential to the problem-solving and decision-making processes, who are equipped with information retrieval and utilization skills, and who can integrate such skills into their daily lives can be part of an information society and attain the capability of performing properly in their societal roles, and thus ultimately of shaping their society. In this context, this article defines the elements of the information retrieval process, starting with the concept of information, and studies the influence of the information retrieval process on problem solving and decision making.
Directory of Open Access Journals (Sweden)
Dheeraj Kumar Joshi
2018-03-01
Uncertainties due to randomness and fuzziness comprehensively exist in control and decision support systems. In the present study, we introduce the notion of the occurrence probability of possible values into the hesitant fuzzy linguistic element (HFLE) and define the hesitant probabilistic fuzzy linguistic set (HPFLS) for ill-structured and complex decision making problems. HPFLS provides a single framework where both stochastic and non-stochastic uncertainties can be efficiently handled along with hesitation. We also propose expected mean, variance, score and accuracy functions and basic operations for HPFLS. Weighted and ordered weighted aggregation operators for HPFLS are also defined for their application in multi-criteria group decision making (MCGDM) problems. We propose a MCGDM method with HPFL information, which is illustrated by an example. A real case study is also undertaken to rank State Bank of India, InfoTech Enterprises, I.T.C., H.D.F.C. Bank, Tata Steel, Tata Motors and Bajaj Finance using real data. The proposed HPFLS-based MCGDM method is also compared with two HFL-based decision making methods.
Markov's theorem and algorithmically non-recognizable combinatorial manifolds
International Nuclear Information System (INIS)
Shtan'ko, M A
2004-01-01
We prove the theorem of Markov on the existence of an algorithmically non-recognizable combinatorial n-dimensional manifold for every n≥4. We construct for the first time a concrete manifold which is algorithmically non-recognizable. A strengthened form of Markov's theorem is proved using the combinatorial methods of regular neighbourhoods and handle theory. The proofs coincide for all n≥4. We use Borisov's group with insoluble word problem; it has two generators and twelve relations. The use of this group forms the base for proving the strengthened form of Markov's theorem.
Mayasari, Ruth; Mawengkang, Herman; Gomar Purba, Ronal
2018-02-01
Land revitalization refers to comprehensive renovation of farmland, waterways, roads, forest or villages to improve the quality of plantation, raise the productivity of the plantation area and improve agricultural production conditions and the environment. The objective of sustainable land revitalization planning is to facilitate environmentally, socially, and economically viable land use. Therefore it is reasonable to use a participatory approach to fulfil the plan. This paper addresses a multicriteria decision aid to model such a planning problem; we then develop an interactive approach for solving the problem.
NP-completeness of weakly convex and convex dominating set decision problems
Directory of Open Access Journals (Sweden)
Joanna Raczek
2004-01-01
The convex domination number and the weakly convex domination number are new domination parameters. In this paper we show that the decision problems for convex and weakly convex dominating sets are NP-complete for bipartite and split graphs. Using a modified version of the Warshall algorithm, we can verify in polynomial time whether a given subset of vertices of a graph is convex or weakly convex.
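A distance-based check equivalent to the polynomial-time verification mentioned above can be sketched with Floyd-Warshall: a set S is weakly convex exactly when distances inside the induced subgraph G[S] equal distances in G. This is an illustrative check, not the authors' modified Warshall algorithm.

```python
from itertools import product

INF = float("inf")

def apsp(n, edges, allowed=None):
    """Floyd-Warshall all-pairs shortest-path distances, optionally
    restricted to the subgraph induced by the `allowed` vertices."""
    ok = set(range(n)) if allowed is None else set(allowed)
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        if u in ok and v in ok:
            d[u][v] = d[v][u] = 1
    for k, i, j in product(sorted(ok), repeat=3):  # k varies slowest
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def weakly_convex(n, edges, S):
    """S is weakly convex iff every pair of its vertices is joined by some
    shortest path lying inside S, i.e. distances in G[S] match those in G."""
    dG, dS = apsp(n, edges), apsp(n, edges, allowed=S)
    return all(dS[u][v] == dG[u][v] for u in S for v in S)

# 4-cycle 0-1-2-3-0: {0,1,2} is weakly convex, {0,2} is not.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(weakly_convex(4, edges, [0, 1, 2]), weakly_convex(4, edges, [0, 2]))
```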
Rustamov, Samir; Mustafayev, Elshan; Clements, Mark A.
2018-04-01
The context analysis of customer requests in a natural language call routing problem is investigated in this paper. One of the most significant problems in natural language call routing is comprehending the client's request. To address this issue, hybrid HMM and ANFIS models are examined. Combining different types of models (ANFIS and HMM) can prevent the system from misidentifying user intention in a dialogue system. Based on these models, the hybrid system may be employed in various language and call routing domains, since no lexical or syntactic analysis is used in the classification process.
Directory of Open Access Journals (Sweden)
Rustamov Samir
2018-04-01
The context analysis of customer requests in a natural language call routing problem is investigated in this paper. One of the most significant problems in natural language call routing is comprehending the client's request. To address this issue, hybrid HMM and ANFIS models are examined. Combining different types of models (ANFIS and HMM) can prevent the system from misidentifying user intention in a dialogue system. Based on these models, the hybrid system may be employed in various language and call routing domains, since no lexical or syntactic analysis is used in the classification process.
A Novel Multiperson Game Approach for Linguistic Multicriteria Decision Making Problems
Directory of Open Access Journals (Sweden)
Ching-San Lin
2014-01-01
Game theory is considered an efficient framework for dealing with decision making problems for two players in a competitive environment. In general, the evaluation values of the payoff matrix are expressed by crisp values in a game model. However, many uncertainties and much vagueness should be considered because of the qualitative criteria and the subjective judgment of decision makers in the decision making process. The aim of this study is to develop an effective methodology for solving the payoff matrix with linguistic variables by multiple decision makers in a game model. Based on the linguistic variables, the decision makers can easily express their opinions with respect to criteria for each alternative. By using the linear programming method, we can effectively find the optimal solution of a game matrix in accordance with the combination of strategies of each player. In addition, the expected performance value (EPV) index is defined in this paper to compare the competition ability of each player based on the optimal probability of each strategy combination. A numerical example is then presented to illustrate the computation process of the proposed model. Conclusions and future research are discussed at the end of this paper.
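For crisp payoffs, the game-solving step can be sketched without an LP solver by fictitious play, which is known to converge to equilibrium in zero-sum games (Robinson 1951). The paper itself maps linguistic payoffs to numbers and uses linear programming; the sketch below is an alternative illustration with an invented payoff matrix.

```python
import numpy as np

def fictitious_play(A, iters=20000):
    """Approximate mixed equilibrium of a zero-sum game by fictitious play:
    each player best-responds to the opponent's empirical strategy mixture.
    An illustrative alternative to the LP formulation used in the paper.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iters):
        row_counts[np.argmax(A @ col_counts)] += 1   # row player best response
        col_counts[np.argmin(row_counts @ A)] += 1   # column player best response
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

x, y = fictitious_play([[1, -1], [-1, 1]])  # matching pennies
print(x, y)  # both empirical mixtures approach the uniform strategy (0.5, 0.5)
```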
Breast compression – An exploration of problem solving and decision-making in mammography
International Nuclear Information System (INIS)
Nightingale, J.M.; Murphy, F.J.; Robinson, L.; Newton-Hughes, A.; Hogg, P.
2015-01-01
Objective: Breast compression decreases radiation dose and reduces potential for motion and geometric unsharpness, yet there is variability in applied compression force within and between some centres. This article explores the problem solving process applied to the application of breast compression force from the mammography practitioners' perspective. Methods: A qualitative analysis was undertaken using an existing full data set of transcribed qualitative data collected in a phenomenological study of mammography practitioner values, behaviours and beliefs. The data emerged from focus groups conducted at six NHS breast screening centres in England (participant n = 41), and semi-structured interviews with mammography educators (n = 6). A researcher followed a thematic content analysis process to extract data related to mammography compression problem solving, developing a series of categories, themes and sub-themes. Emerging themes were then peer-validated by two other researchers, and developed into a model of practice. Results: Seven consecutive stages contributed towards compression force problem solving: assessing the request; first impressions; explanations and consent; handling the breast and positioning; applying compression force; final adjustments; feedback. The model captures information gathering, problem framing, problem solving and decision making which inform an ‘ideal’ compression scenario. Behavioural problem solving, heuristics and intuitive decision making are reflected within this model. Conclusion: The application of compression should no longer be considered as one single task within mammography, but is now recognised as a seven stage problem solving continuum. This continuum model is the first to be applied to mammography, and is adaptable and transferable to other radiography practice settings. - Highlights: • Mammography compression should no longer be considered as one single examination task. • A seven stage breast
MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL
Directory of Open Access Journals (Sweden)
Eder Oliveira Abensur
2014-05-01
The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and genetic algorithm concepts, as it considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model using liquidity based on a Markov chain. The non-linear model was tested using genetic algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for developing an efficient and realistic financial portfolio, as it considers many attributes such as risk, return and liquidity.
Mallak, Saed
1996-01-01
Ankara: Department of Mathematics and Institute of Engineering and Sciences of Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaf 29). In this work, we studied the ergodicity of non-stationary Markov chains. We gave several examples with different cases. We proved that given a sequence of Markov chains such that the limit of this sequence is an ergodic Markov chain, then the limit of the combination ...
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Keywords. Markov chain; state space; stationary transition probability; stationary distribution; irreducibility; aperiodicity; stationarity; M-H algorithm; proposal distribution; acceptance probability; image processing; Gibbs sampler.
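The keywords above (proposal distribution, acceptance probability, M-H algorithm) fit together in a minimal random-walk Metropolis sampler, a special case of Metropolis-Hastings in which the symmetric proposal density cancels from the acceptance ratio. The target and tuning constants below are illustrative choices.

```python
import math
import random

def metropolis_hastings(log_target, x0, n, step=1.0, seed=42):
    """Random-walk Metropolis sampler for an unnormalized 1-D density."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)  # symmetric proposal distribution
        # Acceptance probability min(1, target(prop)/target(x)) in log space.
        accept = math.exp(min(0.0, log_target(prop) - log_target(x)))
        if rng.random() < accept:
            x = prop
        chain.append(x)
    return chain

# Target: standard normal, specified only up to a normalizing constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n=20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
print(round(mean, 2), round(var, 2))  # close to the target's mean 0, variance 1
```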
Volchenkov, Dima; Dawin, Jean René
A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and to characterize a composer.
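The first-passage analysis mentioned at the end can be sketched directly: mean first passage times to a target state solve the linear system (I - Q)t = 1, where Q is the transition matrix with the target row and column removed. The three-state "note" chain below is invented for illustration.

```python
import numpy as np

def mean_first_passage(P, target):
    """Expected number of steps to first reach `target` from every other
    state of the chain P: solve (I - Q) t = 1, where Q drops the target
    row and column. Illustrative of first-passage analysis, not the
    paper's code or data.
    """
    P = np.array(P, dtype=float)
    keep = [i for i in range(P.shape[0]) if i != target]
    Q = P[np.ix_(keep, keep)]
    t = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, t))

# A tiny 3-"note" chain.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.1, 0.4, 0.5]]
print(mean_first_passage(P, target=2))  # {0: 4.8, 1: 2.8}
```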
DEFF Research Database (Denmark)
Kohlenbach, Ulrich Wilhelm
2002-01-01
We show that the so-called weak Markov's principle (WMP), which states that every pseudo-positive real number is positive, is underivable in E-HA + AC. Since this system allows one to formalize (at least large parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within the framework of Bishop-style mathematics (a question which has been open for about 20 years). The underivability even holds if the ineffective schema of full comprehension (in all types) for negated formulas (in particular for ∃-free formulas) is added, which allows one to derive the law of excluded middle
Markov Chains For Testing Redundant Software
White, Allan L.; Sjogren, Jon A.
1990-01-01
A preliminary design was developed for a validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. The approach takes into account the inertia of the controlled system, in the sense that it takes more than one failure of the control program to cause the controlled system to fail. The verification procedure consists of two steps: experimentation (numerical simulation) and computation, with a Markov model for each step.
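The inertia assumption, that a single control failure does not bring the system down, can be sketched as a small absorbing Markov model. The two-consecutive-failures rule and the per-cycle failure probability below are illustrative assumptions, not the experiment's actual model.

```python
def failure_probability(p_fail, steps):
    """Probability that the controlled system fails within `steps` control
    cycles, assuming (illustratively) that it fails only after two
    consecutive control-program failures. States: 0 = last cycle ok,
    1 = last cycle failed, 2 = system failed (absorbing).
    """
    dist = [1.0, 0.0, 0.0]
    for _ in range(steps):
        ok, one, dead = dist
        dist = [(ok + one) * (1 - p_fail),  # this cycle succeeds
                ok * p_fail,                # first failure in a row
                dead + one * p_fail]        # second consecutive failure
    return dist[2]

print(failure_probability(0.01, 1000))
```

With a per-cycle failure probability of 0.01, two consecutive failures occur with probability about 1e-4 per cycle, so system failure stays rare even over long horizons.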
Li, Ni; Huai, Wenqing; Wang, Shaodan
2017-08-01
C2 (command and control) has been understood to be a critical military component in meeting an increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework was proposed to specify a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Different from most WTA problem descriptions, here sensors were considered available detection resources, and the relationship constraints between weapons and sensors were also taken into account, which brings the formulation much closer to actual applications. A modified differential evolution (MDE) algorithm was developed to solve this high-dimensional optimisation problem and obtain an optimal assignment plan with high efficiency. In the case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification. A new optimisation solution was also used to solve the WTA problem efficiently and successfully.
The Train Driver Recovery Problem - Solution Method and Decision Support System Framework
DEFF Research Database (Denmark)
Rezanova, Natalia Jurjevna
2009-01-01
the proposed model and solution method are suitable for solving in real-time. Recovery duties are generated as resource constrained paths in duty networks, and the set partitioning problem is solved with a linear programming based branch-and-price algorithm. Dynamic column generation and problem space expansion ... driver decision support system in their operational environment. Besides solving a particular optimization problem, this thesis contributes with a description of the railway planning process, tactical crew scheduling and the real-time dispatching solutions, taking a starting point in DSB S... 1. Rezanova NJ, Ryan DM. The train driver recovery problem–A set partitioning based model and solution method. Computers and Operations Research, in press, 2009. doi: 10.1016/j.cor.2009.03.023. 2. Clausen J, Larsen A, Larsen J, Rezanova NJ. Disruption management in the airline industry–Concepts, models...
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
José Carlos Ortiz-Bayliss
2018-01-01
When solving constraint satisfaction problems (CSPs), it is a common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. However, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many of the existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve early decisions of heuristics. By using it, the performance of heuristics increases.
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems.
Ortiz-Bayliss, José Carlos; Amaya, Ivan; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo
2018-01-01
When solving constraint satisfaction problems (CSPs), it is a common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. However, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many of the existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve early decisions of heuristics. By using it, the performance of heuristics increases.
An application of prospect theory to a SHM-based decision problem
Bolognani, Denise; Verzobio, Andrea; Tonelli, Daniel; Cappello, Carlo; Glisic, Branko; Zonta, Daniele
2017-04-01
Decision making investigates choices that have uncertain consequences and that cannot be completely predicted. Rational behavior may be described by the so-called expected utility theory (EUT), whose aim is to help choose among several solutions so as to maximize the expectation of the consequences. However, Kahneman and Tversky developed an alternative model, called prospect theory (PT), showing that the basic axioms of EUT are violated in several instances. Compared with EUT, PT takes into account irrational behaviors and heuristic biases. It suggests an alternative approach, in which probabilities are replaced by decision weights, which are strictly related to the decision maker's preferences and may change across individuals. In particular, people underestimate the utility of uncertain scenarios compared to outcomes obtained with certainty, and show inconsistent preferences when the same choice is presented in different forms. The goal of this paper is to analyze a real case study involving a decision problem regarding the Streicker Bridge, a pedestrian bridge on the Princeton University campus. By modelling the manager of the bridge with EUT first, and with PT later, we verify the differences between the two approaches and investigate how sensitive the two models are to the unpacking of probabilities, which represents a common cognitive bias in irrational behavior.
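The decision-weight idea of PT can be sketched with the Tversky-Kahneman probability weighting function, which overweights small probabilities and underweights moderate-to-large ones. The parameter value gamma = 0.61 is the one fitted in Tversky and Kahneman's 1992 paper for gains; the case study above may use a different parametrization.

```python
def pt_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function: maps an objective
    probability p to a decision weight, overweighting small p and
    underweighting moderate-to-large p. gamma = 0.61 is their 1992 fitted
    value for gains (an illustrative choice here).
    """
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.5, 0.99):
    print(p, round(pt_weight(p), 3))
```

A 1% chance is weighted as if it were roughly 5%, while a 50% chance is weighted as only about 42%, reproducing the over/underweighting pattern described above.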
Markov Decision Processes Discrete Stochastic Dynamic Programming
Puterman, Martin L
2005-01-01
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet
A Markov Decision Process * EZUGWU, VO
African Journals Online (AJOL)
ADOWIE PERE
strategy of traders who, in the presence of transaction costs, invest in this risky ... designing autonomous intelligent agents for forest fire fighting. ... minimizing energy consumption and maximizing sensing ... The actions allow us to modify the ...
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.
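Multistability, the key contrast with linear Markov chains, can be sketched with a two-state nonlinear Markov process in which the flip probabilities depend on the current density. The specific rates below are invented; this evolution has stationary densities at p = 0, 1/2 and 1, with only the extremes stable.

```python
def evolve(p0, steps=500, c=0.5):
    """Deterministic density evolution of a two-state nonlinear Markov
    process (illustrative example, not from the paper). With p the mass on
    state 1, state 0 flips to 1 with probability c*p**2 and state 1 flips
    to 0 with probability c*(1-p)**2, so the update is
    p' = p + c*p*(1-p)*(2p-1): fixed points at 0, 1/2 (unstable), 1 (stable).
    """
    p = p0
    for _ in range(steps):
        p = p * (1 - c * (1 - p) ** 2) + (1 - p) * c * p ** 2
    return p

# Initial densities on either side of 1/2 converge to different stationary
# distributions, which cannot happen for a finite ergodic linear chain.
print(evolve(0.4), evolve(0.6))
```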
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
2009-01-01
This paper reports on the work carried out supporting a rural community in Denmark under the LEADER+ programme. This is a programme that supports development in particularly vulnerable rural regions of the European countries that are members of the EU. It supports creative and innovative projects that can contribute to long-term and sustainable development in these regions. The main tasks have been the organisation and facilitation of conferences and workshops to structure the problematic situation of identifying and designing innovative projects for the development of the community, and to support decision-making processes related to the agreement on action plans. Learning to design, plan, manage and facilitate conferences and workshops has also been another central activity. The main purpose of these conferences and workshops was not only problem structuring and decision making in connection...
Directory of Open Access Journals (Sweden)
Pingping Chi
2013-03-01
The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, propose an extended TOPSIS method. Firstly, the definition of the INS and its operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
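For reference, the crisp-number TOPSIS baseline that such extensions build on can be sketched in a few lines: vector normalization, weighted distances to the ideal and anti-ideal solutions, and relative closeness. The decision matrix and weights below are invented.

```python
import numpy as np

def topsis(X, w, benefit):
    """Classic crisp TOPSIS (the baseline setting, not the INS extension).

    X          : alternatives x criteria matrix of crisp values
    w          : criteria weights
    benefit[j] : True if criterion j is to be maximized, False if minimized
    Returns relative closeness scores; higher is better.
    """
    X = np.array(X, dtype=float)
    V = X / np.linalg.norm(X, axis=0) * np.array(w)   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)         # distance to ideal
    d_neg = np.linalg.norm(V - worst, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Invented example: three alternatives scored on price (cost), quality, service.
X = [[250, 16, 12],
     [200, 16, 8],
     [300, 32, 16]]
scores = topsis(X, w=[0.3, 0.4, 0.3], benefit=[False, True, True])
print(scores, scores.argmax())
```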
Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems
Directory of Open Access Journals (Sweden)
Hong-yu Zhang
2014-01-01
As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. And interval neutrosophic sets (INSs) have been proposed exactly to address issues with a set of numbers in the real unit interval, not just a specific number. However, there are fewer reliable operations for INSs, as well as INS aggregation operators and decision making methods. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.
Interval neutrosophic sets and their application in multicriteria decision making problems.
Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong
2014-01-01
As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. And interval neutrosophic sets (INSs) have been proposed exactly to address issues with a set of numbers in the real unit interval, not just a specific number. However, there are fewer reliable operations for INSs, as well as the INS aggregation operators and decision making method. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.
International Nuclear Information System (INIS)
Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel
2016-01-01
Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of mass distribution is given for a constant source. • The expression of mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, that we will call mass, which moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for the modelling of open systems whose dynamics has a Markov property.
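The constant-source setting can be sketched numerically: each step, existing mass moves by the transition matrix and the source injects new mass; mass accumulates in the absorbing state while the transient states approach the fixed point m = s(I - Q)^(-1), with Q the transient block. The chain and source below are invented.

```python
import numpy as np

def mass_evolution(P, source, steps):
    """Evolve a conserved quantity ("mass") through a Markov chain with a
    constant external source: each step, existing mass moves according to P
    and `source` is injected. Absorbing states (P[i][i] = 1) accumulate
    mass. Illustrative sketch of the setting, not the paper's formulas.
    """
    P, s = np.array(P, dtype=float), np.array(source, dtype=float)
    m = np.zeros(len(s))
    for _ in range(steps):
        m = m @ P + s
    return m

# Two transient states and one absorbing state; one unit of mass is
# injected at state 0 every step.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.0, 0.0, 1.0]]
m = mass_evolution(P, source=[1, 0, 0], steps=200)
print(m)  # transient mass saturates; the rest has drained into state 2
```

Total mass is conserved plus injection (one unit per step), so after 200 steps the components sum to 200, with the transient states holding s(I - Q)^(-1) = (0.4/0.17, 0.3/0.17).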
Finn, Peter R; Gerst, Kyle; Lake, Allison; Bogg, Tim
2017-09-01
Alcohol use disorders are associated with patterns of impulsive/risky decision making on behavioral economic decision tasks, but little is known about the factors affecting drinking-related decisions. The effects of incentives and disincentives to attend and drink at hypothetical alcohol-related party events as a function of lifetime (LT) alcohol and antisocial problems were examined in a sample of 434 young adults who varied widely in LT alcohol and antisocial problems. Moderate and high disincentives substantially discouraged decisions to attend the party events and were associated with decisions to drink less at the party events. High versus low party incentives were associated with more attendance decisions. LT antisocial problems were associated with being less deterred from attending by moderate and high disincentives. LT alcohol problems were associated with greater attendance at high party incentive contexts. LT alcohol problems were associated with drinking more at the majority of events; however, the results indicate that young adults with high levels of alcohol problems moderate their drinking in response to moderate and high disincentives. Finally, attendance and drinking decisions on this hypothetical task were significantly related to actual drinking practices. The results suggest that antisocial symptoms are associated with a reduced sensitivity to the potential negative consequences of drinking, while alcohol problems are associated with a greater sensitivity to the rewarding aspects of partying. The results also underline the value of directly assessing drinking-related decisions in different hypothetical contexts as well as assessing decisions about attendance at risky drinking events in addition to drinking amount decisions. Copyright © 2017 by the Research Society on Alcoholism.
An interlacing theorem for reversible Markov chains
International Nuclear Information System (INIS)
Grone, Robert; Salamon, Peter; Hoffmann, Karl Heinz
2008-01-01
Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)
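Lumping and the interlacing of eigenvalues can be illustrated numerically. The three-state chain below is our own toy example, and the lumped matrix is built with the usual stationary-weighted definition; this only demonstrates the phenomenon, not the paper's theorem in full generality.

```python
import numpy as np

# A reversible (birth-death) chain on three states, with stationary pi.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])

# Lump microstates {0, 1} into one mesoscopic state; keep {2} separate.
blocks = [[0, 1], [2]]
Q = np.zeros((2, 2))
for a, A in enumerate(blocks):
    for b, B in enumerate(blocks):
        # pi-weighted average of the block-to-block transition mass.
        Q[a, b] = pi[A] @ P[np.ix_(A, B)].sum(axis=1) / pi[A].sum()

lam = np.sort(np.linalg.eigvals(P).real)[::-1]   # eigenvalues 1, 0.5, 0
mu = np.sort(np.linalg.eigvals(Q).real)[::-1]    # eigenvalues 1, 1/3
eps = 1e-9
# Interlacing: each lumped eigenvalue lies between eigenvalues of P.
print(lam[0] + eps >= mu[0] >= lam[1] - eps,
      lam[1] + eps >= mu[1] >= lam[2] - eps)     # True True
```

Here the lumped eigenvalue 1/3 indeed sits between the original eigenvalues 0.5 and 0, consistent with the explicit bounds the paper proves.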
An interlacing theorem for reversible Markov chains
Energy Technology Data Exchange (ETDEWEB)
Grone, Robert; Salamon, Peter [Department of Mathematics and Statistics, San Diego State University, San Diego, CA 92182-7720 (United States); Hoffmann, Karl Heinz [Institut fuer Physik, Technische Universitaet Chemnitz, D-09107 Chemnitz (Germany)
2008-05-30
Reversible Markov chains are an indispensable tool in the modeling of a vast class of physical, chemical, biological and statistical problems. Examples include the master equation descriptions of relaxing physical systems, stochastic optimization algorithms such as simulated annealing, chemical dynamics of protein folding and Markov chain Monte Carlo statistical estimation. Very often the large size of the state spaces requires the coarse graining or lumping of microstates into fewer mesoscopic states, and a question of utmost importance for the validity of the physical model is how the eigenvalues of the corresponding stochastic matrix change under this operation. In this paper we prove an interlacing theorem which gives explicit bounds on the eigenvalues of the lumped stochastic matrix. (fast track communication)
Stochastic Dynamics through Hierarchically Embedded Markov Chains.
Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M
2017-02-03
Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time-invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects, such as mutations in evolutionary dynamics and a random exploration of choices in social systems, including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
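For intuition about stationary distributions of population chains, consider the simplest tractable case: a one-dimensional birth-death chain, whose stationary law follows in closed form from detailed balance. This sketch (with illustrative rates of our choosing, not the paper's hierarchy of approximations) cross-checks the product formula against powers of the full transition matrix.

```python
import numpy as np

N = 50                                                      # population size (toy)
up = np.array([0.3 * (N - k) / N for k in range((N + 1))])  # birth probability
down = np.array([0.2 * k / N for k in range(N + 1)])        # death probability

# Detailed balance pi_k * up_k = pi_{k+1} * down_{k+1} gives a product
# formula for the stationary distribution.
w = np.ones(N + 1)
for k in range(1, N + 1):
    w[k] = w[k - 1] * up[k - 1] / down[k]
pi = w / w.sum()

# Cross-check against powers of the full transition matrix.
P = np.diag(1 - up - down) + np.diag(up[:-1], 1) + np.diag(down[1:], -1)
pi_power = np.linalg.matrix_power(P, 1 << 15)[0]   # any row tends to pi
print(np.allclose(pi, pi_power, atol=1e-8))        # True
```

The distribution peaks where birth and death rates balance (here near k = 30); for chains without this one-dimensional structure, no such closed form exists, which is where hierarchical embedding methods earn their keep.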
Problems of making decisions with account of risk and safety factors
Energy Technology Data Exchange (ETDEWEB)
Larichev, O I
1987-01-01
New trends in making decisions on accidents when using large-scale technologies (NPPs, chemical plants, etc.) are considered. Three main directions of investigation in this field are distinguished. The first consists in risk measurement (its perception by people, ways of its quantitative determination). The second consists in increasing the safety of large-scale production systems; here the following questions are considered: risk assessment (the statement of safety standards), site selection for new systems, man-machine interaction problems, development of safer technologies, and cost-benefit safety analysis. The third direction is connected with the problem of accidents and their analysis; it includes considering the causes and course of accident development, preparing for possible accidents, monitoring under extreme conditions, and accident effect analysis.
Modecki, Kathryn L; Zimmer-Gembeck, Melanie J; Guerra, Nancy
2017-03-01
Research on executive control during the teenage years points to shortfalls in emotion regulation, coping, and decision making as three linked capabilities associated with youth's externalizing behavior problems. Evidence gleaned from a detailed review of the literature makes clear that improvement of all three capabilities is critical to help young people better navigate challenges and prevent or reduce externalizing and related problems. Moreover, interventions can successfully improve these three capabilities and have been found to produce behavioral improvements with real-world significance. Examples of how successful interventions remediate more than one of these capabilities are provided. Future directions in research and practice are also proposed to move the field toward the development of more comprehensive programs for adolescents to foster their integration. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Problems of making decisions with account of risk and safety factors
International Nuclear Information System (INIS)
Larichev, O.I.
1987-01-01
New trends in making decisions on accidents when using large-scale technologies (NPPs, chemical plants, etc.) are considered. Three main directions of investigation in this field are distinguished. The first consists in risk measurement (its perception by people, ways of its quantitative determination). The second consists in increasing the safety of large-scale production systems; here the following questions are considered: risk assessment (the statement of safety standards), site selection for new systems, man-machine interaction problems, development of safer technologies, and cost-benefit safety analysis. The third direction is connected with the problem of accidents and their analysis; it includes considering the causes and course of accident development, preparing for possible accidents, monitoring under extreme conditions, and accident effect analysis.
Martingales and Markov chains solved exercises and elements of theory
Baldi, Paolo; Priouret, Pierre
2002-01-01
CONDITIONAL EXPECTATIONS: Introduction; Definition and First Properties; Conditional Expectations and Conditional Laws; Exercises; Solutions. STOCHASTIC PROCESSES: General Facts; Stopping Times; Exercises; Solutions. MARTINGALES: First Definitions; First Properties; The Stopping Theorem; Maximal Inequalities; Square Integral Martingales; Convergence Theorems; Regular Martingales; Exercises; Problems; Solutions. MARKOV CHAINS: Transition Matrices, Markov Chains; Construction and Existence; Computations on the Canonical Chain; Potential Operators; Passage Problems; Recurrence, Transience; Recurrent Irreducible Chains; Periodicity; Exercises; Problems; Solutions.
Heidari, Mohammad; Shahbazi, Sara
2016-01-01
Background: The aim of this study was to determine the effect of problem-solving training on decision-making skill and critical thinking in emergency medical personnel. Materials and Methods: This experimental study was performed on 95 emergency medical personnel divided into a control group (48) and an experimental group (47). A short problem-solving course (8 sessions of 2 h during the term) was then given to the experimental group. Data were gathered with a demographic questionnaire, a researcher-made decision-making questionnaire and the California Critical Thinking Skills questionnaire, and were analyzed using SPSS software. Results: The findings revealed that decision-making and critical thinking scores in emergency medical personnel were low, and that the problem-solving course positively affected the personnel's decision-making skill and critical thinking after the educational program (P problem-solving in various emergency medicine domains such as education, research, and management is recommended. PMID:28149823
Rate estimation in partially observed Markov jump processes with measurement errors
Amrein, Michael; Kuensch, Hans R.
2010-01-01
We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...
Regeneration and general Markov chains
Directory of Open Access Journals (Sweden)
Vladimir V. Kalashnikov
1994-01-01
Full Text Available Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide-sense regenerative process with the help of the splitting construction.
Quadratic Variation by Markov Chains
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Horel, Guillaume
We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analyti...
Automated generation of partial Markov chain from high level descriptions
International Nuclear Information System (INIS)
Brameret, P.-A.; Rauzy, A.; Roussel, J.-M.
2015-01-01
We propose an algorithm to generate partial Markov chains from high level implicit descriptions, namely AltaRica models. This algorithm relies on two components. First, a variation on Dijkstra's algorithm to compute shortest paths in a graph. Second, the definition of a notion of distance to select which states must be kept and which can be safely discarded. The proposed method solves two problems at once. First, it avoids a manual construction of Markov chains, which is both tedious and error prone. Second, at the price of acceptable approximations, it makes it possible to push back dramatically the exponential blow-up of the size of the resulting chains. We report experimental results that show the efficiency of the proposed approach. - Highlights: • We generate Markov chains from a higher level safety modeling language (AltaRica). • We use a variation on Dijkstra's algorithm to generate partial Markov chains. • Hence we solve two problems: the first is the tedious manual construction of Markov chains. • The second is the blow-up of the size of the chains, avoided at the cost of acceptable approximations. • The experimental results highlight the efficiency of the method.
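The distance-based truncation can be sketched generically. In this sketch (our own simplification, not AltaRica): states are explored with Dijkstra's algorithm over edge weights −log p, so high-probability states count as "close", and any transition leaving the kept set is redirected to a single aggregate sink. The degradation model and the distance budget are illustrative assumptions.

```python
import heapq
import math

def partial_chain(initial, successors, max_dist):
    """Keep states whose Dijkstra distance from `initial` (edge weight
    -log p, so likely states are 'close') is within max_dist; redirect
    transitions that leave the kept set to an aggregate SINK state."""
    dist = {initial: 0.0}
    heap = [(0.0, initial)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf) or d > max_dist:
            continue                       # stale entry or beyond budget
        for v, p in successors(u):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    kept = {u for u, d in dist.items() if d <= max_dist}
    return {u: [(v if v in kept else "SINK", p) for v, p in successors(u)]
            for u in kept}

# Toy degradation model: state = number of failed components (0..3);
# each step one more component fails with probability 0.1.
def successors(k):
    return [(k + 1, 0.1), (k, 0.9)] if k < 3 else [(k, 1.0)]

chain = partial_chain(0, successors, max_dist=5.0)
print(sorted(chain))   # [0, 1, 2] -- state 3 is folded into SINK
```

The probability mass routed to the sink bounds the approximation error, which is how a distance budget trades chain size against accuracy.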
Portfolio Optimization in a Semi-Markov Modulated Market
International Nuclear Information System (INIS)
Ghosh, Mrinal K.; Goswami, Anindya; Kumar, Suresh K.
2009-01-01
We address a portfolio optimization problem in a semi-Markov modulated market. We study both the terminal expected utility optimization on a finite time horizon and the risk-sensitive portfolio optimization on finite and infinite time horizons. We obtain optimal portfolios in the relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for the finite horizon problem.
Adaptive Markov Chain Monte Carlo
Jadoon, Khan
2016-08-08
A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agriculture field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated using synthetic data. Our results show that in the scenario of non-saline soil, the layer thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to the field measurements in a drip irrigation system demonstrates that the parameters of the model can be well estimated for the saline soil as compared to the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
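A minimal sketch of the adaptive idea, divorced from the EMI forward model: random-walk Metropolis whose proposal scale is tuned toward a target acceptance rate with a decaying gain. The Gaussian stand-in posterior and the 0.44 target rate are placeholder choices of ours, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):                 # stand-in for the real posterior
    return -0.5 * x * x          # log-density of N(0, 1), up to a constant

x, step, accepted, samples = 0.0, 1.0, 0, []
for i in range(1, 50001):
    prop = x + step * rng.standard_normal()          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x, accepted = prop, accepted + 1             # accept the move
    samples.append(x)
    # Adapt: nudge the proposal scale toward ~44% acceptance, with a
    # decaying gain so the adaptation vanishes asymptotically.
    step *= np.exp((accepted / i - 0.44) / np.sqrt(i))

samples = np.array(samples[10000:])                  # discard burn-in
print(samples.mean(), samples.std())                 # near 0 and 1
```

The decaying gain (a Robbins-Monro-style schedule) is what keeps the adapted chain targeting the correct posterior; a fixed gain would break detailed balance permanently.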
Intelligent Search Method Based ACO Techniques for a Multistage Decision Problem EDP/LFP
Directory of Open Access Journals (Sweden)
Mostefa RAHLI
2006-07-01
the algorithm, giving it a rate that is more or less justifiable. In operational research, this subject is known under the name of CPO [14] (combinatory problem optimization). The choice of a numerical method to use for a merged case study and calculation of the LFP/Fitting/EDP [7, 8, 9, 10, 18, 19, 20] (the theoretical form of the problem) conditions the final decision to adopt and a strategy of optimal production (which is the practical form of the problem and the final task most wanted). Each method is constrained by: · the algorithm complexity; · in an application gathering all calculations, the number of uses of the method compared to the total number of later issues; · the maximum number of iterations for a given use; · the maximum iteration count allowed for this kind of algorithm; · the limitations of the algorithm, such as applicability of the method (algorithm adapted or not to the problem), whether the problem is constrained or not, problem dimension or order N (N ≤ Nmax), and the algorithm stability. It is well known that for an approximate calculation method, the propagation of errors strongly conditions the need to make an adequate choice of method, and whether it can be adopted compared to others for the same area. The larger the number of elementary operations, the more the final result lacks precision, especially if the finality of the study is a responsible decision to make and the satisfaction of multiple constraints and conditions. Our study proposes an inference-based solution (AI) with the use of the ACO technique (Ant Colony Optimization).
Energy Technology Data Exchange (ETDEWEB)
Watson, S R; Hayward, G M
1982-01-01
In our interim report a general review was given of the characteristics of three formal methods for aiding decision making in relation to the general problems posed in radioactive waste management. In this report, consideration is given to examples of the sort of proposals that the Environment Departments may be asked to review, and two of the formal decision aids (cost-benefit analysis and decision analysis) which could be used to assist these tasks are discussed. The example decisions chosen are the siting of an underground repository for intermediate-level wastes and the choice of a waste management procedure for an intermediate-level waste stream.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Systat Software Asia-Pacific Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes ... In Part 4, we discuss some applications of the Markov ... one can construct the joint probability distribution of.
Reviving Markov processes and applications
International Nuclear Information System (INIS)
Cai, H.
1988-01-01
In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). Applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process and the first entrance decomposition (taboo probability).
Confluence reduction for Markov automata
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models
Confluence Reduction for Markov Automata
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Braberman, Victor; Fribourg, Laurent
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models
Dorazio, R.M.; Johnson, F.A.
2003-01-01
Bayesian inference and decision theory may be used in the solution of relatively complex problems of natural resource management, owing to recent advances in statistical theory and computing. In particular, Markov chain Monte Carlo algorithms provide a computational framework for fitting models of adequate complexity and for evaluating the expected consequences of alternative management actions. We illustrate these features using an example based on management of waterfowl habitat.
Pawlikowski, Mirko; Brand, Matthias
2011-08-15
The dysfunctional behavior of excessive Internet gamers, such as preferring the immediate reward (to play World of Warcraft) despite the negative long-term consequences may be comparable with the dysfunctional behavior in substance abusers or individuals with behavioral addictions, e.g. pathological gambling. In these disorders, general decision-making deficits have been demonstrated. Hence, the aim of the present work was to examine decision-making competences of excessive World of Warcraft players. Nineteen excessive Internet gamers (EIG) and a control group (CG) consisting of 19 non-gamers were compared with respect to decision-making abilities. The Game of Dice Task (GDT) was applied to measure decision-making under risky conditions. Furthermore psychological-psychiatric symptoms were assessed in both groups. The EIG showed a reduced decision-making ability in the GDT. Furthermore the EIG group showed a higher psychological-psychiatric symptomatology in contrast to the CG. The results indicate that the reduced decision-making ability of EIG is comparable with patients with other forms of behavioral addiction (e.g. pathological gambling), impulse control disorders or substance abusers. Thus, these results suggest that excessive Internet gaming may be based on a myopia for the future, meaning that EIG prefer to play World of Warcraft despite the negative long-term consequences in social or work domains of life. 2011 Elsevier Ltd. All rights reserved.
Qi, Xiao-Wen; Zhang, Jun-Ling; Zhao, Shu-Ping; Liang, Chang-Yong
2017-10-02
In order to be prepared against potential balance-breaking risks affecting economic development, more and more countries have recognized emergency response solutions evaluation (ERSE) as an indispensable activity in their governance of sustainable development. Traditional multiple criteria group decision making (MCGDM) approaches to ERSE face the simultaneous challenges of decision hesitancy and prioritization relations among assessing criteria, owing to the complexity of practical ERSE problems. Therefore, aiming at the special type of ERSE problems that exhibit these two characteristics, we investigate effective MCGDM approaches by employing the interval-valued dual hesitant fuzzy set (IVDHFS) to comprehensively depict decision hesitancy. To exploit decision information embedded in prioritization relations among criteria, we first define a fuzzy entropy measure for IVDHFS so that its derivative decision models can avoid the potential information distortion of models based on classic IVDHFS distance measures with a subjective supplementing mechanism; further, based on the defined entropy measure, we develop two fundamental prioritized operators for IVDHFS by extending Yager's prioritized operators. Furthermore, on the strength of the above methods, we construct two hesitant fuzzy MCGDM approaches to tackle complex scenarios with or without known weights for decision makers, respectively. Finally, case studies have been conducted to show the effectiveness and practicality of our proposed approaches.
Directory of Open Access Journals (Sweden)
Xiao-Wen Qi
2017-10-01
Full Text Available In order to be prepared against potential balance-breaking risks affecting economic development, more and more countries have recognized emergency response solutions evaluation (ERSE) as an indispensable activity in their governance of sustainable development. Traditional multiple criteria group decision making (MCGDM) approaches to ERSE face the simultaneous challenges of decision hesitancy and prioritization relations among assessing criteria, owing to the complexity of practical ERSE problems. Therefore, aiming at the special type of ERSE problems that exhibit these two characteristics, we investigate effective MCGDM approaches by employing the interval-valued dual hesitant fuzzy set (IVDHFS) to comprehensively depict decision hesitancy. To exploit decision information embedded in prioritization relations among criteria, we first define a fuzzy entropy measure for IVDHFS so that its derivative decision models can avoid the potential information distortion of models based on classic IVDHFS distance measures with a subjective supplementing mechanism; further, based on the defined entropy measure, we develop two fundamental prioritized operators for IVDHFS by extending Yager's prioritized operators. Furthermore, on the strength of the above methods, we construct two hesitant fuzzy MCGDM approaches to tackle complex scenarios with or without known weights for decision makers, respectively. Finally, case studies have been conducted to show the effectiveness and practicality of our proposed approaches.
Hidden Markov processes theory and applications to biology
Vidyasagar, M
2014-01-01
This book explores important aspects of Markov and hidden Markov processes and the applications of these ideas to various problems in computational biology. The book starts from first principles, so that no previous knowledge of probability is necessary. However, the work is rigorous and mathematical, making it useful to engineers and mathematicians, even those not interested in biological applications. A range of exercises is provided, including drills to familiarize the reader with concepts and more advanced problems that require deep thinking about the theory. Biological applications are t
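For a flavor of the book's starting material, here is a forward-algorithm sketch for a two-state hidden Markov model, cross-checked against brute-force summation over all hidden paths. The matrices and observation sequence are arbitrary numbers of ours, not an example from the book.

```python
import itertools
import numpy as np

A = np.array([[0.7, 0.3],      # transition matrix (2 hidden states)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission matrix (2 observable symbols)
              [0.2, 0.8]])
init = np.array([0.5, 0.5])    # initial state distribution
obs = [0, 1, 1, 0]

# Forward algorithm: alpha_t(i) = P(o_1..o_t, state_t = i).
alpha = init * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
likelihood = alpha.sum()

# Brute force: sum over all hidden paths (exponential; only for checking).
brute = 0.0
for path in itertools.product(range(2), repeat=len(obs)):
    p = init[path[0]] * B[path[0], obs[0]]
    for t in range(1, len(obs)):
        p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
    brute += p
print(np.isclose(likelihood, brute))  # True
```

The recursion costs O(T·n²) versus O(n^T) for enumeration, which is the point of the forward algorithm.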
Fatrias, D.; Kamil, I.; Meilani, D.
2018-03-01
Coordinating business operations with suppliers becomes increasingly important to survive and prosper in a dynamic business environment. A good partnership with suppliers not only increases efficiency but also strengthens corporate competitiveness. With such concerns in mind, this study aims to develop a practical approach to multi-criteria supplier evaluation using the combined methods of the Taguchi loss function (TLF), the best-worst method (BWM) and VIse Kriterijumska Optimizacija kompromisno Resenje (VIKOR). A new framework of an integrative approach adopting these methods is our main contribution to the supplier-evaluation literature. In this integrated approach, a compromised supplier ranking list based on the loss scores of suppliers is obtained using the efficient steps of a pairwise-comparison-based decision-making process. Implementation on a case problem with real data from the crumb rubber industry shows the usefulness of the proposed approach. Finally, a suitable managerial implication is presented.
An Integer Programming Model for Multi-Echelon Supply Chain Decision Problem Considering Inventories
Harahap, Amin; Mawengkang, Herman; Siswadi; Effendi, Syahril
2018-01-01
In this paper we address a problem that is of significance to industry, namely the optimal decision of a multi-echelon supply chain and the associated inventory systems. By using the guaranteed-service approach to model the multi-echelon inventory system, we develop a mixed integer programming model to simultaneously optimize the transportation, inventory and network structure of a multi-echelon supply chain. To solve the model we develop a direct search approach using a strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points.
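On instances this small, integer supply chain decisions can be checked by brute force. The sketch below is a deliberately tiny stand-in (our own illustrative numbers, not the paper's model or its direct-search method): binary open/close decisions for two warehouses plus integer shipment quantities, subject to demand-coverage and capacity constraints.

```python
import itertools

# Toy 2-warehouse, 1-retailer instance (illustrative numbers only).
fixed = [10.0, 14.0]     # cost of opening each warehouse
unit = [2.0, 1.5]        # per-unit shipping cost from each warehouse
cap = [6, 8]             # warehouse capacities
demand = 9
holding = 0.3            # cost per unit shipped in excess of demand

best = None
for open_w in itertools.product([0, 1], repeat=2):
    max_q = [cap[i] * open_w[i] for i in range(2)]   # closed => ship 0
    for q in itertools.product(range(max_q[0] + 1), range(max_q[1] + 1)):
        if sum(q) < demand:
            continue                                 # demand must be covered
        cost = sum(fixed[i] * open_w[i] + unit[i] * q[i] for i in range(2))
        cost += holding * (sum(q) - demand)
        if best is None or cost < best[0]:
            best = (cost, open_w, q)

print(best)  # (38.0, (1, 1), (1, 8)): open both, ship 1 + 8 units
```

Neither warehouse alone can cover the demand of 9, so both open; enumeration like this explodes combinatorially, which is why the paper resorts to a MIP formulation with a tailored search strategy.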
Is problem-based learning an ideal format for developing ethical decision skills?
Directory of Open Access Journals (Sweden)
Peter H. Harasym
2013-10-01
Full Text Available Ethical decision making is a complex process, which involves the interaction of knowledge, skills, and attitude. To enhance the teaching and learning of ethics reasoning, multiple teaching strategies have to be applied. A medical ethical reasoning (MER) model served as a framework for the development of ethics reasoning and its suggested instructional strategies. Problem-based learning (PBL), used to facilitate students' critical thinking, self-directed learning, collaboration, and communication skills, has been considered effective in ethics education, especially when incorporated with experiential learning. Unlike lecturing, which mainly disseminates knowledge and activates the left brain, PBL encourages "whole-brain" learning. However, PBL has several disadvantages, such as its inefficiency, the lack of adequately trained preceptors, and the in-depth, silo learning within a relatively small number of cases. Because each school tends to utilize PBL in different ways, whether in the curriculum design or the learning strategy, it is important to maximize the advantages of a PBL session; PBL then becomes an ideal format for refining students' ethical decisions and behaviors.
Energy Technology Data Exchange (ETDEWEB)
1979-01-01
The use of probabilistic, and especially Bayesian, methods is explained. The concepts of risk and decision, and probability and frequency are elucidated. The mechanics of probability and probabilistic calculations is discussed. The use of the method for particular problems, such as the frequency of aircraft crashes at a specified nuclear reactor site, is illustrated. 64 figures, 20 tables. (RWR)
International Nuclear Information System (INIS)
1979-01-01
The use of probabilistic, and especially Bayesian, methods is explained. The concepts of risk and decision, and probability and frequency are elucidated. The mechanics of probability and probabilistic calculations is discussed. The use of the method for particular problems, such as the frequency of aircraft crashes at a specified nuclear reactor site, is illustrated. 64 figures, 20 tables
Flouri, E; Ruddy, A; Midouhas, E
2017-04-01
Maternal depression may affect the emotional/behavioural outcomes of children with normal neurocognitive functioning less severely than it does those without. To guide prevention and intervention efforts, research must specify which aspects of a child's cognitive functioning both moderate the effect of maternal depression and are amenable to change. Working memory and decision making may be amenable to change and are so far unexplored as moderators of this effect. Our sample was 17 160 Millennium Cohort Study children. We analysed trajectories of externalizing (conduct and hyperactivity) and internalizing (emotional and peer) problems, measured with the Strengths and Difficulties Questionnaire at the ages 3, 5, 7 and 11 years, using growth curve models. We characterized maternal depression, also time-varying at these ages, by a high score on the K6. Working memory was measured with the Cambridge Neuropsychological Test Automated Battery Spatial Working Memory Task, and decision making (risk taking and quality of decision making) with the Cambridge Gambling Task, both at age 11 years. Maternal depression predicted both the level and the growth of problems. Risk taking and poor-quality decision making were related positively to externalizing and non-significantly to internalizing problems. Poor working memory was related to both problem types. Neither decision making nor working memory explained the effect of maternal depression on child internalizing/externalizing problems. Importantly, risk taking amplified the effect of maternal depression on internalizing problems, and poor working memory that on internalizing and conduct problems. Impaired decision making and working memory in children amplify the adverse effect of maternal depression on, particularly, internalizing problems.
Evolving the structure of hidden Markov Models
DEFF Research Database (Denmark)
won, K. J.; Prugel-Bennett, A.; Krogh, A.
2006-01-01
A genetic algorithm (GA) is proposed for finding the structure of hidden Markov Models (HMMs) used for biological sequence analysis. The GA is designed to preserve biologically meaningful building blocks. The search through the space of HMM structures is combined with optimization of the emission and transition probabilities using the classic Baum-Welch algorithm. The system is tested on the problem of finding the promoter and coding region of C. jejuni. The resulting HMM has a superior discrimination ability to a handcrafted model that has been published in the literature.
Genetic Algorithms Principles Towards Hidden Markov Model
Directory of Open Access Journals (Sweden)
Nabil M. Hewahi
2011-10-01
Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem appears when experts assign probability values for an HMM: they use only some limited inputs, and the assigned probability values might not be accurate enough to serve in other cases related to the same domain. We introduce an approach based on GAs to find out the suitable probability values for the HMM, so that it is mostly correct in more cases than those used to assign the probability values.
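The idea of searching probability values with a GA can be sketched compactly. Below, an elitist (mu + lambda) loop mutates the emission probabilities of a two-state HMM and selects by forward-algorithm likelihood on a toy observation sequence; the data, the fixed transition matrix, and all GA settings are illustrative assumptions of ours, not the paper's setup.

```python
import random

random.seed(1)
obs = [0, 0, 1, 0, 0, 1, 1, 0]            # toy binary observation sequence
A = [[0.8, 0.2], [0.2, 0.8]]              # fixed transition matrix

def likelihood(e):
    """Forward algorithm; e[i] = P(emit symbol 1 | hidden state i)."""
    alpha = [0.5 * (e[i] if obs[0] else 1 - e[i]) for i in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(2))
                 * (e[i] if o else 1 - e[i]) for i in range(2)]
    return sum(alpha)

def mutate(e):
    # Gaussian perturbation, clamped to keep probabilities valid.
    return [min(0.99, max(0.01, x + random.gauss(0, 0.1))) for x in e]

# Elitist (mu + lambda) loop over candidate emission probabilities.
pop = [[random.random(), random.random()] for _ in range(20)]
for _ in range(100):
    pop += [mutate(e) for e in pop]
    pop = sorted(pop, key=likelihood, reverse=True)[:20]

best = pop[0]
print(likelihood(best) > likelihood([0.5, 0.5]))  # beats the flat guess
```

Because fitness is just a likelihood evaluation, the same loop works whether the chromosomes encode emission probabilities, transition probabilities, or both.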
Understanding common risk analysis problems leads to better E and P decisions
International Nuclear Information System (INIS)
Smith, M.B.
1994-01-01
Many petroleum geologists, engineers and managers who have been introduced to petroleum risk analysis doubt that probability theory actually works in practice. Discovery probability estimates for exploration prospects always seem to be more optimistic than after-the-fact results. In general, probability estimates seem to be plucked from the air without any objective basis. Because of subtleties in probability theory, errors may result when applying risk analysis to real problems. Four examples have been selected to illustrate how misunderstandings in applying risk analysis may lead to incorrect decisions. Examples 1 and 2 show how falsely assuming statistical independence distorts probability calculations. Example 3 discusses problems with related variables using the Monte Carlo method. Example 4 shows how subsurface data yield a probability value that is superior to a simple statistical estimate. The potential mistakes in these examples would go unnoticed in analyses in most companies. Lack of objectivity and flawed theory would be blamed when the fault actually lies with incorrect application of basic probability principles.
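The independence pitfall can be made concrete with a toy calculation (the numbers are ours, not the article's): two prospects whose success both depend on a shared regional source rock. Multiplying the marginal probabilities understates the joint success probability, here by a factor of two.

```python
# Two prospects share a common geological risk factor: a regional source
# rock present with probability 0.5.  Given the source rock, each prospect
# succeeds independently with probability 0.6; without it, both fail.
# (Illustrative numbers, not taken from the article.)
p_source = 0.5
p_ok = 0.6

p_a = p_source * p_ok             # marginal success chance of one prospect
p_both_naive = p_a * p_a          # wrong: treats the prospects as independent
p_both = p_source * p_ok * p_ok   # right: condition on the shared factor

print(round(p_a, 3), round(p_both_naive, 3), round(p_both, 3))  # 0.3 0.09 0.18
```

Success at one prospect is good news about the other (it raises the odds that the source rock exists), which is exactly the correlation the naive product throws away.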
Crosby, R; Milhausen, R; Sanders, S A; Graham, C A; Yarber, W L
2008-06-01
This exploratory study compared the frequency of condom use errors and problems between men reporting that condom use for penile-vaginal sex was a mutual decision compared with men making the decision unilaterally. Nearly 2000 people completed a web-based questionnaire. A sub-sample of 660 men reporting that they last used a condom for penile-vaginal sex (within the past three months) was analysed. Nine condom use errors/problems were assessed. Multivariate analyses controlled for men's age, marital status, and level of experience using condoms. Men's unilateral decision-making was associated with increased odds of removing condoms before sex ended (adjusted odds ratio (AOR) 2.51, p = 0.002), breakage (AOR 3.90, p = 0.037), and slippage during withdrawal (AOR 2.04, p = 0.019). Men's self-reported level of experience using condoms was significantly associated with seven out of nine errors/problems, with those indicating less experience consistently reporting more errors/problems. Findings suggest that female involvement in the decision to use condoms for penile-vaginal sex may be partly protective against some condom errors/problems. Men's self-reported level of experience using condoms may be a useful indicator of the need for education designed to promote the correct use of condoms. Education programmes may benefit men by urging them to involve their female partner in condom use decisions.
An analytical study of the Q(s, S) policy applied to the joint replenishment problem
DEFF Research Database (Denmark)
Nielsen, Christina; Larsen, Christian
2005-01-01
be considered supply chain management problems. The paper uses Markov decision theory to work out an analytical solution procedure to evaluate the costs of a particular Q(s,S) policy, and thereby a method for computing the optimal Q(s,S) policy, under the assumption that demands follow a Poisson Process...
An analytical study of the Q(s,S) policy applied on the joint replenishment problem
DEFF Research Database (Denmark)
Nielsen, Christina; Larsen, Christian
2002-01-01
be considered supply chain management problems. The paper uses Markov decision theory to work out an analytical solution procedure to evaluate the costs of a particular Q(s,S) policy, and thereby a method to compute the optimal Q(s,S) policy, under the assumption that demands follow a Poisson process...
International Nuclear Information System (INIS)
Gitinavard, Hossein; Mousavi, S. Meysam; Vahdani, Behnam
2017-01-01
In numerous real-world energy decision problems, decision makers often encounter complex environments in which imprecise data and uncertain information make it difficult to reach an appropriate decision. In this paper, a new soft computing group decision-making approach is introduced, based on a novel compromise ranking method and interval-valued hesitant fuzzy sets (IVHFSs), for energy decision-making problems under multiple criteria. In the proposed approach, the assessment information is provided by energy experts or decision makers as interval-valued hesitant fuzzy elements under incomplete criteria weights. A new ranking index based on the interval-valued hesitant fuzzy Hamming distance measure is presented to prioritize energy candidates, and criteria weights are computed with an extended maximizing deviation method that takes account of the experts' preference judgments about the relative importance of each criterion. Also, a decision-making trial and evaluation laboratory (DEMATEL) method is extended to an IVHF environment to compute the interdependencies between and within the selected criteria in the hierarchical structure. To demonstrate the applicability of the presented approach, a case study and a practical example are provided, covering hierarchical structure and criteria interdependencies, for renewable energy and energy policy selection problems. The computational results are compared with a fuzzy decision-making method from the recent literature on several comparison parameters to show the advantages and constraints of the proposed approach. Finally, a sensitivity analysis is carried out to show the effect of different criteria weights on the ranking results, and thus the robustness or sensitivity of the proposed soft computing approach with respect to the relative importance of criteria. - Highlights: • Introducing a novel interval-valued hesitant fuzzy compromise ranking method. • Presenting
Directory of Open Access Journals (Sweden)
Hassan Hashemi
2018-05-01
Full Text Available This study introduces a new decision model with multi-criteria analysis by a group of decision makers (DMs) with intuitionistic fuzzy sets (IFSs). The presented model relies on a new integration of IFS theory, ELECTRE and VIKOR along with grey relational analysis (GRA). To portray uncertain real-life situations and handle complex decision problems, a multi-criteria group decision-making (MCGDM) model with completely unknown importance weights is introduced in an IF setting. A weighting method based on entropy and IFSs is developed to obtain the weights of DMs and evaluation factors, and a new ranking approach is provided for prioritizing the alternatives. To show the applicability of the presented decision model, an industrial application for assessing contractors in the construction industry, drawn from the recent literature, is given and discussed.
Maximizing Entropy over Markov Processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....
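For context on the quantity being maximised, here is a sketch of the entropy rate of one fully specified Markov chain (the 2-state transition matrix is a toy assumption, not from the paper); an Interval Markov Chain constrains such a matrix entry-wise, and the paper's algorithms search the allowed implementations for the one with the largest rate:

```python
import math

# A fully specified 2-state Markov chain (illustrative values).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Stationary distribution by power iteration.
mu = [0.5, 0.5]
for _ in range(1000):
    mu = [sum(mu[s] * P[s][t] for s in range(2)) for t in range(2)]

def entropy_rate(P, mu):
    # H = -sum_s mu(s) sum_t P(s,t) log2 P(s,t)
    h = 0.0
    for s, row in enumerate(P):
        for p in row:
            if p > 0:
                h -= mu[s] * p * math.log2(p)
    return h

print(entropy_rate(P, mu))   # bits of information produced per step
```

For this chain the stationary distribution is (5/6, 1/6) and the entropy rate is about 0.56 bits per step; an attacker-facing implementation that maximises this rate over the interval constraints attains the channel capacity the paper bounds.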
Maximizing entropy over Markov processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
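An analog (no variance reduction) version of such a simulation can be sketched as follows; the failure and repair rates, mission time, and two-component parallel structure are illustrative assumptions, and the paper's forced transitions and failure biasing would reweight these walks to cut the variance:

```python
import random

random.seed(1)

# Two-component parallel system as a continuous-time Markov chain:
# each working component fails at rate LAM, each failed one is
# repaired at rate MU; the system fails when both are down.
LAM, MU, T = 0.01, 1.0, 100.0   # failure rate, repair rate, mission time

def walk():
    t, failed = 0.0, 0          # number of failed components
    while True:
        rate = (2 - failed) * LAM + failed * MU
        t += random.expovariate(rate)   # holding time in current state
        if t >= T:
            return 0                    # survived the mission
        if random.random() < (2 - failed) * LAM / rate:
            failed += 1                 # a failure event fired
            if failed == 2:
                return 1                # system failure (both down)
        else:
            failed -= 1                 # a repair event fired

N = 20000
est = sum(walk() for _ in range(N)) / N
print(est)   # small unreliability; analog MC needs many walks for rare events
```

The estimate is around 0.02 for these rates, and most walks contribute nothing, which is exactly the inefficiency that forced transition and failure biasing address.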
Aytaç Adalı, Esra; Tuş Işık, Ayşegül
2017-06-01
A decision making process requires the values of conflicting objectives for alternatives and the selection of the best alternative according to the needs of decision makers. Multi-objective optimization methods may provide a solution for this selection. This paper presents the laptop selection problem based on MOORA plus the full multiplicative form (MULTIMOORA) and multi-objective optimization on the basis of simple ratio analysis (MOOSRA), which are relatively new multi-objective optimization methods. The novelty of this paper lies in solving this problem with the MULTIMOORA and MOOSRA methods for the first time.
Model checking conditional CSL for continuous-time Markov chains
DEFF Research Database (Denmark)
Gao, Yang; Xu, Ming; Zhan, Naijun
2013-01-01
In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To this end, we extend the Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional...
On the Metric-based Approximate Minimization of Markov Chains
DEFF Research Database (Denmark)
Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand
2018-01-01
In this paper we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal distance to the original. The metric considered...
Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion
DEFF Research Database (Denmark)
Zunino, Andrea; Lange, Katrine; Melnikova, Yulia
2014-01-01
We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear...
Markov chains with quasitoeplitz transition matrix: first zero hitting
Directory of Open Access Journals (Sweden)
Alexander M. Dukhovny
1989-01-01
Full Text Available This paper continues the investigation of Markov Chains with a quasitoeplitz transition matrix. Generating functions of first zero hitting probabilities and mean times are found by the solution of special Riemann boundary value problems on the unit circle. Duality is discussed.
On the Total Variation Distance of Semi-Markov Chains
DEFF Research Database (Denmark)
Bacci, Giorgio; Bacci, Giovanni; Larsen, Kim Guldstrand
2015-01-01
Semi-Markov chains (SMCs) are continuous-time probabilistic transition systems where the residence time on states is governed by generic distributions on the positive real line. This paper shows the tight relation between the total variation distance on SMCs and their model checking problem over...
On the Metric-Based Approximate Minimization of Markov Chains
DEFF Research Database (Denmark)
Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand
2017-01-01
We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al...
Markov chains and mixing times
Levin, David A; Wilmer, Elizabeth L
2009-01-01
This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of r
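A two-state example makes the book's central theme concrete: the distance to stationarity decays geometrically at a rate set by the chain's second eigenvalue (the transition probabilities below are arbitrary illustrative values, and for a 2-state chain both the stationary distribution and the eigenvalue are available in closed form):

```python
# Two-state chain P = [[1-a, a], [b, 1-b]]: eigenvalues are 1 and 1-a-b.
a, b = 0.3, 0.1                  # illustrative transition probabilities
P = [[1 - a, a], [b, 1 - b]]
pi = (b / (a + b), a / (a + b))  # stationary distribution
lam2 = 1 - a - b                 # second eigenvalue of P

def tv_dist(t):
    # Total variation distance from stationarity after t steps,
    # starting deterministically in state 0.
    p = [1.0, 0.0]
    for _ in range(t):
        p = [p[0] * P[0][j] + p[1] * P[1][j] for j in range(2)]
    return 0.5 * (abs(p[0] - pi[0]) + abs(p[1] - pi[1]))

for t in (1, 5, 10):
    print(t, tv_dist(t))         # shrinks like abs(lam2) ** t
```

For larger state spaces no closed form exists, which is where the book's tools (coupling, strong stationary times, spectral methods) take over in bounding the same geometric rate.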
ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS
Directory of Open Access Journals (Sweden)
Mohammad Mirabi
2012-01-01
Full Text Available
ENGLISH ABSTRACT: In this research, a Markov analysis of acceptance sampling plans in a single stage and in two stages is proposed, based on the quality of the items inspected. In a stage of this policy, if the number of defective items in a sample of inspected items is more than the upper threshold, the batch is rejected. However, the batch is accepted if the number of defective items is less than the lower threshold. Nonetheless, when the number of defective items falls between the upper and lower thresholds, the decision-making process continues to inspect the items and collect further samples. The primary objective is to determine the optimal values of the upper and lower thresholds using a Markov process to minimise the total cost associated with a batch acceptance policy. A solution method is presented, along with a numerical demonstration of the application of the proposed methodology.
AFRIKAANSE OPSOMMING: In this research a Markov analysis is performed of acceptance sampling plans that take place in a single stage or in two stages, depending on the quality of the inspected items. If the first sample shows that the number of defective items exceeds an upper limit, the batch is rejected. If the first sample shows that the number of defective items is less than a lower limit, the batch is accepted. If the first sample shows that the number of defective items lies in the region between the upper and lower limits, the decision-making process is continued and further samples are taken. The primary aim is to determine the optimal values of the upper and lower limits by using a Markov process so that the total cost of the process can be minimised. A solution is then presented, together with a numerical example of the application of the proposed solution.
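A simulation sketch of the two-threshold policy described in the English abstract above (the sample size, thresholds, and defect rates are invented for illustration; the paper instead derives the optimal thresholds analytically via a Markov process):

```python
import random

random.seed(2)

# Per-stage sample size and the two decision thresholds:
# d > UPPER  -> reject the batch
# d < LOWER  -> accept the batch
# otherwise  -> inspect another sample
N_SAMPLE, LOWER, UPPER = 20, 2, 5

def accept_batch(p_defect, max_stages=50):
    for _ in range(max_stages):
        d = sum(random.random() < p_defect for _ in range(N_SAMPLE))
        if d > UPPER:
            return False
        if d < LOWER:
            return True
    return False    # give up after many indecisive stages

def accept_prob(p_defect, n=5000):
    # Monte Carlo estimate of the operating characteristic curve.
    return sum(accept_batch(p_defect) for _ in range(n)) / n

for p in (0.05, 0.15, 0.30):
    print(p, accept_prob(p))    # acceptance falls as quality worsens
```

The estimated acceptance probability is the operating characteristic of the plan; the paper's Markov formulation computes it (and the associated cost) exactly, so that the thresholds minimising total cost can be found without simulation.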
An approach to solve group-decision-making problems with ordinal interval numbers.
Fan, Zhi-Ping; Liu, Yang
2010-10-01
The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
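One plausible reading of the possibility degree, under the stated assumption that each ranking ordinal is uniformly and independently distributed on its integer interval (the exact definition and the tie convention here are guesses, and the intervals are made up, since the paper's formulas are not reproduced in the abstract):

```python
from fractions import Fraction

def possibility_degree(a, b):
    # Probability that ordinal interval number a outranks b, with each
    # ranking position uniform on its integer interval and ties counted
    # as one half. Smaller ordinal = better rank.
    (a1, a2), (b1, b2) = a, b
    total = Fraction(0)
    n = (a2 - a1 + 1) * (b2 - b1 + 1)
    for x in range(a1, a2 + 1):
        for y in range(b1, b2 + 1):
            if x < y:
                total += 1
            elif x == y:
                total += Fraction(1, 2)
    return total / n

A, B = (1, 3), (2, 5)              # hypothetical ordinal intervals
print(possibility_degree(A, B))    # chance that A ranks ahead of B
```

The degree is complementary, possibility_degree(A, B) + possibility_degree(B, A) = 1, which is what lets pairwise comparisons be collected into the expectation possibility degree matrix the approach optimises over.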
Markov chain solution of photon multiple scattering through turbid slabs.
Lin, Ying; Northrop, William F; Li, Xuesong
2016-11-14
This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with the commonly used Monte Carlo simulation for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in areas such as medical diagnosis, spray analysis, and atmospheric sciences.
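The matrix idea can be sketched in toy form (the angular discretisation and the forward-peaked phase function below are assumptions, not the paper's model): one scattering event becomes a row-stochastic matrix over angle bins, and n scattering events become n matrix multiplications.

```python
K = 8                                  # angle bins

def normalise(row):
    s = sum(row)
    return [x / s for x in row]

# Toy forward-peaked phase function: most weight stays near the
# current angle bin, falling off with bin distance.
M = [normalise([1.0 / (1 + abs(i - j)) for j in range(K)]) for i in range(K)]

def step(p, M):
    # One scattering event: multiply the angular distribution by M.
    return [sum(p[i] * M[i][j] for i in range(K)) for j in range(K)]

p = [0.0] * K
p[0] = 1.0                             # collimated beam enters in bin 0
for _ in range(5):                     # five scattering events
    p = step(p, M)

print([round(x, 3) for x in p])        # broadened angular distribution
```

Because the whole transport is a product of (sub)stochastic matrices, the transmitted/reflected distributions fall out of linear algebra, which is what makes matrix-based inversion plausible where Monte Carlo forward models are too slow to invert.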
International Nuclear Information System (INIS)
Vilalta, R; Ocegueda-Hernandez, F; Valerio, R; Watts, G
2010-01-01
Decision tree learning constitutes a suitable approach to classification due to its ability to partition the variable space into regions of class-uniform events, while providing a structure amenable to interpretation, in contrast to other methods such as neural networks. But an inherent limitation of decision tree learning is the progressive lessening of the statistical support of the final classifier as clusters of single-class events are split on every partition, a problem known as the fragmentation problem. We describe a software system called DTFE, for Decision Tree Fragmentation Evaluator, that measures the degree of fragmentation caused by a decision tree learner on every event cluster. Clusters are found through a decomposition of the data using a technique known as spectral clustering. Each cluster is analyzed in terms of the number and type of partitions induced by the decision tree. Our domain of application lies in the search for single top quark production, a challenging problem due to large and similar backgrounds, low-energy signals, and a low number of jets. The output of the machine-learning software tool consists of a series of statistics describing the degree of data fragmentation.
Directory of Open Access Journals (Sweden)
José Mateo
2017-01-01
Full Text Available Numerous contemporary problems that project managers face can be considered unstructured decision problems characterized by multiple actors and perspectives, incommensurable and/or conflicting objectives, and important intangibles. This work environment demands that project managers possess not only hard skills but also soft skills, with the ability to take a management perspective and, above all, develop real leadership capabilities. In this paper, a family of problem structuring methods for decision support, aimed at assisting project managers in tackling complex problems, is presented. Problem structuring methods are a family of soft operations research methods for decision support that help groups of diverse composition to agree on a problem focus and make commitments to consequential action. Project management programs are challenged to implement these methodologies in a way that is organized around the key competences a project manager needs in order to be more effective, work efficiently as a member of interdisciplinary teams, and successfully execute even a small project.
Consistency and refinement for Interval Markov Chains
DEFF Research Database (Denmark)
Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel
2012-01-01
Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the basis of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...
Umoren, Grace
2007-01-01
The aim of this study was to investigate the effect of Science-Technology-Society (STS) curriculum on students' scientific literacy, problem solving and decision making. Four hundred and eighty (480) Senior Secondary two science and non-science students were randomly selected from intact classes in six secondary schools in Calabar Municipality of…
International Nuclear Information System (INIS)
Watson, S.R.; Hayward, G.M.
1982-03-01
In our interim report we gave a general review of the characteristics of three formal methods for aiding decision making in relation to the general problems posed in radioactive waste management. In this report we go on to consider examples of the sort of proposals that the Environment Departments may be asked to review, and to discuss how two of the formal decision aids (cost-benefit analysis and decision analysis) could be used to assist these tasks. The example decisions we have chosen are the siting of an underground repository for intermediate-level wastes and the choice of a waste management procedure for an intermediate-level waste stream. (U.K.)
Katoen, Joost P.; Khattri, Maneesh; Zapreev, I.S.
2005-01-01
This short tool paper introduces MRMC, a model checker for discrete-time and continuous-time Markov reward models. It supports reward extensions of PCTL and CSL, and allows for the automated verification of properties concerning long-run and instantaneous rewards as well as cumulative rewards. In
Adaptive Partially Hidden Markov Models
DEFF Research Database (Denmark)
Forchhammer, Søren Otto; Rasmussen, Tage
1996-01-01
Partially Hidden Markov Models (PHMM) have recently been introduced. The transition and emission probabilities are conditioned on the past. In this report, the PHMM is extended with a multiple token version. The different versions of the PHMM are applied to bi-level image coding....
An Exploration of Dual Systems via Time Pressure Manipulation in Decision-making Problems
Guo, Lisa
2017-01-01
Every day, decisions need to be made where time is a limiting factor. Regardless of situation, time constraints often place a premium on rapid decision-making. Researchers have been interested in studying this human behavior and understanding its underlying cognitive processes. In previous studies, scientists have believed that the cognitive processes underlying decision-making behavior were consistent with dual-process modes of thinking. Critics of dual-process theory question the vagueness ...
The problems of enforcement of decisions on the deprivation of the driver's license
Directory of Open Access Journals (Sweden)
Oleg Beketov
2017-01-01
Full Text Available УДК 342.9. The subject of the research is the legal regulation and practice of enforcing the punishment of deprivation of the licence permitting the driving of various types of vehicles. The purpose of this article is to show that even in such a long-established, narrow and specific law-enforcement procedure as the execution of administrative punishment in the form of deprivation of the licence, there are characteristic gaps in legal regulation that lead to conflicts and enforcement risks. Methodology: analysis of the administrative-legal actions of officials of the State Traffic Safety Inspectorate and the State Technical Supervision Authority, as well as of the legislation on the enforcement of their decisions in cases of administrative offences. Results. The main causes of the enforcement problems can be identified, in order of importance: 1. The lack of the necessary normative legal acts regulating the procedure for interaction of the Rostekhnadzor bodies with the traffic police authorities and the courts (judges), and the rules of procedure for reissuing the tractor operator's certificate after the end of the period of deprivation, together with the absence of an approved format (sample) of medical certificate, that is, very significant omissions in administrative-legal regulation. 2. The failure of judges to comply with the provisions of part 2 of article 32.5 of the Administrative Code on submitting to the state technical supervision bodies, for execution, decisions on deprivation of the right to operate a tractor, self-propelled machine or other types of equipment. 3. The insufficient level of interaction between the police and Gostekhnadzor in the execution of administrative punishments in the field of traffic. 4. The lack of access for the state technical supervision bodies to the Federal information system, integrated into the necessary parts of the information system of the internal affairs bodies, and to the State information system on
Honest Importance Sampling with Multiple Markov Chains.
Tan, Aixin; Doss, Hani; Hobert, James P
2015-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
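The iid version of the estimator can be sketched as follows (the target and proposal densities are toy choices, not the paper's Bayesian examples; the regenerative, multiple-chain machinery is what the paper adds on top of this basic scheme):

```python
import math
import random

random.seed(3)

def pdf(x, mean, sd):
    # Normal density, used for both target and proposal.
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Estimate E_pi[X] for pi = N(2, 1) using samples from pi1 = N(0, 2).
N = 50000
total = 0.0
for _ in range(N):
    x = random.gauss(0.0, 2.0)                 # draw from pi1
    w = pdf(x, 2.0, 1.0) / pdf(x, 0.0, 2.0)    # importance weight pi/pi1
    total += x * w

est = total / N
print(est)   # close to 2, the mean under the target pi
```

With a heavier-tailed proposal such as this one the weights stay bounded and the iid CLT applies; replacing the iid draws with a Harris ergodic chain targeting π1 is exactly the step where the standard-error machinery becomes delicate and regeneration is needed.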
Directory of Open Access Journals (Sweden)
Pankaj Khanna
2014-04-01
Full Text Available An integrated information-system-based DSS is developed for Open and Distance Learning (ODL) institutions in India. The system is web structured with the most suitable newly developed modules. A DSS model has been developed for solving semi-structured and unstructured problems, including decision making with regard to the various programmes and activities operating in ODL institutions. The DSS model designed for problem solving is generally based on quantitative formulas, whereas for problems involving imprecision and uncertainty a fuzzy-theory-based DSS is employed. The computer-operated system thus developed helps ODL management to quickly identify programmes and activities that require immediate attention, and provides guidance for obtaining the most appropriate managerial decisions without loss of time. As a result, the various subsystems operating in an ODL institution can administer their activities more efficiently and effectively, raising the overall performance of the institution to a new level.
Celuch, Kevin; Saxby, Carl
2013-01-01
The present study extends understanding of the self-regulatory aspects of ethical decision making by integrating and exploring relationships among counterfactual thinking, attribution, anticipatory emotions, and ethical decision-making constructs and processes. Specifically, we examine the effects of a manipulation designed to stimulate a…
Neyman, Markov processes and survival analysis.
Yang, Grace
2013-07-01
J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
Asymptotic evolution of quantum Markov chains
Energy Technology Data Exchange (ETDEWEB)
Novotny, Jaroslav [FNSPE, CTU in Prague, 115 19 Praha 1 - Stare Mesto (Czech Republic); Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)
2012-07-01
Iterated quantum operations, so-called quantum Markov chains, play an important role in various branches of physics. They constitute the basis for many discrete models capable of exploring fundamental physical problems, such as the approach to thermal equilibrium or the asymptotic dynamics of macroscopic physical systems far from thermal equilibrium. On the other hand, in the more applied area of quantum technology they also describe general characteristic properties of quantum networks, and they can describe different quantum protocols in the presence of decoherence. A particularly interesting aspect of these quantum Markov chains is their asymptotic dynamics and its characteristic features. We demonstrate that there is always a vector subspace (typically low-dimensional) of so-called attractors on which the resulting superoperator governing the iterative time evolution of quantum states can be diagonalized and in which the asymptotic quantum dynamics takes place. As the main result, interesting algebraic relations are presented for this set of attractors, which allow one to specify their dual basis and to determine them in a convenient way. Based on this general theory we show some generalizations concerning the theory of fixed points or asymptotic evolution of random quantum operations.
Directory of Open Access Journals (Sweden)
Chizuru Shikishima
2015-11-01
Full Text Available Why does decision making differ among individuals? People sometimes make seemingly inconsistent decisions with lower expected (monetary) utility even when objective information of probabilities and rewards is provided. It is noteworthy, however, that a certain proportion of people do not provide anomalous responses, choosing the alternatives with higher expected utility, thus appearing to be more rational. We investigated the genetic and environmental influences on these types of individual differences in decision making using a classical Allais problem task. Participants were 1,199 Japanese adult twins aged 20–47. Univariate genetic analysis revealed that approximately a third of the Allais problem response variance was explained by genetic factors and the rest by environmental factors unique to individuals and measurement error. The environmental factor shared between families did not contribute to the variance. Subsequent multivariate genetic analysis clarified that decision making using the expected utility theory was associated with general intelligence and that the association was largely mediated by the same genetic factor. We approach the mechanism underlying two types of rational decision making from the perspective of genetic correlations with cognitive abilities.
Shikishima, Chizuru; Hiraishi, Kai; Yamagata, Shinji; Ando, Juko; Okada, Mitsuhiro
2015-01-01
Why does decision making differ among individuals? People sometimes make seemingly inconsistent decisions with lower expected (monetary) utility even when objective information about probabilities and rewards is provided. It is noteworthy, however, that a certain proportion of people do not provide anomalous responses, choosing the alternatives with higher expected utility, thus appearing to be more "rational." We investigated the genetic and environmental influences on these types of individual differences in decision making using a classical Allais problem task. Participants were 1,199 Japanese adult twins aged 20-47. Univariate genetic analysis revealed that approximately a third of the Allais problem response variance was explained by genetic factors and the rest by environmental factors unique to individuals and measurement error. The environmental factor shared between families did not contribute to the variance. Subsequent multivariate genetic analysis clarified that decision making using the expected utility theory was associated with general intelligence and that the association was largely mediated by the same genetic factor. We approach the mechanism underlying two types of "rational" decision making from the perspective of genetic correlations with cognitive abilities.
Peplak, Joanna; Song, Ju-Hyun; Colasante, Tyler; Malti, Tina
2017-10-01
This study examined the development of children's decisions, reasoning, and emotions in contexts of peer inclusion/exclusion. We asked an ethnically diverse sample of 117 children aged 4 years (n=59; 60% girls) and 8 years (n=58; 49% girls) to choose between including hypothetical peers of the same or opposite gender and with or without attention deficit/hyperactivity problems and aggressive behavior. Children also provided justifications for, and emotions associated with, their inclusion decisions. Both 4- and 8-year-olds predominantly chose to include the in-group peer (i.e., the same-gender peer and peers without behavior problems), thereby demonstrating a normative in-group inclusive bias. Nevertheless, children included the out-group peer more in the gender context than in the behavior problem contexts. The majority of children reported group functioning-related, group identity-related, and stereotype-related reasoning after their in-group inclusion decisions, and they associated happy feelings with such decisions. Although most children attributed sadness to the excluded out-group peer, they attributed more anger to the excluded out-group peer in the aggression context compared with other contexts. We discuss the implications of our findings for current theorizing about children's social-cognitive and emotional development in contexts of peer inclusion and exclusion.
Analyzing Taiwan IC Assembly Industry by Grey-Markov Forecasting Model
Directory of Open Access Journals (Sweden)
Lei-Chuan Lin
2013-01-01
This study utilizes black swan theory to discuss how to face a lack of historical data and outliers, which may exert influences so large that people cannot predict the economy from their knowledge or experience. These conditions also create the general dilemma, likewise considered in this study, of which prediction tool to use. For these reasons, this study takes the 2009 Q1 to 2010 Q4 quarterly revenue trend of Taiwan's semiconductor packaging and testing industry under the global financial turmoil as its basis and uses the grey prediction method to deal with nonlinear problems and small data sets. Under a lack of information and drastic economic change, this study applies a Markov model to the GM(1,1) and DGM(1,1) results to predict industry revenues. The results show that the accuracy for 2010 Q1-Q3 is 88.37%, 90.27%, and 91.13%, respectively, better than the GM(1,1) and DGM(1,1) results of 86.51%, 77.35%, 75.46% and 73.77%, 74.25%, 59.72%. The results show that the predictive ability of grey prediction with a Markov model is better than that of traditional GM(1,1) and DGM(1,1) in facing the changes of the financial crisis. The results also show that grey-Markov chain prediction can serve as a sound criterion for decision-makers' judgment even when the environment has undergone drastic changes that bring the impact of unpredictable conditions.
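The GM(1,1) stage of such a grey-Markov model can be sketched in a few lines (a hedged sketch following the standard textbook GM(1,1) formulation; the revenue series below is invented, and the paper's Markov residual-correction stage is not reproduced):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a short series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]          # inverse AGO
    return x0_hat[len(x0):]

quarterly = [100.0, 112.0, 123.0, 138.0, 151.0]         # invented revenue series
print(gm11_forecast(quarterly, steps=2))
```

Because the model fits an exponential trend to the accumulated series, it works well precisely in the small-sample, near-exponential setting the abstract describes.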
Operations and support cost modeling using Markov chains
Unal, Resit
1989-01-01
Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
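A minimal sketch of how a Markov chain can roll up O&S costs (the states, transition probabilities, and per-visit costs below are invented for illustration; the paper's HSTV model is far more detailed): expected visit counts come from the fundamental matrix N = (I - Q)^-1 of an absorbing chain, and the expected total cost from each start state is N times the per-visit cost vector.

```python
import numpy as np

# Transient states of a hypothetical vehicle turnaround process (illustrative):
# 0 = inspection, 1 = repair, 2 = launch preparation; absorbing state = mission flown.
Q = np.array([
    [0.0, 0.3, 0.7],   # inspection -> repair / launch prep
    [0.4, 0.0, 0.6],   # repair may trigger re-inspection
    [0.0, 0.1, 0.0],   # launch prep -> repair, else absorb ("flown") w.p. 0.9
])
cost = np.array([1.0, 5.0, 2.0])   # assumed cost ($M) incurred per visit

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visit counts
expected_cost = N @ cost           # expected total O&S cost from each start state
print(expected_cost[0])            # cost of one full cycle starting at inspection
```

The same fundamental matrix also yields expected turnaround times if `cost` is replaced by per-state durations, which is what makes the Markov formulation attractive as a design-aid tool.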
Directory of Open Access Journals (Sweden)
Martin Tetaz
2014-06-01
This paper discusses two different alternatives for dealing with the problem of multiple objectives in decision making. Even Swaps and Choice-Based Conjoint analysis are analyzed using a choice between hypothetical jobs as the decision frame. We show that Choice-Based Conjoint analysis can be used not only to value the different trade-offs involved, but also to predict people's choices even when they are not aware of the trade-offs between objectives. Finally, a tailored pilot survey is used to show the Choice-Based method in practice, allowing us to obtain important conclusions regarding people's willingness to pay for several aspects of labor formality.
International Nuclear Information System (INIS)
Keppo, Ilkka; Strubegger, Manfred
2010-01-01
This paper presents the development and demonstration of a limited foresight energy system model. The presented model is implemented as an extension to a large, linear optimization model, MESSAGE. The motivation behind changing the model is to provide an alternative decision framework, where information for the full time frame is not available immediately and sequential decision making under incomplete information is implied. While the traditional optimization framework provides the globally optimal decisions for the modeled problem, the framework presented here may offer a better description of the decision environment, under which decision makers must operate. We further modify the model to accommodate flexible dynamic constraints, which give an option to implement investments faster, albeit with a higher cost. Finally, the operation of the model is demonstrated using a moving window of foresight, with which decisions are taken for the next 30 years, but can be reconsidered later, when more information becomes available. We find that the results demonstrate some of the pitfalls of short term planning, e.g. lagging investments during earlier periods lead to higher requirements later during the century. Furthermore, the energy system remains more reliant on fossil based energy carriers, leading to higher greenhouse gas emissions.
Adiabatic condition and the quantum hitting time of Markov chains
International Nuclear Information System (INIS)
Krovi, Hari; Ozols, Maris; Roland, Jeremie
2010-01-01
We present an adiabatic quantum algorithm for the abstract problem of searching marked vertices in a graph, or spatial search. Given a random walk (or Markov chain) P on a graph with a set of unknown marked vertices, one can define a related absorbing walk P′ where outgoing transitions from marked vertices are replaced by self-loops. We build a Hamiltonian H(s) from the interpolated Markov chain P(s) = (1-s)P + sP′ and use it in an adiabatic quantum algorithm to drive an initial superposition over all vertices to a superposition over marked vertices. The adiabatic condition implies that, for any reversible Markov chain and any set of marked vertices, the running time of the adiabatic algorithm is given by the square root of the classical hitting time. This algorithm therefore demonstrates a novel connection between the adiabatic condition and the classical notion of hitting time of a random walk. It also significantly extends the scope of previous quantum algorithms for this problem, which could only obtain a full quadratic speedup for state-transitive reversible Markov chains with a unique marked vertex.
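The classical hitting time that sets the quantum running time can be computed directly from the walk's substochastic part (a hedged sketch; the cycle graph, the single marked vertex, and the stationary-average convention are illustrative assumptions): with Q the restriction of P to unmarked vertices, the expected hitting times h solve (I - Q)h = 1.

```python
import numpy as np

n = 8                          # vertices of a cycle graph (illustrative)
P = np.zeros((n, n))
for i in range(n):             # simple random walk: hop to either neighbour
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.5

marked = {0}                   # marked vertex to be searched for
unmarked = [i for i in range(n) if i not in marked]
Q = P[np.ix_(unmarked, unmarked)]

# Expected number of steps to hit the marked set from each unmarked start:
h = np.linalg.solve(np.eye(len(unmarked)) - Q, np.ones(len(unmarked)))
# One common convention: average over the (uniform) stationary distribution
HT = np.sum(h) / n
print(HT)
```

The adiabatic algorithm of the abstract would then run in time on the order of the square root of `HT`.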
Directory of Open Access Journals (Sweden)
Xia Lei
2010-12-01
General multi-objective optimization methods have difficulty obtaining prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper exhibits the effectiveness of the proposed method on a real application, multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results demonstrate that Bayesian decision-making based on prior information has better global searching capability when sampling data are deficient.
Bayesian tomography by interacting Markov chains
Romary, T.
2017-12-01
In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high-dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class makes it possible to design interacting schemes that can take advantage of the whole history of the chains, by authorizing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival travel time tomography.
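A toy version of parallel tempering on a deliberately bimodal 1-D target shows the swap mechanism the abstract builds on (everything here, the target, the temperature ladder, and the proposal scale, is an illustrative assumption, not the tomography setting):

```python
import numpy as np

rng = np.random.default_rng(0)

def logp(x):
    # Bimodal target: mixture of two well-separated unit Gaussians (illustrative)
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]       # assumed temperature ladder
x = np.zeros(len(temps))           # one chain per temperature
samples = []

for it in range(20000):
    # Metropolis update within each tempered chain (target: logp / T)
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(scale=1.5)
        if np.log(rng.random()) < (logp(prop) - logp(x[i])) / T:
            x[i] = prop
    # Propose a swap between a random pair of adjacent temperatures
    i = rng.integers(len(temps) - 1)
    dlog = (1 / temps[i] - 1 / temps[i + 1]) * (logp(x[i + 1]) - logp(x[i]))
    if np.log(rng.random()) < dlog:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])           # keep only the T = 1 chain

samples = np.array(samples[2000:])  # discard burn-in
```

Without the swap moves, the T = 1 chain would typically stay trapped in one mode; the hot chains cross the barrier and hand well-mixed states down the ladder.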
Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems
Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen
Database classification suffers from two well-known difficulties, namely the high dimensionality and non-stationary variations within large historic data. This paper presents a hybrid classification model that integrates a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various database applications. The model is mainly based on the idea that the historic database can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the data currently being classified, using inductions from these smaller case-based fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of the proposed model is demonstrated by experimental comparison with other approaches on different database classification applications. The average hit rate of the proposed model is the highest among those compared.
Markov chains and mixing times
Levin, David A
2017-01-01
Markov Chains and Mixing Times is a magical book, managing to be both friendly and deep. It gently introduces probabilistic techniques so that an outsider can follow. At the same time, it is the first book covering the geometric theory of Markov chains and has much that will be new to experts. It is certainly THE book that I will use to teach from. I recommend it to all comers, an amazing achievement. -Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics, Stanford University Mixing times are an active research topic within many fields from statistical physics to the theory of algorithms, as well as having intrinsic interest within mathematical probability and exploiting discrete analogs of important geometry concepts. The first edition became an instant classic, being accessible to advanced undergraduates and yet bringing readers close to current research frontiers. This second edition adds chapters on monotone chains, the exclusion process and hitting time parameters. Having both exercises...
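The book's central quantity, the total-variation mixing time, is easy to compute exactly for a small chain (a sketch; the lazy walk on a 6-cycle and the 1/4 threshold are standard illustrative choices, not taken from the book's text):

```python
import numpy as np

n = 6
P = np.zeros((n, n))
for i in range(n):            # lazy simple random walk on a cycle (illustrative)
    P[i, i] = 0.5
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25

pi = np.full(n, 1 / n)        # uniform stationary distribution

def mixing_time(P, pi, eps=0.25):
    """Smallest t with worst-case total-variation distance to pi at most eps."""
    Pt = np.eye(len(pi))
    t = 0
    while True:
        d = 0.5 * np.max(np.abs(Pt - pi).sum(axis=1))  # worst-case TV distance
        if d <= eps:
            return t
        Pt = Pt @ P
        t += 1

print(mixing_time(P, pi))
```

Laziness makes the chain aperiodic, which is what guarantees the loop terminates; the same routine applied to larger cycles exhibits the quadratic growth of mixing time discussed in the book.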
A multi-objective decision-making approach to the journal submission problem.
Directory of Open Access Journals (Sweden)
Tony E Wong
When researchers complete a manuscript, they need to choose a journal to which they will submit the study. This decision requires navigating trade-offs between multiple objectives. One objective is to share the new knowledge as widely as possible. Citation counts can serve as a proxy to quantify this objective. A second objective is to minimize the time commitment put into sharing the research, which may be estimated by the total time from initial submission to final decision. A third objective is to minimize the number of rejections and resubmissions. Thus, researchers often consider the trade-offs between the objectives of (i) maximizing citations, (ii) minimizing time-to-decision, and (iii) minimizing the number of resubmissions. To complicate matters further, this is a decision with multiple, potentially conflicting, decision-maker rationalities. Co-authors might have different preferences, for example about publishing fast versus maximizing citations. These diverging preferences can lead to conflicting trade-offs between objectives. Here, we apply a multi-objective decision analytical framework to identify the Pareto front between these objectives and determine the set of journal submission pathways that balance these objectives for three stages of a researcher's career. We find multiple strategies that researchers might pursue, depending on how they value minimizing risk and effort relative to maximizing citations. The sequences that maximize expected citations within each strategy are generally similar, regardless of time horizon. We find that the "conditional impact factor" (impact factor times acceptance rate) is a suitable heuristic for ranking journals, striking a balance between the effort-minimizing objectives and maximizing citation count. Finally, we examine potential co-author tension resulting from differing rationalities by mapping out each researcher's preferred Pareto front and identifying compromise submission strategies.
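The trade-off calculus can be sketched with a tiny expected-value model (the journal statistics below are invented placeholders, and this sketch ignores revision time and the risk preferences the paper analyzes):

```python
# Hypothetical journals: (acceptance probability, expected citations if published,
# months from submission to decision). All values are illustrative, not real data.
journals = {
    "A": (0.10, 40.0, 6.0),
    "B": (0.40, 15.0, 3.0),
    "C": (0.80, 5.0, 2.0),
}

def evaluate(sequence):
    """Expected citations, months, and resubmissions for a submission pathway."""
    e_cit = e_time = e_resub = 0.0
    p_reach = 1.0                       # probability the paper reaches this journal
    for k, name in enumerate(sequence):
        p, cit, months = journals[name]
        e_time += p_reach * months
        e_cit += p_reach * p * cit
        if k < len(sequence) - 1:       # a final rejection is not a resubmission
            e_resub += p_reach * (1 - p)
        p_reach *= (1 - p)
    return e_cit, e_time, e_resub

print(evaluate(["A", "B", "C"]))        # "shoot high first" pathway
print(evaluate(["B", "C"]))             # lower-risk pathway
```

Ranking by the conditional-impact-factor heuristic (acceptance probability times citations) would place journal B first in this toy data, illustrating how the heuristic trades citations against effort.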
The Use Of Business Games In Problem Solving And Decision Making
Skiltere, D.; Bausova, I.
2004-01-01
The purpose of the research is to demonstrate the possibilities of simulation games in the education process, in managerial-ability compliance tests, in training managers in small and middle-sized business, and in decision-making. The use of simulation games in decision making gives an opportunity to prevent these drawbacks, although this kind of use of the games is not the most complicated, labour-consuming, or ultimately the most expensive.
Analysis and design of Markov jump systems with complex transition probabilities
Zhang, Lixian; Shi, Peng; Zhu, Yanzheng
2016-01-01
The book addresses control issues such as stability analysis, control synthesis and filter design of Markov jump systems with the above three types of TPs, and is thus mainly divided into three parts. Part I studies Markov jump systems with partially unknown TPs. Different methodologies with different degrees of conservatism for the basic stability and stabilization problems are developed and compared. Then the problems of state estimation, the control of systems with time-varying delays, and the case involving both partially unknown TPs and uncertain TPs in a composite way are also tackled. Part II deals with Markov jump systems with piecewise homogeneous TPs. Methodologies that can effectively handle control problems in this scenario are developed, including one coping with the asynchronous switching phenomenon between the currently activated system mode and the controller/filter to be designed. Part III focuses on Markov jump systems with memory TPs. The concept of σ-mean square stability is propo...
Markov Chain Ontology Analysis (MCOA).
Frost, H Robert; McCray, Alexa T
2012-02-03
Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
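The core computation, importance scores as the stationary distribution of an ergodic chain over the combined class/instance graph, can be sketched as follows (the 4-node transition matrix is invented; MCOA's actual adjusted transition matrix construction over ontology classes and instances is considerably more involved):

```python
import numpy as np

# Toy "ontology + instances" graph encoded directly as a row-stochastic
# transition matrix (structure is illustrative, not MCOA's construction).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.3, 0.0, 0.3, 0.4],
    [0.3, 0.3, 0.0, 0.4],
    [0.0, 0.5, 0.5, 0.0],
])

vals, vecs = np.linalg.eig(P.T)          # stationary dist = left eigenvector of P
pi = np.real(vecs[:, np.argmax(np.real(vals))])  # Perron eigenvalue is 1
pi /= pi.sum()                           # normalize to a probability vector
print(pi)                                # relative importance of each node
```

Because the chain is finite and ergodic, the stationary vector exists and is unique, which is what lets such eigenvector scores rank classes against the data set members.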
Directory of Open Access Journals (Sweden)
Xinshang You
2016-09-01
This paper proposes a novel approach to multi-criteria group decision-making problems. We give the pairwise comparisons based on the best-worst method (BWM), which can decrease the number of comparisons. Additionally, our comparison results are determined with both positive and negative aspects. In order to deal with the decision matrices effectively, we consider the elimination and choice translating reality (ELECTRE) III method under the intuitionistic multiplicative preference relations environment. The ELECTRE III method is designed as a double-automatic system. Under a certain limitation, without asking the decision-makers to reevaluate the alternatives, this system can adjust some special elements that have the most influence on the group's satisfaction degree. Moreover, the proposed method is suitable for both intuitionistic multiplicative preference relations and interval-valued fuzzy preference relations through a transformation formula. An illustrative example follows to demonstrate the rationality and applicability of the novel method.
Markov processes characterization and convergence
Ethier, Stewart N
2009-01-01
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists."[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference."-American Scientist"There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings."-Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts"Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook."-Journal of Statistical PhysicsMarkov Proce...
An Integrated approach to the Space Situational Awareness Problem
2016-12-15
...DDDAS applications. The algorithm is a heuristic solution to the underlying partially observed Markov decision problem (POMDP) that does not suffer from...
Trexler, M.
2017-12-01
Policy-makers today have almost infinite climate-relevant scientific and other information available to them. The problem for climate change decision-making isn't missing science or inadequate knowledge of climate risks; the problem is that the "right" climate change actionable knowledge isn't getting to the right decision-maker, or is getting there too early or too late to effectively influence her decision-making. Actionable knowledge is not one-size-fits-all, and for a given decision-maker might involve scientific, economic, or risk-based information. Simply producing more and more information as we are today is not the solution, and actually makes it harder for individual decision-makers to access "their" actionable knowledge. The Climatographers began building the Climate Web five years ago to test the hypothesis that a knowledge management system could help navigate the gap between infinite information and individual actionable knowledge. Today the Climate Web's more than 1,500 index terms allow instant access to almost any climate change topic. It is a curated public-access knowledgebase of more than 1,000 books, 2,000 videos, 15,000 reports and articles, 25,000 news stories, and 3,000 websites. But it is also much more, linking together tens of thousands of individually extracted ideas and graphics, and providing Deep Dives into more than 100 key topics from changing probability distributions of extreme events to climate communications best practices to cognitive dissonance in climate change decision-making. The public-access Climate Web is uniquely able to support cross-silo learning, collaboration, and actionable knowledge dissemination. The presentation will use the Climate Web to demonstrate why knowledge management should be seen as a critical component of science and policy-making collaborations.
Operations research problems statements and solutions
Poler, Raúl; Díaz-Madroñero, Manuel
2014-01-01
The objective of this book is to provide a valuable compendium of problems as a reference for undergraduate and graduate students, faculty, researchers and practitioners of operations research and management science. These problems can serve as a basis for the development or study of assignments and exams. Also, they can be useful as a guide for the first stage of model formulation, i.e. the definition of a problem. The book is divided into 11 chapters that address the following topics: linear programming, integer programming, nonlinear programming, network modeling, inventory theory, queueing theory, decision trees, game theory, dynamic programming and Markov processes. Readers will find a considerable number of statements of operations research applications for management decision-making. The solutions of these problems are provided in a concise way, although all topics start with a more developed resolution. The proposed problems are based on the research experience of the authors in real-world com...
Directory of Open Access Journals (Sweden)
Masoud Rabbani
2013-09-01
The integration of various logistical components in supply chain management, such as transportation, inventory control and facility location, is becoming common practice to avoid sub-optimization in today's competitive environment. The integration of transportation and inventory decisions is known in the literature as the inventory routing problem (IRP). The problem aims to determine the delivery quantity for each customer and the network routes to be used in each period so that total inventory and transportation costs are minimized. In contrast to the conventional IRP, in which each retailer can only obtain its demand from the supplier, this paper proposes a new multi-period, multi-item IRP model that considers lateral transshipment, backlogging and financial decisions as a business model for a distinct organization. The main purpose of this paper is to present an applicable inventory routing model that reflects real-world settings and to solve it with an appropriate method.
The illusion of handy wins: Problem gambling, chasing, and affective decision-making.
Nigro, Giovanna; Ciccarelli, Maria; Cosenza, Marina
2018-01-01
Chasing losses is a behavioral marker of and a diagnostic criterion for gambling disorder. It consists of continuing to gamble to recoup previous losses. Although chasing has been recognized as playing a central role in gambling disorder, research on this topic is relatively scarce, and it remains unclear whether chasing affects decision-making in behavioral tasks in which participants gain or lose money. Even though several studies have found that the greater the gambling involvement, the poorer the decision-making, to date no research has investigated the role of chasing in decision-making. This study aimed to provide a first investigation of the relation between chasing and decision-making in adult gamblers. One hundred and four VLT players were administered the South Oaks Gambling Screen (SOGS), a computerized task measuring chasing, and the Iowa Gambling Task (IGT). Correlation analysis showed that the higher the SOGS scores, the higher the propensity to chase, and the poorer the decision-making performance. Regression analysis revealed that chasing propensity and gambling severity predicted IGT performance. Mediation analysis indicated that the association between gambling severity and poor decision-making is mediated by chasing. Gambling severity was assessed by means of a self-report measure. The generalizability of the findings is limited, since the study focused only on VLT players. This study provides the first evidence that chasing, along with gambling severity, affects decision-making, at least in behavioral tasks involving money. Since chasers and non-chasers could be two different sub-types of gamblers, treatment protocols should take into account the additive role of chasing in gambling disorder.
SHARP ENTRYWISE PERTURBATION BOUNDS FOR MARKOV CHAINS.
Thiede, Erik; VAN Koten, Brian; Weare, Jonathan
For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We have derived perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity. Our bounds are sharp, we do not impose any structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations.
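The phenomenon can be demonstrated on a nearly uncoupled three-state chain (a hedged sketch, not the paper's statistical-physics example; the chain and the perturbed entry are invented): an absolute perturbation of order 1e-3 in a bottleneck entry moves the stationary distribution by tens of percent, while the same-size perturbation of a large entry would barely move it.

```python
import numpy as np

def stationary(P):
    # Solve pi P = pi together with sum(pi) = 1 as one overdetermined system
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.r_[np.zeros(n), 1.0]
    return np.linalg.lstsq(A, b, rcond=None)[0]

eps = 1e-3   # bottleneck scale (illustrative)
P = np.array([
    [1 - eps, eps, 0.0],
    [eps, 1 - 2 * eps, eps],
    [0.0, eps, 1 - eps],
])
pi = stationary(P)                 # uniform: the chain is doubly stochastic

P2 = P.copy()                      # perturb one bottleneck entry, renormalize row
P2[0, 1] *= 2.0
P2[0, 0] = 1 - P2[0, 1]
pi2 = stationary(P2)

rel_err = np.max(np.abs(pi2 - pi) / pi)
print(rel_err)                     # large despite the tiny absolute perturbation
```

Intuitively, hitting times between the near-decoupled blocks are enormous, which is exactly the sensitivity structure the paper's bounds capture.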
International Nuclear Information System (INIS)
Ferrada, J.J.; Welch, T.D.; Osborne-Lee, I.W.; Nehls, J.W. Jr.
1995-01-01
Systems analysis methods and tools have been developed and applied to the problem of selecting treatment technologies for mixed wastes. The approach, which is based on decision analysis, process modeling, and process simulation with a tool developed in-house, provides a one-of-a-kind resource for evaluating waste treatment alternatives and has played a key role in developing mandated treatment plans for Oak Ridge Reservation mixed waste.
International Nuclear Information System (INIS)
Rydell, R.J.
1980-01-01
One objective of this study is to develop a framework of analysis that is useful for investigating the conditions shaping the respective roles of science and politics in decision making on technology policy. The analytical framework used focuses upon the interactive R and D process and specifies the factors affecting change in and of that process. The distinguishing feature of this new analytical framework is its utility for investigating how participants in an R and D process go about defining and solving a growing variety of problems that they encounter as the costs, impacts, and stakes of technological change become more readily apparent. The framework is then applied to a particularly complex and politically controversial technology, the nuclear breeder reactor. Britain and the United States, the original pioneers of technology utilizing plutonium to produce electricity, were singled out in order to test the utility of the analytical framework for the comparative study of the R and D decision-making process. Although the study does not purport to have exhausted all possible interpretations of this complex subject, the results of the study suggest that the interactive R and D process represents an improvement over conventional modes of conceptualizing how R and D policies are formulated and changed. Efforts to resolve major national and international problems relating to science and technology will ultimately succeed only to the extent that these efforts are grounded in a deeper understanding of the conditions affecting how these problems are defined and approached in actual decision-making environments.
Crossing over...Markov meets Mendel.
Mneimneh, Saad
2012-01-01
Chromosomal crossover is a biological mechanism to combine parental traits. It is perhaps the first mechanism ever taught in any introductory biology class. The formulation of crossover, and resulting recombination, came about 100 years after Mendel's famous experiments. To a great extent, this formulation is consistent with the basic genetic findings of Mendel. More importantly, it provides a mathematical insight for his two laws (and corrects them). From a mathematical perspective, and while it retains similarities, genetic recombination guarantees diversity so that we do not rapidly converge to the same being. It is this diversity that made the study of biology possible. In particular, the problem of genetic mapping and linkage-one of the first efforts towards a computational approach to biology-relies heavily on the mathematical foundation of crossover and recombination. Nevertheless, as students we often overlook the mathematics of these phenomena. Emphasizing the mathematical aspect of Mendel's laws through crossover and recombination will prepare the students to make an early realization that biology, in addition to being experimental, IS a computational science. This can serve as a first step towards a broader curricular transformation in teaching biological sciences. I will show that a simple and modern treatment of Mendel's laws using a Markov chain will make this step possible, and it will only require basic college-level probability and calculus. My personal teaching experience confirms that students WANT to know Markov chains because they hear about them from bioinformaticists all the time. This entire exposition is based on three homework problems that I designed for a course in computational biology. A typical reader is, therefore, an instructional staff member or a student in a computational field (e.g., computer science, mathematics, statistics, computational biology, bioinformatics). However, other students may easily follow by omitting the
Charles, Cathy; Gafni, Amiram
2014-03-01
Two international movements, evidence-based medicine (EBM) and shared decision-making (SDM) have grappled for some time with issues related to defining the meaning, role and measurement of values/preferences in their respective models of treatment decision-making. In this article, we identify and describe unresolved problems in the way that each movement addresses these issues. The starting point for this discussion is that at least two essential ingredients are needed for treatment decision-making: research information about treatment options and their potential benefits and risks; and the values/preferences of participants in the decision-making process. Both the EBM and SDM movements have encountered difficulties in defining the meaning, role and measurement of values/preferences in treatment decision-making. In the EBM model of practice, there is no clear and consistent definition of patient values/preferences and no guidance is provided on how to integrate these into an EBM model of practice. Methods advocated to measure patient values are also problematic. Within the SDM movement, patient values/preferences tend to be defined and measured in a restrictive and reductionist way as patient preferences for treatment options or attributes of options, while broader underlying value structures are ignored. In both models of practice, the meaning and expected role of physician values in decision-making are unclear. Values clarification exercises embedded in patient decision aids are suggested by SDM advocates to identify and communicate patient values/preferences for different treatment outcomes. Such exercises have the potential to impose a particular decision-making theory and/or process onto patients, which can change the way they think about and process information, potentially impeding them from making decisions that are consistent with their true values. The tasks of clarifying the meaning, role and measurement of values/preferences in treatment decision
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
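The trellis search described above is the standard Viterbi recursion. A minimal sketch for a generic discrete-state hidden Markov model (not the paper's interpolation-specific state and observation model; the probabilities below are illustrative assumptions):

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state sequence for a discrete HMM.

    log_pi: (S,) log initial state probabilities
    log_A:  (S, S) log transition probabilities (from -> to)
    log_B:  (S, O) log emission probabilities
    obs:    sequence of observation indices
    """
    S = len(log_pi)
    T = len(obs)
    delta = np.empty((T, S))            # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)   # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (S, S): from -> to
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    # backtrack along the stored pointers
    path = np.empty(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path
```

In the paper's setting the "states" would be candidate interpolation functions and the trellis runs over missing-pixel positions; the recursion itself is unchanged.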
A Markov chain model for CANDU feeder pipe degradation
International Nuclear Information System (INIS)
Datla, S.; Dinnie, K.; Usmani, A.; Yuan, X.-X.
2008-01-01
There is a need for a risk-based approach to managing feeder pipe degradation to ensure safe operation by minimizing the nuclear safety risk. The current lack of understanding of some fundamental degradation mechanisms results in uncertainty in predicting the rupture frequency. There are still concerns caused by uncertainties in the inspection techniques and engineering evaluations, which should be addressed in the current procedures. A probabilistic approach is therefore useful in quantifying the risk, and it also provides a tool for risk-based decision making. This paper discusses the application of a Markov chain model to feeder pipes in order to predict and manage the risks associated with existing and future aging-related feeder degradation mechanisms. The major challenge in the approach is the lack of service data for characterizing the transition probabilities of the Markov model. The paper also discusses various approaches to estimating plant-specific degradation rates. (author)
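A discrete-time Markov chain of the kind described can be sketched as follows. The four condition states and the per-interval transition probabilities are purely illustrative assumptions, not plant data:

```python
import numpy as np

# Hypothetical 4-state condition model: 0 = as-new, 1 = minor wall thinning,
# 2 = severe thinning, 3 = rupture (absorbing). Probabilities per inspection
# interval are illustrative only.
P = np.array([
    [0.95, 0.05, 0.00, 0.00],
    [0.00, 0.90, 0.10, 0.00],
    [0.00, 0.00, 0.97, 0.03],
    [0.00, 0.00, 0.00, 1.00],
])

def state_distribution(p0, P, n):
    """Distribution over condition states after n inspection intervals."""
    return p0 @ np.linalg.matrix_power(P, n)

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # pipe starts as-new
p30 = state_distribution(p0, P, 30)
rupture_prob = p30[3]                 # cumulative rupture probability
```

Because the rupture state is absorbing, its entry in the distribution is the cumulative probability of rupture by interval n, which is the quantity a risk-based decision would act on.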
An Exploration of Dual Systems via Time Pressure Manipulation in Decision-making Problems
Guo, Lisa
Every day, decisions need to be made where time is a limiting factor. Regardless of the situation, time constraints often place a premium on rapid decision-making. Researchers have been interested in studying this human behavior and understanding its underlying cognitive processes. In previous studies, scientists have believed that the cognitive processes underlying decision-making behavior were consistent with dual-process modes of thinking. Critics of dual-process theory question the vagueness of its definition, and claim that single-process accounts can explain the data just as well. My aim is to elucidate the cognitive processes that underlie decisions which involve some level of risk through the experimental manipulation of time pressure. Using this method, I hope to distinguish between competing hypotheses related to the origin of the effect. I will explore three types of decisions that illustrate these concepts: risky decision-making involving gambles, intertemporal choice, and one-shot public goods games involving social cooperation. In our experiments, participants made decisions about gambles framed as either gains or losses; decided upon intertemporal choices for smaller but sooner rewards or larger but later rewards; and played a one-shot public goods game involving social cooperation and contributing an amount of money to a group. In each case, we experimentally manipulated time pressure, either within subjects or among individuals. Results showed that, under time pressure, framing effects increased in both hypothetical and incentivized choices, and contributions and cooperation among individuals increased, lending support to the dual-process hypothesis that these effects arise from a fast, intuitive system. However, our intertemporal choice experiment showed that time constraints led to increased selection of the larger but later options, which suggests that the magnitude of the reward may play a larger role in choice selection under cognitive load than
Verification of Open Interactive Markov Chains
Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech
2012-01-01
Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far however, IMC verification did not consider compositionality properties, but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....
Spectral methods for quantum Markov chains
Energy Technology Data Exchange (ETDEWEB)
Szehr, Oleg
2014-05-08
The aim of this project is to contribute to our understanding of quantum time evolutions, whereby we focus on quantum Markov chains. The latter constitute a natural generalization of the ubiquitous concept of a classical Markov chain to describe evolutions of quantum mechanical systems. We contribute to the theory of such processes by introducing novel methods that allow us to relate the eigenvalue spectrum of the transition map to convergence as well as stability properties of the Markov chain.
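The link between the eigenvalue spectrum of the transition map and convergence can be illustrated in the classical setting (the quantum case replaces the stochastic matrix by a quantum channel); a hedged sketch with an illustrative two-state chain:

```python
import numpy as np

def spectral_gap(P):
    """1 minus the second-largest eigenvalue modulus of a stochastic
    matrix P. For an ergodic chain this gap governs the geometric rate
    of convergence to the stationary distribution."""
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - moduli[1]

# Two-state chain: the eigenvalues are 1 and 1 - a - b, so the gap is a + b.
a, b = 0.3, 0.2
P = np.array([[1 - a, a], [b, 1 - b]])
gap = spectral_gap(P)                 # here 1 - |1 - a - b| = 0.5
pi = np.array([b, a]) / (a + b)       # stationary distribution of this chain
# the distance to stationarity shrinks by a factor |1 - a - b| per step
```

The thesis's contribution is, roughly, transferring bounds of this flavor to quantum channels, where the "matrix" acts on density operators rather than probability vectors.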
Directory of Open Access Journals (Sweden)
Pier Luigi Baldi
2006-06-01
Full Text Available This article points out some conditions which significantly influence decision making, and compares decision making and problem solving as interconnected processes. Some strategies of decision making are also examined.
A scaling analysis of a cat and mouse Markov chain
Litvak, Nelli; Robert, Philippe
2012-01-01
If $(C_n)$ is a Markov chain on a discrete state space $S$, a Markov chain $(C_n, M_n)$ on the product space $S \times S$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain and the second component changes only when both
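A simulation sketch of this construction, assuming (as the truncated abstract suggests) that the mouse coordinate moves only at steps where cat and mouse coincide, with a random walk on a 10-cycle as an illustrative underlying chain:

```python
import random

def cat_and_mouse(step, c0, m0, n_steps, rng):
    """Simulate the cat-and-mouse chain.

    step(x, rng) draws the next state of the underlying chain from x.
    The cat moves at every step; the mouse moves (with the same kernel)
    only at steps where cat and mouse currently occupy the same state.
    This 'mouse moves on coincidence' rule is an assumption reconstructed
    from the abstract, not a verified restatement of the paper.
    """
    c, m = c0, m0
    path = [(c, m)]
    for _ in range(n_steps):
        if c == m:
            m = step(m, rng)   # mouse only reacts when caught up with
        c = step(c, rng)
        path.append((c, m))
    return path

# Underlying chain: simple random walk on the cycle Z/10
def rw_step(x, rng):
    return (x + rng.choice((-1, 1))) % 10
```

The point of the scaling analysis in the paper is the long-run behaviour of the slow (mouse) coordinate driven by the rare coincidence events.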
Criterion of Semi-Markov Dependent Risk Model
Institute of Scientific and Technical Information of China (English)
Xiao Yun MO; Xiang Qun YANG
2014-01-01
A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results make the relations between elements of the semi-Markov dependent risk model clearer and are also applicable to the Markov dependent risk model.
Brevers, Damien; Noël, Xavier; He, Qinghua; Melrose, James A; Bechara, Antoine
2016-05-01
The aim of this study was to examine the impact of different neural systems on monetary decision making in frequent poker gamblers, who vary in their degree of problem gambling. Fifteen frequent poker players, ranging from non-problem to high-problem gambling, and 15 non-gambler controls were scanned using functional magnetic resonance imaging (fMRI) while performing the Iowa Gambling Task (IGT). During IGT deck selection, between-group fMRI analyses showed that frequent poker gamblers exhibited higher ventral-striatal but lower dorsolateral prefrontal and orbitofrontal activations as compared with controls. Moreover, using functional connectivity analyses, we observed higher ventral-striatal connectivity in poker players, and in regions involved in attentional/motor control (posterior cingulate), visual (occipital gyrus) and auditory (temporal gyrus) processing. In poker gamblers, scores of problem gambling severity were positively associated with ventral-striatal activations and with the connectivity between the ventral-striatum seed and the occipital fusiform gyrus and the middle temporal gyrus. Present results are consistent with findings from recent brain imaging studies showing that gambling disorder is associated with heightened motivational-reward processes during monetary decision making, which may hamper one's ability to moderate his level of monetary risk taking. © 2015 Society for the Study of Addiction.
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with the best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
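The central objects here, the stationary distribution of a chain and its shift under a rank-one (single-row) perturbation, can be sketched as follows. The 3-state matrix is an illustrative assumption, and the perturbed distribution is recomputed directly rather than via the efficient update formulas the article refers to:

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi with pi P = pi and sum(pi) = 1,
    solved as a linear system (assumes an ergodic chain)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Rank-one perturbation: replacing one row of P models an intervention
# that changes the transition behaviour of a single network state.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
pi = stationary(P)

P2 = P.copy()
P2[2] = [0.8, 0.1, 0.1]       # hypothetical intervention at state 2
pi2 = stationary(P2)          # shifted steady state (direct recomputation)
```

Intervention design then amounts to choosing the row replacement that moves the steady-state mass toward desirable phenotype states.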
Variable context Markov chains for HIV protease cleavage site prediction.
Oğul, Hasan
2009-06-01
Deciphering HIV protease specificity and developing computational tools for detecting its cleavage sites in a protein polypeptide chain are very desirable for designing efficient and specific chemical inhibitors to prevent acquired immunodeficiency syndrome. In this study, we developed a generative model based on a generalization of variable order Markov chains (VOMC) for peptide sequences and adapted the model for predicting their cleavability by certain proteases. The new method, called variable context Markov chains (VCMC), attempts to identify context equivalence based on the evolutionary similarities between individual amino acids. It was applied to the HIV-1 protease cleavage site prediction problem and shown to outperform existing methods in terms of prediction accuracy on a common dataset. In general, the method is a promising tool for predicting cleavage sites of all proteases and can be applied to any kind of peptide classification problem as well.
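As a rough illustration of the variable-order idea (without the evolutionary context-merging that distinguishes VCMC), a count-based context model that backs off to shorter contexts might look like this; the symbols and training data are toy assumptions:

```python
from collections import defaultdict

def train_vomc(sequences, max_order=3):
    """Count next-symbol frequencies for every context up to max_order.
    A simplification: true variable-order Markov chains prune contexts,
    and the VCMC method additionally merges contexts by amino-acid
    similarity, which is not modeled here."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i, sym in enumerate(seq):
            for k in range(0, max_order + 1):
                if i - k < 0:
                    break
                counts[seq[i - k:i]][sym] += 1   # context of length k
    return counts

def predict(counts, context, max_order=3):
    """Predict the next symbol using the longest previously seen
    suffix of the given context (back-off)."""
    for k in range(min(max_order, len(context)), -1, -1):
        ctx = context[len(context) - k:]
        if ctx in counts:
            d = counts[ctx]
            return max(d, key=d.get)
    return None
```

For cleavage-site prediction, such a model would be trained separately on cleaved and uncleaved peptides and the two likelihoods compared.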
Directory of Open Access Journals (Sweden)
Jean B. Lasserre
2000-01-01
Full Text Available We consider the class of Markov kernels for which the weak or strong Feller property fails to hold at some discontinuity set. We provide a simple necessary and sufficient condition for existence of an invariant probability measure as well as a Foster-Lyapunov sufficient condition. We also characterize a subclass, the quasi (weak or strong) Feller kernels, for which the sequences of expected occupation measures share the same asymptotic properties as for (weak or strong) Feller kernels. In particular, it is shown that the sequences of expected occupation measures of strong and quasi strong-Feller kernels with an invariant probability measure converge setwise to an invariant measure.
Markov process of muscle motors
International Nuclear Information System (INIS)
Kondratiev, Yu; Pechersky, E; Pirogov, S
2008-01-01
We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors
Ellaway, Rachel H; Poulton, Terry; Jivram, Trupti
2015-01-01
In 2009, St George's University of London (SGUL) replaced their paper-based problem-based learning (PBL) cases with virtual patients for intermediate-level undergraduate students. This involved the development of Decision-Problem-Based Learning (D-PBL), a variation on progressive-release PBL that uses virtual patients instead of paper cases, and focuses on patient management decisions and their consequences. Using a case study method, this paper describes four years of developing and running D-PBL at SGUL from individual activities up to the ways in which D-PBL functioned as an educational system. A number of broad issues were identified: the importance of debates and decision-making in making D-PBL activities engaging and rewarding; the complexities of managing small group dynamics; the time taken to complete D-PBL activities; the changing role of the facilitator; and the erosion of the D-PBL process over time. A key point in understanding this work is the construction and execution of the D-PBL activity, as much of the value of this approach arises from the actions and interactions of students, their facilitators and the virtual patients rather than from the design of the virtual patients alone. At a systems level D-PBL needs to be periodically refreshed to retain its effectiveness.
Parallel algorithms for simulating continuous time Markov chains
Nicol, David M.; Heidelberger, Philip
1992-01-01
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
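Uniformization itself, converting a continuous-time chain into a discrete-time chain subordinated to a Poisson clock, can be sketched as follows for computing transient distributions. This is a sequential illustration of the underlying identity, not the parallel synchronization schemes compared in the paper:

```python
import math
import numpy as np

def uniformized_transient(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 expm(Q t) via uniformization.

    Q: generator matrix (rows sum to 0). The CTMC is replaced by the
    DTMC with kernel P = I + Q/Lam, sampled at Poisson(Lam * t) epochs:
        p(t) = sum_k e^{-Lam t} (Lam t)^k / k!  *  p0 P^k
    The series is truncated once the Poisson weights sum to 1 - tol.
    """
    Lam = max(-np.diag(Q))              # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam
    p = np.array(p0, dtype=float)       # p0 P^k, updated in the loop
    acc = np.zeros_like(p)
    weight = math.exp(-Lam * t)         # Poisson term k = 0
    k = 0
    total = 0.0
    while total < 1.0 - tol:
        acc += weight * p
        total += weight
        k += 1
        weight *= Lam * t / k
        p = p @ P
    return acc
```

Because all terms are nonnegative, the truncation error is bounded by the neglected Poisson mass, which is what makes the technique attractive as a synchronization basis for parallel simulation.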
Simulation based sequential Monte Carlo methods for discretely observed Markov processes
Neal, Peter
2014-01-01
Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
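The Gillespie (direct) method mentioned above can be sketched as follows; the pure-death example at the end is an illustrative assumption, not the processes studied in the paper:

```python
import random

def gillespie(x0, stoich, rate_fns, t_max, rng):
    """Gillespie's direct method for a continuous-time Markov jump process.

    x0:       initial state (tuple of species counts)
    stoich:   list of state-change vectors, one per reaction
    rate_fns: list of functions state -> propensity
    Returns the list of (time, state) jump points up to t_max.
    """
    t, x = 0.0, tuple(x0)
    traj = [(t, x)]
    while True:
        rates = [f(x) for f in rate_fns]
        total = sum(rates)
        if total == 0.0:
            break                       # absorbing state reached
        t += rng.expovariate(total)     # exponential waiting time
        if t > t_max:
            break
        u = rng.random() * total        # pick a reaction with probability
        cum = 0.0                       # proportional to its rate
        for j, r in enumerate(rates):
            cum += r
            if u < cum:
                break
        x = tuple(xi + d for xi, d in zip(x, stoich[j]))
        traj.append((t, x))
    return traj

# Illustrative example: pure death process, X -> X - 1 at rate 0.5 * X
rng = random.Random(0)
traj = gillespie((20,), [(-1,)], [lambda x: 0.5 * x[0]], 50.0, rng)
```

An SMC scheme of the kind described would run many such simulations under candidate parameter values and weight them against the discrete observations.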
Barbu, Vlad
2008-01-01
Semi-Markov processes are much more general and better adapted to applications than Markov ones because sojourn times in any state can be arbitrarily distributed, as opposed to the geometrically distributed sojourn times in the Markov case. This book is concerned with the estimation of discrete-time semi-Markov and hidden semi-Markov processes
International Nuclear Information System (INIS)
Schmieder, K.
1977-01-01
Standardization in nuclear engineering makes two demands on a legal instrument which is to make this standardization possible and to promote it in the nuclear licensing practice: On the basis of just one licence for a constructional part or a component, its applicability in any number of subsequent facility licensing procedures has to be warranted, and by virtue of its binding effect, standardization has to create a sufficiently strong protection of confidence among manufacturers, constructors and operators to offer sufficiently effective incentives for standardization. The nuclear preliminary decision pursuant to section 7 a of the Atomic Energy Act in the form of the component preliminary decision appears to be unsuitable as a legal instrument for standardization, as the preliminary decision refers exclusively to the construction of a concrete facility. For standardization in reactor engineering, the construction design approval appears to be basically the proper legal instrument on account of its legal structure as well as its economic effect. Its binding effect encounters a limitation with regard to third parties insofar as this limitation could again call the binding effect into question in a subsequent site-dependent nuclear licensing procedure. The legal structure of the extent of the binding effect, which is decisive for the suitability of the construction design approval, lies with the legislator. The following questions have to be regulated: Ought the applicant to have a legal claim on the granting of a construction design approval, or ought it to be at the discretion of the authorities; and secondly, the extent of the binding effect in terms of time, on the basis of the fixation of a time limit, the possibility of subsequent conditions to be imposed, or revocation. (orig./HP) [de
De Kruijf, J.
2007-01-01
Water management issues are often complex, unstructured problems. They are complex because they are part of a natural and human system which consists of many diverse, interdependent elements, e.g. upstream events influence the water system downstream, different interdependent government layers,
Grover, Jeff
2016-01-01
This book is an extension of the author’s first book and serves as a guide and manual on how to specify and compute 2-, 3-, & 4-Event Bayesian Belief Networks (BBN). It walks the learner through the steps of fitting and solving fifty BBN numerically, using mathematical proof. The author wrote this book primarily for naïve learners and professionals, with a proof-based academic rigor. The author's first book on this topic, a primer introducing learners to the basic complexities and nuances associated with learning Bayes’ theory and inverse probability for the first time, was meant for non-statisticians unfamiliar with the theorem - as is this book. This new book expands upon that approach and is meant to be a prescriptive guide for building BBN and executive decision-making for students and professionals; intended so that decision-makers can invest their time and start using this inductive reasoning principle in their decision-making processes. It highlights the utility of an algorithm that served as ...
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems
Ortiz-Bayliss, José Carlos; Amaya, Ivan; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo
2018-01-01
When solving constraint satisfaction problems (CSPs), it is common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. However, this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how the first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differen...
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
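For reference, the core MDP solution step that such sensitivity analyses wrap around can be sketched as standard value iteration (a generic textbook routine, not the authors' method; the toy problem in the test is an assumption):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Infinite-horizon discounted value iteration.

    P: (A, S, S) transition probabilities per action
    R: (A, S) expected immediate reward for taking action a in state s
    Returns the optimal value function and a greedy policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)          # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, Q.argmax(axis=0)
```

A probabilistic sensitivity analysis in the article's spirit would re-run such a solver over draws of P and R from their uncertainty distributions and tabulate how often each policy comes out optimal.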
Jones, Edmund; Masconi, Katya L.; Sweeting, Michael J.; Thompson, Simon G.; Powell, Janet T.
2018-01-01
Markov models are often used to evaluate the cost-effectiveness of new healthcare interventions but they are sometimes not flexible enough to allow accurate modeling or investigation of alternative scenarios and policies. A Markov model previously demonstrated that a one-off invitation to screening for abdominal aortic aneurysm (AAA) for men aged 65 y in the UK and subsequent follow-up of identified AAAs was likely to be highly cost-effective at thresholds commonly adopted in the UK (£20,000 to £30,000 per quality adjusted life-year). However, new evidence has emerged and the decision problem has evolved to include exploration of the circumstances under which AAA screening may be cost-effective, which the Markov model is not easily able to address. A new model to handle this more complex decision problem was needed, and the case of AAA screening thus provides an illustration of the relative merits of Markov models and discrete event simulation (DES) models. An individual-level DES model was built using the R programming language to reflect possible events and pathways of individuals invited to screening v. those not invited. The model was validated against key events and cost-effectiveness, as observed in a large, randomized trial. Different screening protocol scenarios were investigated to demonstrate the flexibility of the DES. The case of AAA screening highlights the benefits of DES, particularly in the context of screening studies.
The Decision of Information Safety Problems at Processing of the Biometric Personal Data
Directory of Open Access Journals (Sweden)
Y. G. Gorshkov
2010-03-01
Full Text Available The requirements imposed on the transfer of personal biometric information in communication systems and networks under Federal Law № 152 "Personal Data" are defined. Shortcomings of existing solutions for protecting such biometric data, including test speech information (with vocal tract parameters) and acoustic signals of heart tones and murmurs, are considered, using as an example telemedicine systems built over public telephone channels and Wi-Fi wireless networks. Directions for further work on the security of personal biometric data transferred in telecommunication systems are formulated.
Timed Comparisons of Semi-Markov Processes
DEFF Research Database (Denmark)
Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio
2018-01-01
We study semi-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being "faster than" between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...
Inhomogeneous Markov point processes by transformation
DEFF Research Database (Denmark)
Jensen, Eva B. Vedel; Nielsen, Linda Stougaard
2000-01-01
We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail.
Markov-modulated and feedback fluid queues
Scheinhardt, Willem R.W.
1998-01-01
In the last twenty years the field of Markov-modulated fluid queues has received considerable attention. In these models a fluid reservoir receives and/or releases fluid at rates which depend on the actual state of a background Markov chain. In the first chapter of this thesis we give a short
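A minimal simulation sketch of such a fluid queue, with the background chain's states, holding rates and drifts as illustrative assumptions:

```python
import random

def simulate_fluid_queue(hold_rate, next_state, drift, t_max, rng,
                         x0=0.0, e0=0):
    """Markov-modulated fluid queue: a reservoir whose net in/outflow
    rate drift[e] depends on the state e of a background Markov chain.

    hold_rate[e]:        exponential holding rate in environment state e
    next_state(e, rng):  samples the next environment state
    The fluid level is floored at 0 (the reservoir cannot go negative).
    Returns the level at time t_max.
    """
    t, x, e = 0.0, x0, e0
    while True:
        dt = rng.expovariate(hold_rate[e])
        if t + dt >= t_max:
            # final partial segment up to t_max
            return max(0.0, x + drift[e] * (t_max - t))
        x = max(0.0, x + drift[e] * dt)   # piecewise-linear level
        t += dt
        e = next_state(e, rng)
```

Analytical treatments solve for the stationary level distribution instead of simulating, but a sketch like this makes the model's mechanics concrete.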
Framing of decision problem in short and long term and probability perception
Directory of Open Access Journals (Sweden)
Anna Wielicka-Regulska
2010-01-01
Consumer preferences depend on problem framing and time perspective. For the experiment's participants, avoiding losses was judged less probable in a distant time perspective than in the near term; conversely, achieving gains in the near future was judged less probable than in the remote future. One may therefore expect different reactions when a problem is presented in terms of gains rather than losses. This can be exploited in promoting highly desirable social behaviours such as saving for retirement, keeping a good diet, investing in learning, and other advantageous activities that consumers usually postpone.
Directory of Open Access Journals (Sweden)
Vatutin Eduard
2017-12-01
The article analyses the effectiveness of heuristic methods with limited depth-first search for obtaining decisions in the test problem of finding the shortest path in a graph. It briefly describes a group of methods based on limiting the number of branches of the combinatorial search tree and the depth of the analysed subtree. A methodology for comparing experimental estimates of solution quality is considered, based on computational experiments, run on the BOINC platform, with samples of pseudo-random graphs of selected vertex and arc counts. The experimental results identify the areas of preferable usage of the selected subset of heuristic methods depending on the problem size and the strength of the constraints. It is shown that the considered pair of methods is ineffective for this problem and is significantly inferior, in solution quality, to ant colony optimization and its modification with combinatorial returns.
Directory of Open Access Journals (Sweden)
Chih-Kun Ke
2012-01-01
In business enterprises, especially in the manufacturing industry, various problem situations may occur during the production process. A situation denotes an evaluation point for determining the status of a production process. A problem may occur when there is a discrepancy between the actual situation and the desired one, and a problem-solving process is then initiated to achieve the desired situation. In this process, determining which action to take to resolve the situation becomes an important issue. This work therefore uses a selection approach for an optimized problem-solving process to assist workers in taking a reasonable action. A grey relational utility model and multicriteria decision analysis are used to determine the optimal selection order of candidate actions. The selection order is presented to the worker as an adaptive recommended solution, and the worker chooses a reasonable problem-solving action based on it. This work uses a high-tech company's knowledge-base log as the analysis data. Experimental results demonstrate that the proposed selection approach is effective.
Classification Using Markov Blanket for Feature Selection
DEFF Research Database (Denmark)
Zeng, Yifeng; Luo, Jian
2009-01-01
Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method of Markov blanket induction for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using Markov blanket induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance.
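In a Bayesian network, the Markov blanket of a node consists of its parents, its children, and its children's other parents (spouses); conditioned on the blanket, the node is independent of the rest of the network. The helper below is an illustrative sketch of that definition (not the paper's induction algorithm, which learns the blanket from data); the toy network is invented for the example.

```python
def markov_blanket(parents, node):
    """Markov blanket of a node in a Bayesian network given as a DAG:
    its parents, its children, and its children's other parents.

    parents : dict mapping each node to the set of its parent nodes
    """
    children = {v for v, ps in parents.items() if node in ps}
    spouses = {p for c in children for p in parents[c]} - {node}
    return set(parents.get(node, set())) | children | spouses

# Toy network: A -> C <- B, C -> D
parents = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(sorted(markov_blanket(parents, "C")))  # → ['A', 'B', 'D']
```

For feature selection, one would learn the network around the class variable from data and keep only the blanket members as predictors.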
Quantitative risk stratification in Markov chains with limiting conditional distributions.
Chan, David C; Pollett, Philip K; Weinstein, Milton C
2009-01-01
Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
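The limiting conditional (quasi-stationary) distribution described above can be computed as the normalized left Perron eigenvector of the transition matrix restricted to the transient (alive) states. A minimal NumPy sketch, with an invented three-state progressive-disease matrix whose rows sum to less than one because of the absorbing death state:

```python
import numpy as np

def limiting_conditional_distribution(P_transient):
    """Quasi-stationary distribution of a Markov chain with death.

    P_transient : sub-stochastic transition matrix restricted to the
                  transient disease states (rows sum to < 1).
    Returns the normalized left eigenvector for the dominant eigenvalue,
    i.e. the long-run state mix among survivors.
    """
    eigvals, eigvecs = np.linalg.eig(P_transient.T)  # left eigenvectors of P
    k = np.argmax(eigvals.real)                      # dominant (Perron) eigenvalue
    v = np.abs(eigvecs[:, k].real)
    return v / v.sum()

# Invented example: mild -> moderate -> severe, each state may also die
P = np.array([[0.90, 0.05, 0.00],
              [0.00, 0.85, 0.05],
              [0.00, 0.00, 0.80]])
qsd = limiting_conditional_distribution(P)
print(qsd)  # equilibrium proportions of survivors per state
```

Note the example satisfies the existence condition stated in the abstract: the earliest state has the lowest combined risk of progression or death.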
A Bayesian Markov geostatistical model for estimation of hydrogeological properties
International Nuclear Information System (INIS)
Rosen, L.; Gustafson, G.
1996-01-01
A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden
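The BayMar software itself is not reproduced here, but the Bayesian updating step it relies on can be illustrated with a standard conjugate Dirichlet (pseudo-count) update, which is nonparametric in the sense that no distributional form is imposed on the classes; the lithology classes and numbers below are invented:

```python
def bayes_update(prior_alpha, counts):
    """Conjugate Dirichlet update: combine prior pseudo-counts (e.g. expert
    judgment on lithology classes) with observed borehole counts and return
    posterior mean probabilities. Class names and numbers are illustrative.
    """
    post = {k: prior_alpha[k] + counts.get(k, 0) for k in prior_alpha}
    total = sum(post.values())
    return {k: v / total for k, v in post.items()}

# Expert prior: granite thought twice as likely as diorite or gabbro
prior = {"granite": 4, "diorite": 2, "gabbro": 2}
# New borehole observations become available
obs = {"granite": 10, "diorite": 6}
posterior = bayes_update(prior, obs)
print(posterior["granite"])  # → (4+10)/24 ≈ 0.583
```

Each time new information arrives, the posterior can serve as the next prior, matching the sequential updating described in the abstract.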
Robust filtering and prediction for systems with embedded finite-state Markov-Chain dynamics
International Nuclear Information System (INIS)
Pate, E.B.
1986-01-01
This research developed new methodologies for the design of robust near-optimal filters/predictors for a class of system models that exhibit embedded finite-state Markov-chain dynamics. These methodologies are developed through the concepts and methods of stochastic model building (including time-series analysis), game theory, decision theory, and filtering/prediction for linear dynamic systems. The methodology is based on the relationship between the robustness of a class of time-series models and the quantization applied to the time series as part of the model identification process. This relationship is exploited by utilizing the concept of an equivalence, through invariance of spectra, between the class of Markov-chain models and the class of autoregressive moving average (ARMA) models. This spectral equivalence permits a straightforward implementation of the desirable robust properties of the Markov-chain approximation in a class of models which may be applied in linear-recursive form in a linear Kalman filter/predictor structure. The linear filter/predictor structure is shown to provide asymptotically optimal estimates of states which represent one or more integrations of the Markov-chain state. The development of a new saddle-point theorem for a game based on the Markov-chain model structure gives rise to a technique for determining a worst-case Markov-chain process, upon which a robust filter/predictor design is based.
The Bacterial Sequential Markov Coalescent.
De Maio, Nicola; Wilson, Daniel J
2017-05-01
Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC), an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is
Hesitant fuzzy soft sets with application in multicriteria group decision making problems.
Wang, Jian-qiang; Li, Xin-E; Chen, Xiao-hong
2015-01-01
Soft sets have been regarded as a useful mathematical tool for dealing with uncertainty. In recent years, many scholars have shown intense interest in soft sets and have extended standard soft sets to intuitionistic fuzzy soft sets, interval-valued fuzzy soft sets, and generalized fuzzy soft sets. In this paper, hesitant fuzzy soft sets are defined by combining fuzzy soft sets with hesitant fuzzy sets, and some operations on hesitant fuzzy soft sets based on the Archimedean t-norm and t-conorm are defined. Four aggregation operators, namely the HFSWA, HFSWG, GHFSWA, and GHFSWG operators, are given. Based on these operators, a multicriteria group decision-making approach with hesitant fuzzy soft sets is proposed. To demonstrate its accuracy and applicability, the approach is finally applied to a numerical example.
A hybrid model using decision tree and neural network for credit scoring problem
Directory of Open Access Journals (Sweden)
Amir Arzy Soltan
2012-08-01
Nowadays credit scoring is an important issue for financial and monetary organizations, with a substantial impact on reducing customer attraction risks: identifying high-risk customers can reduce final costs. Accurate classification of customers with low type-1 and type-2 errors has been investigated in many studies. The primary objective of this paper is to develop a new method that chooses the best neural network architecture from single-hidden-layer MLPs, multiple-hidden-layer MLPs, RBFNs, and decision trees, and ensembles them with voting methods. The proposed method is run on Australian credit data and on data from a private bank in Iran, the Export Development Bank of Iran, and the results are used to support decisions under low customer attraction risk.
Applying Markov Chains for NDVI Time Series Forecasting of Latvian Regions
Directory of Open Access Journals (Sweden)
Stepchenko Arthur
2015-12-01
Time series of earth-observation-based estimates of vegetation inform about variations in vegetation at the scale of Latvia. A vegetation index is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation. The NDVI index is an important variable for vegetation forecasting and for managing various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring, and forest fire detection. In this paper, we make a one-step-ahead prediction of a 7-daily NDVI time series using Markov chains. A Markov chain was chosen because it is a sequence of random variables in which each variable occupies some state, and the chain specifies the probabilities of moving from one state to the others.
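A minimal sketch of this kind of approach (assumed details, not taken from the paper: the NDVI series is discretized into a few states, transition probabilities are estimated by counting, and the one-step-ahead forecast is the most probable next state; the series below is invented):

```python
from collections import Counter

def fit_transition_matrix(states, n_states):
    """Estimate Markov transition probabilities from a discretized series
    by counting observed one-step transitions."""
    counts = [Counter() for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[counts[i][j] / max(1, sum(counts[i].values()))
             for j in range(n_states)] for i in range(n_states)]

def predict_next(P, current):
    """One-step-ahead forecast: the most probable next state."""
    return max(range(len(P)), key=lambda j: P[current][j])

# Hypothetical NDVI series discretized into 3 bins (0=low, 1=mid, 2=high)
series = [0, 0, 1, 1, 2, 2, 2, 1, 0, 1, 2, 2, 2, 1]
P = fit_transition_matrix(series, 3)
print(predict_next(P, 2))  # → 2 (high NDVI tends to persist in this toy series)
```

In practice the state space and bin boundaries would be chosen from the empirical NDVI distribution rather than fixed a priori.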
Female employment in regions of the North of Russia: problems and decision ways
Directory of Open Access Journals (Sweden)
Vera Eduardovna Toskunina
2013-12-01
This article analyses the state of female employment in the regions of the Russian North. The research hypothesis is that the possibilities of female employment in northern regions of Russia are considerably reduced by the raw-material orientation of the branch structure of the economy. This aggravates the problem of female unemployment and makes additional adjustment measures by the executive authorities necessary. The authors identify the major factors influencing women's employment opportunities in a region. On the basis of an analysis of statistical data, normative regulation, and policy documents, tools are substantiated and recommendations are given for reducing the existing problems of female employment in the northern subjects of the Russian Federation.
Inference with constrained hidden Markov models in PRISM
DEFF Research Database (Denmark)
Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp
2010-01-01
A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints, and present constraint solving techniques for efficient inference in which constraints such as all_different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.
Sentiment classification technology based on Markov logic networks
He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe
2016-07-01
With diverse online media emerging, there is growing concern with the sentiment classification problem. At present, text sentiment classification mainly uses supervised machine learning methods, which exhibit a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain, multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer from labeled text sentiment classification, knowledge was successfully transferred into other domains, and the precision of sentiment classification in the target domain was improved. The experimental results revealed the following: (1) the MLN-based model demonstrated higher precision than the single individual learning model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model can significantly improve the precision and efficiency of text sentiment classification.
Directory of Open Access Journals (Sweden)
Elaheh Abazarian
2015-01-01
Conclusion: The results showed that teaching problem-solving and decision-making skills was very effective in reducing diabetic patients' depression and anxiety.
Zhu, Zheng; Andresen, Juan Carlos; Janzen, Katharina; Katzgraber, Helmut G.
2013-03-01
We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free graphs in a magnetic field. Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show, in agreement with analytical calculations, that the system exhibits a de Almeida-Thouless line. Furthermore, we study avalanches in the system at zero temperature to see if the system displays self-organized criticality. This would suggest that damage (avalanches) can spread across the whole system with nonzero probability, i.e., that Boolean decision problems on scale-free networks with competing interactions are fragile when not in thermal equilibrium.
Zhu, Zheng; Andresen, Juan Carlos; Moore, M. A.; Katzgraber, Helmut G.
2014-02-01
We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free networks in an external bias (magnetic field). Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First, we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show that the system has a spin-glass phase in a field, i.e., exhibits a de Almeida-Thouless line. Furthermore, we study avalanche distributions when the system is driven by a field at zero temperature to test if the system displays self-organized criticality. Numerical results suggest that avalanches (damage) can spread across the whole system with nonzero probability when the decay exponent of the interaction degree is less than or equal to 2, i.e., that Boolean decision problems on scale-free networks with competing interactions can be fragile when not in thermal equilibrium.
Mazher, Wamidh Jalil; Ibrahim, Hadeel T.; Ucan, Osman N.; Bayat, Oguz
2018-03-01
This paper aims to design a drone swarm network by employing free-space optical (FSO) communication for detecting and making deep decisions on topological problems (e.g., an oil pipeline leak), where deep decision making requires the highest image resolution. Drones have been widely used for monitoring and detecting problems in industrial applications, with the drone sending images from the on-air camera video stream using radio frequency (RF) signals. To obtain higher-resolution images, higher bandwidth (BW) is required, and the current study proposes the use of an FSO communication system to provide that bandwidth. Moreover, the number of drones required to survey a large physical area exceeds the capabilities of RF technologies. The drones are configured as a V-shaped swarm with one leading drone, called the mother drone (DM). The optical decode-and-forward (DF) technique is used to send the optical payloads of all drones in the V-shaped swarm to a single ground station through the DM. Furthermore, the transmitted optical power (Pt) required for each drone is derived from the threshold outage probability of FSO link failure among the onboard optical-DF drones, and the bit error rate of the optical payload is calculated based on optical-DF onboard processing. Finally, the number of drones required for different image resolutions is optimized based on the size of the considered topological area.
Tavakkoli-Moghaddam, Reza; Forouzanfar, Fateme; Ebrahimnejad, Sadoullah
2013-07-01
This paper considers a single-sourcing network design problem for a three-level supply chain. For the first time, a novel mathematical model is presented that simultaneously considers risk pooling, inventory held at distribution centers (DCs) under demand uncertainty, several alternatives for transporting the product between facilities, and routing of vehicles from distribution centers to customers in a stochastic supply chain system. The problem is formulated as a bi-objective stochastic mixed-integer nonlinear programming model. The model determines the number of opened distribution centers, their locations and capacity levels, and the allocation of customers to distribution centers and of distribution centers to suppliers. It also determines the inventory control decisions on the amount of ordered products and the safety stock at each opened DC, and selects a vehicle type for transportation. Moreover, it determines routing decisions, such as the vehicle routes that start from an opened distribution center, serve its allocated customers, and return to that distribution center. All of this is done so that the total system cost and the total transportation time are minimized. The Lingo software is used to solve the presented model, and the computational results are illustrated in this paper.
Schmidt games and Markov partitions
International Nuclear Information System (INIS)
Tseng, Jimmy
2009-01-01
Let T be a C^2-expanding self-map of a compact, connected, C^∞ Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x_0 in M, the set of points whose forward orbit closures miss x_0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions
International Nuclear Information System (INIS)
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas
2014-01-01
One of the most important goals in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by a maintenance plan or policy, we seek the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms, and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation
Decision heuristic or preference? Attribute non-attendance in discrete choice problems.
Heidenreich, Sebastian; Watson, Verity; Ryan, Mandy; Phimister, Euan
2018-01-01
This paper investigates whether respondents' choice not to consider all characteristics of a multiattribute health service may represent preferences. Over the last decade, an increasing number of studies account for attribute non-attendance (ANA) when using discrete choice experiments to elicit individuals' preferences. Most studies assume such behaviour is a heuristic and therefore uninformative. This assumption may result in misleading welfare estimates if ANA reflects preferences. This is the first paper to assess whether ANA is a heuristic or a genuine preference without relying on respondents' self-stated motivation, and the first study to explore this question within a health context. Based on findings from cognitive psychology, we expect that familiar respondents are less likely than unfamiliar respondents to use a decision heuristic to simplify choices. We employ a latent class model of discrete choice experiment data concerned with National Health Service managers' preferences for support services that assist with performance concerns. We present quantitative and qualitative evidence that in our study ANA mostly represents preferences. We also show that wrong assumptions about ANA result in inadequate welfare measures that can lead to suboptimal policy advice. Future research should proceed with caution when assuming that ANA is a heuristic. Copyright © 2017 John Wiley & Sons, Ltd.
Application of Bayesian statistical decision theory for a maintenance optimization problem
International Nuclear Information System (INIS)
Procaccia, H.; Cordier, R.; Muller, S.
1997-01-01
Reliability-centered maintenance (RCM) is a rational approach that can be used to identify the equipment of facilities that may turn out to be critical with respect to safety, to availability, or to maintenance costs. It is for these critical pieces of equipment alone that a corrective (one waits for a failure) or preventive (the type and frequency are specified) maintenance policy is established. But this approach has limitations: - when there is little operating feedback and it concerns rare events affecting a piece of equipment judged critical on a priori grounds (how is it possible, in this case, to decide whether or not it is critical, since there is conflict between the gravity of the potential failure and its frequency?); - when the aim is to propose an optimal maintenance frequency for a critical piece of equipment - changing the maintenance frequency hitherto applied may cause a significant drift in the observed reliability of the equipment, an aspect not generally taken into account in the RCM approach. In these two situations, expert judgments can be combined with the available operating feedback (Bayesian approach) and the combination of risk of failure and economic consequences taken into account (statistical decision theory) to achieve a true optimization of maintenance policy choices. This paper presents an application on the maintenance of diesel generator components
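The combination of expert judgment with sparse operating feedback described above can be illustrated with the standard conjugate Gamma-Poisson update for a failure rate; the prior parameters and observation numbers below are invented, not taken from the paper:

```python
def gamma_poisson_update(alpha, beta, failures, hours):
    """Conjugate Bayesian update of a failure rate.

    Prior: rate ~ Gamma(alpha, beta), encoding expert judgment as
    `alpha` pseudo-failures over `beta` pseudo-hours.
    Data : `failures` events observed over `hours` operating hours.
    Returns the posterior mean failure rate (failures per hour).
    """
    return (alpha + failures) / (beta + hours)

# Expert prior: roughly 1 failure per 10,000 h (alpha=1, beta=10000)
# Sparse operating feedback: 2 failures observed in 5,000 h
rate = gamma_poisson_update(1.0, 10_000.0, 2, 5_000.0)
print(rate)  # → 0.0002
```

The posterior mean can then feed a cost model weighing the risk of failure against maintenance cost, which is where the decision-theoretic part of the approach enters.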
Finite Markov processes and their applications
Iosifescu, Marius
2007-01-01
A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models. The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic chains
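For absorbing chains, a standard computation (illustrated here as a generic NumPy sketch, not an excerpt from the book) uses the fundamental matrix N = (I - Q)^{-1}, where Q is the transient-to-transient block of the transition matrix; N gives expected visit counts and N·1 the expected number of steps to absorption. The two-transient-state chain below is invented:

```python
import numpy as np

# Transient-to-transient block of an invented absorbing chain with
# transient states {0, 1} and one absorbing state (rows sum to < 1).
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits
t = N @ np.ones(2)                # expected steps to absorption per start
print(t)                          # ≈ [5, 5] for this particular Q
```

The same matrix N also yields absorption probabilities when multiplied by the transient-to-absorbing block.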
Markov chains models, algorithms and applications
Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen
2013-01-01
This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data. This book consists of eight chapters. Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods
Markov chains analytic and Monte Carlo computations
Graham, Carl
2014-01-01
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies. A detailed and rigorous presentation of Markov chains with discrete time and state space. An appendix presenting probabilistic notions that are nec
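The analytic/Monte Carlo pairing in the title can be illustrated in a few lines: compute a stationary distribution exactly as a left eigenvector, then estimate the same distribution by simulating the chain and recording visit frequencies. The two-state transition matrix is an invented example, not one from the book:

```python
import random
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])          # invented two-state chain

# Analytic: stationary distribution = left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.abs(V[:, np.argmax(w.real)].real)
pi /= pi.sum()

# Monte Carlo: long-run visit frequencies of a simulated trajectory
rng = random.Random(42)
state, visits = 0, np.zeros(2)
for _ in range(200_000):
    visits[state] += 1
    state = 0 if rng.random() < P[state][0] else 1

print(pi, visits / visits.sum())   # the two estimates should roughly agree
```

For this matrix the exact answer is pi = (5/6, 1/6), and the simulated frequencies converge to it as the trajectory length grows.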
Directory of Open Access Journals (Sweden)
Zied Hajej
2015-01-01
Full Text Available Due to the expense of production equipment, many manufacturers lease production equipment with a warranty period during a finite leasing horizon, rather than purchasing it. The lease contract contains the possibility of obtaining an extended warranty for a given additional cost. In this paper, based on the forecasting production/maintenance optimization problem, we develop a mathematical model to study the lease contract with basic and extended warranty based on a win-win relationship between the lessee and the lessor. The influence of the production rates on equipment degradation, and consequently on the total cost borne by each side during the finite leasing horizon, is studied in order to determine a theoretical condition under which a compromise-pricing zone exists under different possible maintenance policies.
Corporate Income Taxation: Selected Problems and Decisions. The Case of Ukraine
Directory of Open Access Journals (Sweden)
Kateryna Proskura
2016-04-01
Full Text Available This paper is devoted to the issues of corporate income taxation in Ukraine and finding ways to resolve them in the context of European integration. The aim of this paper is to demonstrate ways to improve corporate income taxation on the basis of balancing the interests of taxpayers against those of the government. The paper will highlight the key issues of corporate income taxation in Ukraine with its large share of unprofitable enterprises, unequal regulations for different corporate taxpayers and the requirement to pay tax advances even where there is an absence of taxable income. Based on our analysis, the causes of the origin and deepening problems of corporate income taxation in Ukraine will be demonstrated. A comparative analysis of income taxation in Poland and Ukraine was performed. It is believed that some elements of the Polish experience in the taxation of income can be applied to Ukraine.
Directory of Open Access Journals (Sweden)
A. V. Skrypnikov
2015-01-01
working them out in detail as far as the development and clarification of other subsystems of management information are concerned, i.e. resolving questions of the development of complex hardware under conditions of incomplete data about the information base of the system.
A scaling analysis of a cat and mouse Markov chain
Litvak, Nelli; Robert, Philippe
Motivated by an original on-line page-ranking algorithm, starting from an arbitrary Markov chain $(C_n)$ on a discrete state space ${\cal S}$, a Markov chain $(C_n,M_n)$ on the product space ${\cal S}^2$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain
On the entropy of a hidden Markov process.
Jacquet, Philippe; Seroussi, Gadiel; Szpankowski, Wojciech
2008-05-01
We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order.
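The forward recursion implicit in the product-of-random-matrices representation above lends itself to a simple Monte Carlo estimate of the HMP entropy rate. The sketch below is our own minimal illustration (the function name and parameter values are assumptions, not from the paper): it simulates a binary symmetric Markov source with switch probability `p` observed through a binary symmetric channel with crossover probability `eps`, and estimates the entropy rate as the normalized negative log-probability of the observed sequence.

```python
import numpy as np

def hmp_entropy_rate_mc(p=0.3, eps=0.1, n=50_000, seed=0):
    """Monte Carlo estimate of the entropy rate of a hidden Markov process:
    a binary symmetric Markov chain (switch prob. p) observed through a
    binary symmetric channel (crossover prob. eps)."""
    rng = np.random.default_rng(seed)
    # simulate the Markov source and the channel
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    flips = rng.random(n) < p
    for t in range(1, n):
        x[t] = x[t - 1] ^ flips[t]
    y = x ^ (rng.random(n) < eps)
    # forward recursion: alpha[i] proportional to P(y_1..t, X_t = i)
    P = np.array([[1 - p, p], [p, 1 - p]])          # source transitions
    E = np.array([[1 - eps, eps], [eps, 1 - eps]])  # channel emissions
    alpha = np.array([0.5, 0.5]) * E[:, y[0]]
    loglik = np.log2(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, n):
        alpha = (alpha @ P) * E[:, y[t]]
        s = alpha.sum()
        loglik += np.log2(s)   # accumulate in log-domain to avoid underflow
        alpha /= s
    return -loglik / n         # estimated entropy rate in bits per symbol
```

By the Shannon-McMillan-Breiman theorem the estimate concentrates around the true entropy rate for large `n`; since conditioning on the channel noise gives `H(Y) >= H(X)`, the estimate should land between the binary entropy of `p` and 1 bit.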
Robust Dynamics and Control of a Partially Observed Markov Chain
International Nuclear Information System (INIS)
Elliott, R. J.; Malcolm, W. P.; Moore, J. P.
2007-01-01
In a seminal paper, Martin Clark (Communications Systems and Random Process Theory, Darlington, 1977, pp. 721-734, 1978) showed how the filtered dynamics giving the optimal estimate of a Markov chain observed in Gaussian noise can be expressed using an ordinary differential equation. These results offer substantial benefits in filtering and in control, often simplifying the analysis and in some settings providing numerical benefits, see, for example Malcolm et al. (J. Appl. Math. Stoch. Anal., 2007, to appear). Clark's method uses a gauge transformation and, in effect, solves the Wonham-Zakai equation using variation of constants. In this article, we consider the optimal control of a partially observed Markov chain. This problem is discussed in Elliott et al. (Hidden Markov Models Estimation and Control, Applications of Mathematics Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of Clark are used to compute forward in time dynamics for a simplified adjoint process. A stochastic minimum principle is established
Hidden Markov models in automatic speech recognition
Wrzoskowicz, Adam
1993-11-01
This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMM's designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
Stability and perturbations of countable Markov maps
Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas
2018-04-01
Let $T$ and $T_\varepsilon$, $\varepsilon > 0$, be countable Markov maps such that the branches of $T_\varepsilon$ converge pointwise to the branches of $T$ as $\varepsilon \to 0$. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent, etc.) of the topological conjugacy between $T_\varepsilon$ and $T$ as $\varepsilon \to 0$. This is a well-understood problem for maps with finitely many branches, and the quantities are stable for small $\varepsilon$, that is, they converge to their expected values as $\varepsilon \to 0$. For the infinite-branch case their stability might be expected to fail, but we prove that even in the infinite-branch case the quantity is stable under some natural regularity assumptions on $T_\varepsilon$ and $T$ (under which, for instance, the Hölder exponent of the conjugacy fails to be stable). Our assumptions apply, for example, in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter $\alpha$. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite-branch case. Dedicated to the memory of Bernd O Stratmann
Single-trial EEG-informed fMRI analysis of emotional decision problems in hot executive function.
Guo, Qian; Zhou, Tiantong; Li, Wenjie; Dong, Li; Wang, Suhong; Zou, Ling
2017-07-01
Executive function refers to conscious control of psychological processes related to thinking and action. Emotional decision making is a part of hot executive function and combines emotional and logical elements. As an important social adaptation ability, it has received more and more attention in recent years. Gambling tasks are well suited to the study of emotional decision making. Because fMRI studies of gambling tasks have reported not entirely consistent brain activation regions, this study adopted EEG-fMRI fusion technology to reveal the brain neural activity related to feedback stimuli. In this study, an EEG-informed fMRI analysis was applied to process simultaneous EEG-fMRI data. First, relative power-spectrum analysis and the K-means clustering method were performed separately to extract EEG-fMRI features. Then, general linear models were constructed using the fMRI data, with different EEG features as regressors. The results showed that for win versus loss stimuli, the activated regions covered the caudate, the ventral striatum (VS), the orbital frontal cortex (OFC), and the cingulate. Wider activation areas associated with reward and punishment were revealed by the EEG-fMRI integration analysis than by the conventional fMRI analysis, including the posterior cingulate and the OFC. The VS and the medial prefrontal cortex (mPFC) were found when EEG power features were used as regressors of the GLM, compared with results entering the amplitudes of feedback-related negativity (FRN) as regressors. Furthermore, the brain region activation intensity was strongest when theta-band power was used as a regressor, compared with the other two fusion results. EEG-informed fMRI analysis can thus more accurately depict the whole-brain activation map for emotional decision problems.
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
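A stripped-down version of this idea can be sketched in a few lines. The code below is our own illustration: it keeps only the row-wise Dirichlet posterior over transition matrices and omits the paper's reversibility and sparsity constraints, which are the part that actually requires the Gibbs sampler.

```python
import numpy as np

def sample_transition_matrices(series, n_states, n_samples=1000, seed=0):
    """Draw transition matrices from the row-wise Dirichlet posterior given
    an observed state series (uniform Dirichlet(1,...,1) prior per row).
    Simplification: no reversibility or sparsity constraints."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1                     # transition counts n_ij
    samples = np.empty((n_samples, n_states, n_states))
    for k in range(n_samples):
        for i in range(n_states):
            # posterior of row i is Dirichlet(n_i1 + 1, ..., n_iK + 1)
            samples[k, i] = rng.dirichlet(counts[i] + 1.0)
    return samples
```

The spread of the sampled matrices (or of any observable computed from them) then quantifies the parameter uncertainty of the estimated Markov model.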
Generated dynamics of Markov and quantum processes
Janßen, Martin
2016-01-01
This book presents Markov and quantum processes as two sides of a coin called generated stochastic processes. It deals with quantum processes as reversible stochastic processes generated by one-step unitary operators, while Markov processes are irreversible stochastic processes generated by one-step stochastic operators. The characteristic feature of quantum processes are oscillations, interference, lots of stationary states in bounded systems and possible asymptotic stationary scattering states in open systems, while the characteristic feature of Markov processes are relaxations to a single stationary state. Quantum processes apply to systems where all variables, that control reversibility, are taken as relevant variables, while Markov processes emerge when some of those variables cannot be followed and are thus irrelevant for the dynamic description. Their absence renders the dynamic irreversible. A further aim is to demonstrate that almost any subdiscipline of theoretical physics can conceptually be put in...
Confluence reduction for Markov automata (extended version)
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models
Wood, Nathan; Jones, Jeanne; Schelling, John; Schmidtlein, Mathew
2014-01-01
Tsunami vertical-evacuation (TVE) refuges can be effective risk-reduction options for coastal communities with local tsunami threats but no accessible high ground for evacuations. Deciding where to locate TVE refuges is a complex risk-management question, given the potential for conflicting stakeholder priorities and multiple, suitable sites. We use the coastal community of Ocean Shores (Washington, USA) and the local tsunami threat posed by Cascadia subduction zone earthquakes as a case study to explore the use of geospatial, multi-criteria decision analysis for framing the locational problem of TVE siting. We demonstrate a mixed-methods approach that uses potential TVE sites identified at community workshops, geospatial analysis to model changes in pedestrian evacuation times for TVE options, and statistical analysis to develop metrics for comparing population tradeoffs and to examine influences in decision making. Results demonstrate that no one TVE site can save all at-risk individuals in the community and each site provides varying benefits to residents, employees, customers at local stores, tourists at public venues, children at schools, and other vulnerable populations. The benefit of some proposed sites varies depending on whether or not nearby bridges will be functioning after the preceding earthquake. Relative rankings of the TVE sites are fairly stable under various criteria-weighting scenarios but do vary considerably when comparing strategies to exclusively protect tourists or residents. The proposed geospatial framework can serve as an analytical foundation for future TVE siting discussions.
Garland, Ann F; Taylor, Robin; Brookman-Frazee, Lauren; Baker-Ericzen, Mary; Haine-Schlagel, Rachel; Liu, Yi Hui; Wong, Sarina
2015-06-01
Race/ethnic disparities in utilization of children's mental health care have been well documented and are particularly concerning given the long-term risks of untreated mental health problems (Institute of Medicine, 2003; Kessler et al. Am J Psychiatry 152:1026-1032, 1995). Research investigating the higher rates of unmet need among race/ethnic minority youths has focused primarily on policy, fiscal, and individual child or family factors that can influence service access and use. Alternatively, this study examines provider behavior as a potential influence on race/ethnic disparities in mental health care. The goal of the study was to examine whether patient (family) race/ethnicity influences physician diagnostic and treatment decision-making for childhood disruptive behavior problems. The study utilized an internet-based video vignette with corresponding survey of 371 randomly selected physicians from across the USA representing specialties likely to treat these patients (pediatricians, family physicians, general and child psychiatrists). Participants viewed a video vignette in which only race/ethnicity of the mother randomly varied (non-Hispanic White, Hispanic, and African American) and then responded to questions about diagnosis and recommended treatments. Physicians assigned diagnoses such as oppositional defiant disorder (48 %) and attention deficit disorder (63 %) to the child, but there were no differences in diagnosis based on race/ethnicity. The majority of respondents recommended psychosocial treatment (98 %) and/or psychoactive medication treatment (60 %), but there were no significant differences based on race/ethnicity. Thus, in this study using mock patient stimuli and controlling for other factors, such as insurance coverage, we did not find major differences in physician diagnostic or treatment decision-making based on patient race/ethnicity.
Semi-Markov Arnason-Schwarz models.
King, Ruth; Langrock, Roland
2016-06-01
We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
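The state-expansion technique mentioned above can be illustrated with a toy sketch (ours; the parameters are illustrative, not from the paper): replacing a single state by `m` sub-states traversed in sequence, each left with probability `q` per time step, yields a dwell time that is a sum of `m` geometric variables, i.e. a shifted negative binomial with mean `m / q`, rather than the geometric dwell time forced by a first-order Markov chain.

```python
import numpy as np

def dwell_times_expanded(m=3, q=0.5, n=50_000, seed=0):
    """Dwell times produced by expanding one semi-Markov state into m
    sub-states, each left with probability q per step.  The total dwell
    time is a sum of m geometric(q) sojourns (support {1, 2, ...} each),
    i.e. negative binomial shifted to {m, m+1, ...} with mean m / q."""
    rng = np.random.default_rng(seed)
    return rng.geometric(q, size=(n, m)).sum(axis=1)
```

Because the expanded chain is an ordinary (hidden) Markov chain on the enlarged state space, standard HMM machinery applies unchanged, which is exactly what makes the semi-Markov Arnason-Schwarz model computationally tractable.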
A Bayesian model for binary Markov chains
Directory of Open Access Journals (Sweden)
Belkheir Essebbar
2004-02-01
Full Text Available This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on the Jeffreys' prior which allows for transition probabilities to be correlated. The Bayesian estimator is approximated by means of Monte Carlo Markov chain (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
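For a binary chain the computation becomes conjugate and needs no MCMC if the two rows are given independent Jeffreys Beta(1/2, 1/2) priors. The sketch below is our own simplified illustration of that special case; the paper's actual Jeffreys prior lets the two transition probabilities be correlated, which is what necessitates the MCMC approximation.

```python
import numpy as np

def posterior_transition_samples(series, n_samples=2000, seed=0):
    """Posterior draws of the transition probabilities p01 = P(1|0) and
    p10 = P(0|1) of a binary Markov chain, under independent Jeffreys
    Beta(1/2, 1/2) priors per row (a simplification of the paper's prior)."""
    rng = np.random.default_rng(seed)
    s = np.asarray(series)
    pairs = s[:-1] * 2 + s[1:]               # encode transitions: 00,01,10,11
    n = np.bincount(pairs, minlength=4)      # n[0]=n00, n[1]=n01, n[2]=n10, n[3]=n11
    p01 = rng.beta(n[1] + 0.5, n[0] + 0.5, n_samples)
    p10 = rng.beta(n[2] + 0.5, n[3] + 0.5, n_samples)
    return p01, p10
```

The posterior means approach the empirical transition frequencies as the observed series grows.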
Bayesian analysis of Markov point processes
DEFF Research Database (Denmark)
Berthelsen, Kasper Klitgaard; Møller, Jesper
2006-01-01
Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....
Subharmonic projections for a quantum Markov semigroup
International Nuclear Information System (INIS)
Fagnola, Franco; Rebolledo, Rolando
2002-01-01
This article introduces a concept of subharmonic projections for a quantum Markov semigroup, in view of characterizing the support projection of a stationary state in terms of the semigroup generator. These results, together with those of our previous article [J. Math. Phys. 42, 1296 (2001)], lead to a method for proving the existence of faithful stationary states. This is often crucial in the analysis of ergodic properties of quantum Markov semigroups. The method is illustrated by applications to physical models
Transition Effect Matrices and Quantum Markov Chains
Gudder, Stan
2009-06-01
A transition effect matrix (TEM) is a quantum generalization of a classical stochastic matrix. By employing a TEM we obtain a quantum generalization of a classical Markov chain. We first discuss state and operator dynamics for a quantum Markov chain. We then consider various types of TEMs and vector states. In particular, we study invariant, equilibrium and singular vector states and investigate projective, bistochastic, invertible and unitary TEMs.
Energy Technology Data Exchange (ETDEWEB)
Frank, T D [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)
2008-07-18
We discuss nonlinear Markov processes defined on discrete time points and discrete state spaces using Markov chains. In this context, special attention is paid to the distinction between linear and nonlinear Markov processes. We illustrate that the Chapman-Kolmogorov equation holds for nonlinear Markov processes by a winner-takes-all model for social conformity. (fast track communication)
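A minimal sketch of such a nonlinear (distribution-dependent) update, in the spirit of a winner-takes-all conformity model: the transition operator depends on the current distribution `p`, so the Chapman-Kolmogorov update reads `p' = p T[p]` rather than `p' = p T`. The functional form and the parameter `alpha` below are our own illustration, not the paper's model.

```python
import numpy as np

def winner_takes_all(p0, alpha=2.0, steps=60):
    """Nonlinear Markov chain on a finite opinion space: each agent adopts
    opinion j with probability proportional to the current popularity
    p_j ** alpha (alpha > 1 rewards conformity).  Here T[p] has identical
    rows w / w.sum(), so p' = p @ T[p] = w / w.sum(); the initially most
    popular opinion eventually takes all probability mass."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        w = p ** alpha
        p = w / w.sum()      # nonlinear Chapman-Kolmogorov update
    return p
```

Iterating the update shows the defining feature of nonlinear Markov processes: the one-step transition probabilities themselves evolve with the state distribution.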
International Nuclear Information System (INIS)
Vernon, David; Meier, Alan
2012-01-01
Energy related Principal–Agent (PA) problems cause inefficient combinations of investment, operating costs, and usage behavior. The complex market structure of the trucking industry contributes to split incentives because entities responsible for investments in energy efficiency do not always pay fuel costs and drivers are often not rewarded for fuel-efficient operation. Some contractual relationships exist in the trucking industry that hinder responses to fuel price signals. Up to 91% of total trucking fuel consumption in the U.S. is affected by “usage” PA problems, where the driver does not pay fuel costs and lacks incentive for fuel saving operation. Approximately 23% of trailers are exposed to an “efficiency problem” when owners of rented trailers do not pay fuel costs and therefore have little incentive to invest in efficiency upgrades such as improved trailer aerodynamics and reduced tire rolling resistance. This study shows that PA problems have the potential to significantly increase fuel consumption through avoided investments, insufficient maintenance, and fuel-wasting practices. Further research into the causes and effects of PA problems can shape policies to promote better alignment of costs and benefits, leading to reduced fuel use and carbon emissions. - Highlights: ► We identify and quantify principal agent market failures in the trucking industry. ► Up to 91% of truck fuel consumption is exposed to a usage principal–agent market failure. ► Twenty-three percent of trailers are exposed to an efficiency principal–agent market failure. ► These market failures at least partially insulate key decision makers from fuel price signals.
Markov Processes in Image Processing
Petrov, E. P.; Kharina, N. L.
2018-05-01
Digital images are used as information carriers in many sciences and technologies, and there is a steady push to increase the number of bits per image pixel in order to capture more information. In the paper, some methods of compression and contour detection on the basis of a two-dimensional Markov chain are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods match well-known analogues in efficiency but surpass them in processing speed. An image is separated into binary images that are processed in parallel, so the processing speed does not fall as the number of bits per pixel grows. One more advantage of the methods is low consumption of energy resources: only logical procedures are used and there are no arithmetic operations. The methods can be useful for processing images of any class and purpose in processing systems with limited time and energy resources.
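The bit-plane separation underlying these methods can be sketched in a few lines of NumPy (our own illustration; the paper's actual compression and contour-detection procedures then operate on these binary planes with two-dimensional Markov-chain models):

```python
import numpy as np

def bit_planes(img, bits=8):
    """Split an integer image into its binary bit-plane images, most
    significant plane first; the planes can be processed in parallel
    with purely logical operations."""
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> b) & 1).astype(np.uint8) for b in range(bits - 1, -1, -1)]

def from_bit_planes(planes):
    """Reassemble the image from its bit planes (lossless)."""
    bits = len(planes)
    return sum(p.astype(np.uint16) << (bits - 1 - i) for i, p in enumerate(planes))
```

Round-tripping through the planes is lossless, so any per-plane processing pipeline can be verified against the original image.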
Fitting Hidden Markov Models to Psychological Data
Directory of Open Access Journals (Sweden)
Ingmar Visser
2002-01-01
Full Text Available Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models are rare, however, partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria, and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
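For a discrete HMM, the AIC/BIC comparison described above reduces to penalizing the forward-algorithm log-likelihood by the number of free parameters. A minimal sketch (ours; the function names and parameter values are illustrative):

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Forward-algorithm log-likelihood of a discrete HMM with initial
    distribution pi, transition matrix A and emission matrix B
    (rows = states, columns = symbols), with per-step normalization."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

def aic_bic(ll, n_params, n_obs):
    """AIC = -2 ll + 2 k;  BIC = -2 ll + k log(n)."""
    return -2 * ll + 2 * n_params, -2 * ll + n_params * np.log(n_obs)
```

Candidate models (e.g. different numbers of hidden states, or models with equality constraints, which reduce `n_params`) are then ranked by these penalized likelihoods.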
Demeter, R M; Kristensen, A R; Dijkstra, J; Oude Lansink, A G J M; Meuwissen, M P M; van Arendonk, J A M
2011-12-01
Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines economically optimal insemination and replacement decisions for individual cows and simulates whole-herd results that follow from optimal decisions. The optimization problem was formulated as a multi-level hierarchic Markov process, and a state space model with Bayesian updating was applied to model variation in milk yield. Methodological developments were incorporated in 2 main aspects. First, we introduced an additional level to the model hierarchy to obtain a more tractable and efficient structure. Second, we included a recently developed cattle feed intake model. In addition to methodological developments, new parameters were used in the state space model and other biological functions. Results were generated for Dutch farming conditions, and outcomes were in line with actual herd performance in the Netherlands. Optimal culling decisions were sensitive to variation in milk yield but insensitive to energy requirements for maintenance and feed intake capacity. We anticipate that the model will be applied in research and extension. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Bounding spectral gaps of Markov chains: a novel exact multi-decomposition technique
International Nuclear Information System (INIS)
Destainville, N
2003-01-01
We propose an exact technique to calculate lower bounds of spectral gaps of discrete time reversible Markov chains on finite state sets. Spectral gaps are a common tool for evaluating convergence rates of Markov chains. As an illustration, we successfully use this technique to evaluate the 'absorption time' of the 'Backgammon model', a paradigmatic model for glassy dynamics. We also discuss the application of this technique to the 'contingency table problem', a notoriously difficult problem from probability theory. The interest of this technique is that it connects spectral gaps, which are quantities related to dynamics, with static quantities, calculated at equilibrium
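For chains small enough to diagonalize, such lower bounds can be checked against the exact spectral gap. A sketch (ours): for a reversible transition matrix `P` with stationary distribution `pi`, the matrix `D^{1/2} P D^{-1/2}` (with `D = diag(pi)`) is symmetric and similar to `P`, so its spectrum is real and the gap `1 - lambda_2` can be computed with a symmetric eigensolver.

```python
import numpy as np

def spectral_gap(P):
    """Exact spectral gap 1 - lambda_2 of a reversible transition matrix,
    computed by symmetrizing with the stationary distribution."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    # reversibility makes D^{1/2} P D^{-1/2} symmetric; same eigenvalues as P
    d = np.sqrt(pi)
    S = (d[:, None] * P) / d[None, :]
    lam = np.sort(np.linalg.eigvalsh(S))
    return 1.0 - lam[-2]
```

For a two-state chain with off-diagonal entries `a` and `b`, the eigenvalues are `1` and `1 - a - b`, so the gap is exactly `a + b`, a convenient sanity check.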
Krüger, Jan
2015-01-01
The present thesis deals with the foundations for solving the decision problem of site selection for a feasibility study of gas-fired power plants, from a business perspective and on the basis of realistic, practice-oriented premises. The analysis of different theories and the investigation of site-relevant decision criteria have illustrated the broad range of site-specific factors and criteria that are to be taken into account. On the basis of existing projects, in which site theories were analysed for vari...
Caveats on Bayesian and hidden-Markov models (v2.8)
Schomaker, Lambert
2016-01-01
This paper describes a number of fundamental and practical problems in the application of hidden-Markov models and Bayes when applied to cursive-script recognition. Several problems, however, will have an effect in other application areas. The most fundamental problem is the propagation of error in the product of probabilities. This is a common and pervasive problem which deserves more attention. On the basis of Monte Carlo modeling, tables for the expected relative error are given. It seems ...
Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng
2010-01-01
This paper first applies the sequential cluster method to set up the classification standard of infectious disease incidence state based on the fact that there are many uncertainty characteristics in the incidence course. Then the paper presents a weighted Markov chain, a method which is used to predict the future incidence state. This method assumes the standardized self-coefficients as weights based on the special characteristics of infectious disease incidence being a dependent stochastic variable. It also analyzes the characteristics of infectious diseases incidence via the Markov chain Monte Carlo method to make the long-term benefit of decision optimal. Our method is successfully validated using existing incidents data of infectious diseases in Jiangsu Province. In summation, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632
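The weighted-Markov-chain prediction scheme sketched in the abstract can be illustrated as follows. This is our own simplified reading: the lag-k transition rows are combined with weights proportional to the absolute lag-k autocorrelations of the series, standing in for the paper's standardized self-coefficients.

```python
import numpy as np

def weighted_markov_predict(series, n_states, max_lag=3):
    """Weighted Markov chain prediction of the next incidence state:
    combine the lag-k transition rows conditioned on the state observed
    k steps ago, weighted by the normalized absolute lag-k autocorrelation."""
    s = np.asarray(series)
    x = s - s.mean()
    r = np.array([np.abs(np.sum(x[:-k] * x[k:]) / np.sum(x * x))
                  for k in range(1, max_lag + 1)])
    w = r / r.sum()                      # normalized weights per lag
    probs = np.zeros(n_states)
    for k in range(1, max_lag + 1):
        C = np.zeros((n_states, n_states))
        for a, b in zip(s[:-k], s[k:]):
            C[a, b] += 1                 # lag-k transition counts
        row = C[s[-k]]                   # row conditioned on state k steps ago
        if row.sum() > 0:
            probs += w[k - 1] * row / row.sum()
    return probs                         # probs[j] ~ P(next state = j)
```

The predicted state is the argmax of the combined probability vector.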
Kirsch, Florian
2016-12-01
the remaining two studies less than $30,000 per life-year gained. Nevertheless, if the reporting and selection of data problems are addressed, then Markov models should provide more reliable information for decision makers, because understanding under what circumstances a DMP is cost-effective is an important determinant of efficient resource allocation. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Revisiting Boltzmann learning: parameter estimation in Markov random fields
DEFF Research Database (Denmark)
Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik
1996-01-01
This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization...... and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters, hence, will produce segmentations that closely reproduce...
Memory functions and correlations in additive binary Markov chains
International Nuclear Information System (INIS)
Melnyk, S S; Usatenko, O V; Yampol'skii, V A; Apostolov, S S; Maiselis, Z A
2006-01-01
A theory of additive Markov chains with a long-range memory, proposed earlier in Usatenko et al (2003 Phys. Rev. E 68 061107), is developed and used to describe statistical properties of long-range correlated systems. The convenient characteristics of such systems, memory functions and their relation to the correlation properties of the systems are examined. Various methods for finding the memory function via the correlation function are proposed. The inverse problem (calculation of the correlation function by means of the prescribed memory function) is also solved. This is demonstrated for the analytically solvable model of the system with a step-wise memory function
Memory functions and correlations in additive binary Markov chains
Energy Technology Data Exchange (ETDEWEB)
Melnyk, S S [A Ya Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine); Usatenko, O V [A Ya Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine); Yampol' skii, V A [A Ya Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine); Apostolov, S S [V N Karazin Kharkov National University, 4 Svoboda Sq., Kharkov 61077 (Ukraine); Maiselis, Z A [V N Karazin Kharkov National University, 4 Svoboda Sq., Kharkov 61077 (Ukraine)
2006-11-17
A theory of additive Markov chains with a long-range memory, proposed earlier in Usatenko et al (2003 Phys. Rev. E 68 061107), is developed and used to describe statistical properties of long-range correlated systems. The convenient characteristics of such systems, memory functions and their relation to the correlation properties of the systems are examined. Various methods for finding the memory function via the correlation function are proposed. The inverse problem (calculation of the correlation function by means of the prescribed memory function) is also solved. This is demonstrated for the analytically solvable model of the system with a step-wise memory function.
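The "inverse problem" direction mentioned in the abstract — recovering the memory function from a prescribed correlation function — reduces, for additive Markov chains, to a linear system. A minimal sketch, assuming the standard relation K(r) = Σ_{r'=1..N} F(r') K(r − r') from the Usatenko et al. framework:

```python
import numpy as np

def memory_function(K, N):
    """Recover the memory function F from the correlation function K.

    Solves K(r) = sum_{r'=1..N} F(r') K(r - r') for r = 1..N, using the
    symmetry K(-r) = K(r).  A sketch only; the paper also treats the
    forward problem and step-wise memory functions analytically.
    """
    A = np.array([[K[abs(r - rp)] for rp in range(1, N + 1)]
                  for r in range(1, N + 1)])
    b = np.array([K[r] for r in range(1, N + 1)])
    return np.linalg.solve(A, b)

# A one-step Markov chain has exponentially decaying correlations
# K(r) = mu**r; its memory function should be (mu, 0, 0, ...).
K = [0.5 ** r for r in range(6)]
F = memory_function(K, 3)
```

The exponential-correlation check confirms the one-step chain's memory is concentrated entirely at lag 1.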
On Construction of Quantum Markov Chains on Cayley trees
International Nuclear Information System (INIS)
Accardi, Luigi; Mukhamedov, Farrukh; Souissi, Abdessatar
2016-01-01
The main aim of the present paper is to provide a new construction of quantum Markov chains (QMCs) on Cayley trees of arbitrary order. In that construction, a QMC is defined as a weak limit of finite-volume states with boundary conditions, i.e. the QMC depends on the boundary conditions. Note that this construction is reminiscent of statistical mechanics models with competing interactions on trees. If one considers a one-dimensional tree, then the provided construction reduces to the well-known one, which was studied by the first author. Our construction will allow us to investigate the phase transition problem in a quantum setting. (paper)
Dearfield, Kerry L; Hoelzer, Karin; Kause, Janell R
2014-08-01
Stakeholders in the public health risk analysis community can possess differing opinions about what is meant by "conduct a risk assessment." In reality, there is no one-size-fits-all risk assessment that can address all public health issues, problems, and regulatory needs. Although several international and national organizations (e.g., Codex Alimentarius Commission, Office International des Epizooties, Food and Agricultural Organization, World Health Organization, National Research Council, and European Food Safety Authority) have addressed this issue, confusion remains. The type and complexity of a risk assessment must reflect the risk management needs to appropriately inform a regulatory or nonregulatory decision, i.e., a risk assessment is ideally "fit for purpose" and directly applicable to risk management issues of concern. Frequently however, there is a lack of understanding by those not completely familiar with risk assessment regarding the specific utility of different approaches for assessing public health risks. This unfamiliarity can unduly hamper the acceptance of risk assessment results by risk managers and may reduce the usefulness of such results for guiding public health policies, practices, and operations. Differences in interpretation of risk assessment terminology further complicate effective communication among risk assessors, risk managers, and stakeholders. This article provides an overview of the types of risk assessments commonly conducted, with examples primarily from the food and agricultural sectors, and a discussion of the utility and limitations of these specific approaches for assessing public health risks. Clarification of the risk management issues and corresponding risk assessment design needs during the formative stages of the risk analysis process is a key step for ensuring that the most appropriate assessment of risk is developed and used to guide risk management decisions.
Directory of Open Access Journals (Sweden)
Farxaneh Bahrami
2013-05-01
Full Text Available Aim: The purpose of this study was to examine the effect of training in problem-solving and decision-making skills on reducing addicts' positive attitudes to narcotics. Method: The study used an experimental design, namely a pre- and post-test with a control group. The population included all addicts referring to Sanandaj self-report centers (500 addicts). By random sampling, 60 addicts were selected and completed the attitude-to-narcotics-use questionnaire. Each of the experimental groups received problem-solving and decision-making skills training for ten 90-minute sessions. No training was given to the control group. Results: After training, the two experimental groups had significantly lower levels of positive attitude to narcotics use. No difference was observed between the two experimental groups. Conclusion: The results of this study indicated that training in problem-solving and decision-making skills can reduce addicts' positive attitudes to narcotics.
Zipf exponent of trajectory distribution in the hidden Markov model
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
Zipf exponent of trajectory distribution in the hidden Markov model
International Nuclear Information System (INIS)
Bochkarev, V V; Lerner, E Yu
2014-01-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
Performance Modeling of Communication Networks with Markov Chains
Mo, Jeonghoon
2010-01-01
This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMC) and continuous time Markov chains (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
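The balance-equation technique the abstract refers to can be sketched in a few lines: the steady-state distribution π of a DTMC satisfies πP = π together with the normalization Σπ = 1. A textbook sketch, not code from the book:

```python
import numpy as np

def stationary_distribution(P):
    """Steady-state distribution of a DTMC: solve pi P = pi, sum(pi) = 1.

    Stacks the balance equations (P^T - I) pi = 0 with the
    normalization row and solves in the least-squares sense.
    """
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # balance + normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # a toy 2-state chain
pi = stationary_distribution(P)
```

For this 2-state chain the balance equation 0.1 π₀ = 0.5 π₁ with π₀ + π₁ = 1 gives π = (5/6, 1/6).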
Directory of Open Access Journals (Sweden)
Ming Chen
2015-11-01
Full Text Available In multi-criteria group decision-making (MCGDM), one of the most important problems is to determine the weights of criteria and experts. This paper presents two Min-Max models to optimize the point estimates of the weights. Since each expert generally holds a uniform view of the importance (weighted value) of each criterion when ranking the alternatives, the objective function in the first model is to minimize the maximum variation between the actual score vector and the ideal one over all the alternatives, such that the optimal weights of criteria are consistent in ranking all the alternatives for the same expert. The second model is designed to optimize the weights of experts such that the obtained overall evaluation for each alternative incorporates the perspectives of as many experts as possible. Thus, the objective function in the second model is to minimize the maximum variation between the actual vector of evaluations and the ideal one over all the experts, such that the optimal weights reduce the differences among the experts in evaluating the same alternative. For the constructed Min-Max models, another focus of this paper is the development of an efficient algorithm for the optimal weights. Some applications are employed to show the significance of the models and algorithm. From the numerical results, it is clear that the developed Min-Max models solve MCGDM problems, including those with incomplete score matrices, more effectively than the methods available in the literature. Specifically, with the proposed method, (1) the evaluation uniformity of each expert on the same criteria is guaranteed; (2) the overall evaluation for each alternative incorporates the judgements of as many experts as possible; (3) the highest discrimination degree of the alternatives is obtained.
Coding with partially hidden Markov models
DEFF Research Database (Denmark)
Forchhammer, Søren; Rissanen, J.
1995-01-01
Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general...... 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given....... The PHMM structure and the conditions of the convergence proof allows for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme is given. The results indicate that the PHMM can adapt...
Markov and mixed models with applications
DEFF Research Database (Denmark)
Mortensen, Stig Bousgaard
This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug are a broader area since drugs may affect...... for non-parametric estimation of Markov processes are proposed to give a detailed description of the sleep process during the night. Statistically, the Markov models considered for sleep states are closely related to the PK models based on SDEs, as both models share the Markov property. When the models
Consistent Estimation of Partition Markov Models
Directory of Open Access Journals (Sweden)
Jesús E. García
2017-04-01
Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? To answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
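A crude illustration of the partition idea — states in one part share a transition law — is to group estimated transition rows that are (near-)identical. This greedy total-variation threshold is a hypothetical stand-in, not the paper's consistent model-selection criterion:

```python
import numpy as np

def group_states(P_hat, tol=0.05):
    """Greedily group states whose estimated transition rows agree.

    Two states land in the same part when the total-variation distance
    between their rows is below tol.  Illustrative only; the paper uses
    a consistent penalized criterion instead of a fixed threshold.
    """
    parts = []
    for i, row in enumerate(P_hat):
        for part in parts:
            if 0.5 * np.abs(row - P_hat[part[0]]).sum() < tol:
                part.append(i)
                break
        else:
            parts.append([i])
    return parts

P_hat = np.array([[0.5, 0.5],
                  [0.5, 0.5],
                  [0.9, 0.1]])
parts = group_states(P_hat)   # states 0 and 1 share a transition law
```

Merging rows this way reduces the parameter count from (number of states) rows to (number of parts) rows, which is exactly the economy the model is after.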
Learning Representation and Control in Markov Decision Processes
2013-10-21
Intelligent Sensing in Dynamic Environments Using Markov Decision Process
Nanayakkara, Thrishantha; Halgamuge, Malka N.; Sridhar, Prasanna; Madni, Asad M.
2011-01-01
In a network of low-powered wireless sensors, it is essential to capture as many environmental events as possible while still preserving the battery life of the sensor node. This paper focuses on a real-time learning algorithm to extend the lifetime of a sensor node to sense and transmit environmental events. A common method that is generally adopted in ad-hoc sensor networks is to periodically put the sensor nodes to sleep. The purpose of the learning algorithm is to couple the sensor's sleeping behavior to the natural statistics of the environment so that it remains in optimal harmony with changes in the environment: the sensor can sleep when the environment is steady and stay awake when it is turbulent. This paper presents theoretical and experimental validation of a reward-based learning algorithm that can be implemented on an embedded sensor. The key contribution of the proposed approach is the design and implementation of a reward function that satisfies a trade-off between the above two mutually contradicting objectives, and a linear critic function to approximate the discounted sum of future rewards in order to perform policy learning. PMID:22346624
Intelligent Sensing in Dynamic Environments Using Markov Decision Process
Directory of Open Access Journals (Sweden)
Asad M. Madni
2011-01-01
Full Text Available In a network of low-powered wireless sensors, it is essential to capture as many environmental events as possible while still preserving the battery life of the sensor node. This paper focuses on a real-time learning algorithm to extend the lifetime of a sensor node to sense and transmit environmental events. A common method that is generally adopted in ad-hoc sensor networks is to periodically put the sensor nodes to sleep. The purpose of the learning algorithm is to couple the sensor's sleeping behavior to the natural statistics of the environment so that it remains in optimal harmony with changes in the environment: the sensor can sleep when the environment is steady and stay awake when it is turbulent. This paper presents theoretical and experimental validation of a reward-based learning algorithm that can be implemented on an embedded sensor. The key contribution of the proposed approach is the design and implementation of a reward function that satisfies a trade-off between the above two mutually contradicting objectives, and a linear critic function to approximate the discounted sum of future rewards in order to perform policy learning.
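The "linear critic approximating the discounted sum of future rewards" is the standard TD(0) update for a linear value function. A generic sketch under that reading — the paper's actual reward function and features are specific to the sensor-sleeping problem and are not reproduced here:

```python
import numpy as np

def td0_update(w, phi_t, phi_next, reward, alpha=0.1, gamma=0.95):
    """One TD(0) step for a linear critic V(s) = w . phi(s).

    Moves w toward satisfying the Bellman relation
    V(s_t) ~ reward + gamma * V(s_{t+1}).
    """
    td_error = reward + gamma * (w @ phi_next) - (w @ phi_t)
    return w + alpha * td_error * phi_t

# one illustrative update from a zero-initialized critic
w = np.zeros(2)
w = td0_update(w, np.array([1.0, 0.0]), np.array([0.0, 1.0]), reward=1.0)
```

With w = 0, the TD error equals the reward, so only the active feature's weight moves (here by alpha * reward).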
Inhomogeneous Markov Models for Describing Driving Patterns
DEFF Research Database (Denmark)
Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel
2017-01-01
. Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines....
Inhomogeneous Markov Models for Describing Driving Patterns
DEFF Research Database (Denmark)
Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel
. Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip and is justified due to the uncertainty associated with the use of the vehicle. The model is fitted to data...... collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters. The number of parameters in the proposed model is reduced using B-splines....
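The time-varying start/end probabilities described above amount to estimating a separate transition matrix per time-of-day slot. A minimal sketch, assuming a two-state (0 = parked, 1 = driving) chain with one observation per step; the papers additionally smooth these parameters with B-splines, which is omitted here:

```python
import numpy as np

def fit_diurnal_markov(states, periods_per_day):
    """Fit time-of-day-dependent transition probabilities for a
    two-state driving chain (0 = parked, 1 = driving).

    Returns an array P of shape (periods_per_day, 2, 2) where
    P[t, a, b] estimates Pr(state b at step t+1 | state a at slot t).
    """
    counts = np.zeros((periods_per_day, 2, 2))
    for t, (a, b) in enumerate(zip(states[:-1], states[1:])):
        counts[t % periods_per_day, a, b] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    # unobserved (slot, state) rows default to an uninformative 0.5/0.5
    return np.divide(counts, totals, out=np.full_like(counts, 0.5),
                     where=totals > 0)

P = fit_diurnal_markov([0, 1, 0, 1, 0, 1], periods_per_day=2)
```

In the toy sequence every trip starts in slot 0 and ends in slot 1, so those two estimated probabilities come out as 1; rows never observed fall back to the 0.5 default.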
Detecting Structural Breaks using Hidden Markov Models
DEFF Research Database (Denmark)
Ntantamis, Christos
Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another....... The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), that permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes...
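Since breaks are defined as the points where the hidden chain switches state, a decoded state path immediately yields the break locations. The sketch below uses standard Viterbi decoding for illustration — the paper itself estimates the HMM with an ICE-GEMI variant, not Viterbi:

```python
import numpy as np

def viterbi_breaks(obs_loglik, logA, logpi):
    """Most likely hidden-state path, plus the implied structural breaks.

    obs_loglik[t, s] is the log-likelihood of observation t in state s;
    logA and logpi are log transition and initial probabilities.
    Breaks are the time points where the decoded state switches.
    """
    T, S = obs_loglik.shape
    delta = logpi + obs_loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA        # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    path.reverse()
    breaks = [t for t in range(1, T) if path[t] != path[t - 1]]
    return path, breaks
```

With sticky transitions and observations that clearly favor one regime per half of the sample, the decoder places a single break at the regime change.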
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
Markov processes an introduction for physical scientists
Gillespie, Daniel T
1991-01-01
Markov process theory is basically an extension of ordinary calculus to accommodate functions whose time evolutions are not entirely deterministic. It is a subject that is becoming increasingly important for many fields of science. This book develops the single-variable theory of both continuous and jump Markov processes in a way that should appeal especially to physicists and chemists at the senior and graduate level. Key features: a self-contained, pragmatic exposition of the needed elements of random variable theory; logically integrated derivations of the Chapman-Kolmogorov e
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets
Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola
2017-11-01
This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset is described as a function of a homogeneous discrete time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated and the mean time to default is expressed in closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model to real data, and their analysis is discussed.
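For a generic absorbing chain with default as the absorbing state, the mean time to default follows the classic fundamental-matrix identity t = (I − Q)⁻¹ 1, where Q is the transition sub-matrix among the transient states. This is only the textbook computation, not the paper's closed form for its specific catastrophe model:

```python
import numpy as np

def mean_time_to_default(Q):
    """Mean absorption time of a DTMC whose absorbing state is default.

    Q holds the one-step transition probabilities among the transient
    (non-default) states; row deficits are the per-step default
    probabilities.  Solves (I - Q) t = 1.
    """
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

Q = np.array([[0.8, 0.1],
              [0.2, 0.6]])   # default probabilities 0.1 and 0.2 per step
t = mean_time_to_default(Q)
```

Solving the 2x2 system by hand gives t = (25/3, 20/3) steps from the two transient states, which the code reproduces.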
A Cost-Effective Smoothed Multigrid with Modified Neighborhood-Based Aggregation for Markov Chains
Directory of Open Access Journals (Sweden)
Zhao-Li Shen
2015-01-01
Full Text Available The smoothed aggregation multigrid method is considered for computing stationary distributions of Markov chains. A judgement that determines whether to implement the whole aggregation procedure is proposed. Through this strategy, a large amount of time in the aggregation procedure is saved without affecting the convergence behavior. Besides this, we explain the shortcomings of the Neighborhood-Based aggregation commonly used in multigrid methods. Then a modified version is presented to remedy and improve it. Numerical experiments on some typical Markov chain problems are reported to illustrate the performance of these methods.
2nd International Workshop on the Numerical Solution of Markov Chains
1995-01-01
Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16--18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratic convergent procedures for matrix geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. An authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.
Dissipativity-Based Reliable Control for Fuzzy Markov Jump Systems With Actuator Faults.
Tao, Jie; Lu, Renquan; Shi, Peng; Su, Hongye; Wu, Zheng-Guang
2017-09-01
This paper is concerned with the problem of reliable dissipative control for Takagi-Sugeno fuzzy systems with Markov jumping parameters. Considering the influence of actuator faults, a sufficient condition is developed to ensure that the resultant closed-loop system is stochastically stable and strictly (Q, S, R)-dissipative, based on a relaxed approach in which mode-dependent and fuzzy-basis-dependent Lyapunov functions are employed. Then a reliable dissipative control for fuzzy Markov jump systems is designed, with a sufficient condition proposed for the existence of a controller guaranteeing stability and dissipativity. The effectiveness and potential of the obtained design method are verified by two simulation examples.
Hidden Markov latent variable models with multivariate longitudinal data.
Song, Xinyuan; Xia, Yemao; Zhu, Hongtu
2017-03-01
Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.
Large deviations for Markov chains in the positive quadrant
Energy Technology Data Exchange (ETDEWEB)
Borovkov, A A; Mogul' skii, A A [S.L. Sobolev Institute for Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)
2001-10-31
The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,..., X(y,0)=y, in the positive quadrant. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=P(X(y,1) ∈ A): for some N ≥ 0 the measure P(y,dx) depends only on x_2, y_2, and x_1 - y_1 in the domain x_1 > N, y_1 > N, and only on x_1, y_1, and x_2 - y_2 in the domain x_2 > N, y_2 > N. For such chains the asymptotic behaviour is found for a fixed set B as s → ∞, |x| → ∞, and n → ∞. Some other conditions on the growth of parameters are also considered, for example, |x-y| → ∞, |y| → ∞. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities P(X(0,n) ∈ x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to the mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as applied issues, for such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects. The present paper is an attempt to find the extent to which an asymptotic analysis is possible for Markov chains of this type in their general
Corson, Alan; And Others
Presented are key issues to be addressed by state, regional, and local governments and agencies in creating effective hazardous waste management programs. Eight chapters broadly frame the topics which state-level decision makers should consider. These chapters include: (1) definition of hazardous waste; (2) problem definition and recognition; (3)…
Polka, Walter S.; Litchka, Peter R.; Calzi, Frank F.; Denig, Stephen J.; Mete, Rosina E.
2014-01-01
The major focus of this paper is a gender-based analysis of school superintendent decision-making and problem-solving as well as an investigation of contemporary leadership dilemmas. The findings are based on responses from 258 superintendents of K-12 school districts in Delaware, Maryland, New Jersey, New York, and Pennsylvania collected over a…
B. Kaynar; S.I. Birbil (Ilker); J.B.G. Frenk (Hans)
2007-01-01
In this paper portfolio problems with linear loss functions and multivariate elliptically distributed returns are studied. We consider two risk measures, Value-at-Risk and Conditional-Value-at-Risk, and two types of decision makers, risk neutral and risk averse. For Value-at-Risk, we show
Prediction of Annual Rainfall Pattern Using Hidden Markov Model ...
African Journals Online (AJOL)
ADOWIE PERE
Hidden Markov model is very influential in stochastic world because of its ... the earth from the clouds. The usual ... Rainfall modelling and ... Markov Models have become popular tools ... environment sciences, University of Jos, Plateau State.
Extending Markov Automata with State and Action Rewards
Guck, Dennis; Timmer, Mark; Blom, Stefan; Bertrand, N.; Bortolussi, L.
This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are
Uncovering and testing the fuzzy clusters based on lumped Markov chain in complex network.
Jing, Fan; Jianbin, Xie; Jinlong, Wang; Jinshuai, Qu
2013-01-01
Identifying clusters, namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. By means of a lumped Markov chain model of a random walker, we propose two novel ways of inferring the lumped Markov transition matrix. Furthermore, some useful results are proposed based on the analysis of the properties of the lumped Markov process. To find the best partition of complex networks, a novel framework including two algorithms for network partition based on the optimal lumped Markovian dynamics is derived to solve this problem. The algorithms are constructed to minimize the objective function under this framework. It is demonstrated by the simulation experiments that our algorithms can efficiently determine the probabilities with which a node belongs to different clusters during the learning process and naturally support fuzzy partition. Moreover, they are successfully applied to a real-world network: the social interactions between members of a karate club.
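The lumped transition matrix the abstract refers to can be written down directly for a hard partition: aggregate the random walker's flow between clusters, weighted by the stationary distribution. A sketch of that construction only — the paper's inference and fuzzy-partition algorithms are not reproduced:

```python
import numpy as np

def lumped_transition_matrix(A, labels):
    """Lump a random walk on an undirected graph over a hard partition.

    P_hat[c, d] = sum_{i in c} (pi_i / pi_c) * sum_{j in d} P[i, j],
    where P is the random-walk transition matrix and pi its stationary
    distribution (degree-proportional for undirected graphs).
    """
    P = A / A.sum(axis=1, keepdims=True)      # random-walk transitions
    pi = A.sum(axis=1) / A.sum()              # stationary distribution
    names = sorted(set(labels))
    U = np.zeros((len(labels), len(names)))   # cluster indicator matrix
    for i, c in enumerate(labels):
        U[i, names.index(c)] = 1.0
    piC = pi @ U                              # cluster masses
    return (U.T * pi) @ P @ U / piC[:, None]

A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], float)          # two disconnected edges
P_hat = lumped_transition_matrix(A, [0, 0, 1, 1])
```

Two disconnected components lump to the identity matrix: the walker never leaves its cluster, which is the ideal signature of a strong partition.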
Projected metastable Markov processes and their estimation with observable operator models
International Nuclear Information System (INIS)
Wu, Hao; Prinz, Jan-Hendrik; Noé, Frank
2015-01-01
The determination of kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem — both in simulation, where the optimal reaction coordinate(s) are generally unknown and difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on the OOM learning
Counting of oligomers in sequences generated by markov chains for DNA motif discovery.
Shan, Gao; Zheng, Wei-Mou
2009-02-01
By means of the technique of the imbedded Markov chain, an efficient algorithm is proposed to exactly calculate the first and second moments of word counts and the probability for a word to occur at least once in random texts generated by a Markov chain. A generating function is introduced directly from the imbedded Markov chain to derive asymptotic approximations for the problem. Two Z-scores, one based on the number of sequences with hits and the other on the total number of word hits in a set of sequences, are examined for the discovery of motifs in a set of promoter sequences extracted from the A. thaliana genome. Source code is available at http://www.itp.ac.cn/zheng/oligo.c.
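The first moment is the simplest of these quantities to reproduce. A minimal sketch (our own, with hypothetical names, assuming a stationary first-order Markov chain) of the expected count of a word in a random text:

```python
import numpy as np

def expected_word_count(word, P, pi, alphabet, n):
    """E[count of `word`] in a length-n text from a stationary Markov chain.

    P: transition matrix over `alphabet`; pi: stationary distribution.
    The word occurs at any of n - len(word) + 1 starting positions, each
    with probability pi[w1] * prod(P[w_i, w_{i+1}]).
    """
    idx = {a: i for i, a in enumerate(alphabet)}
    p = pi[idx[word[0]]]
    for a, b in zip(word, word[1:]):
        p *= P[idx[a], idx[b]]
    return (n - len(word) + 1) * p

# Sanity check: a uniform i.i.d. chain is a special Markov chain.
P = np.full((4, 4), 0.25)
pi = np.full(4, 0.25)
print(expected_word_count("TATA", P, pi, "ACGT", 1000))  # 997 * 0.25**4
```

Second moments and the at-least-once probability require the imbedded-chain machinery of the paper, because overlapping occurrences of a word are not independent.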
Christie, Vanessa L.; Landess, David J.
2012-01-01
In the international arena, decision makers are often swayed away from fact-based analysis by their own individual cultural and political bias. Modeling and Simulation-based training can raise awareness of individual predisposition and improve the quality of decision making by focusing solely on fact vice perception. This improved decision making methodology will support the multinational collaborative efforts of military and civilian leaders to solve challenges more effectively. The intent of this experimental research is to create a framework that allows decision makers to "come to the table" with the latest and most significant facts necessary to determine an appropriate solution for any given contingency.
DNA motif alignment by evolving a population of Markov chains.
Bi, Chengpeng
2009-01-30
Deciphering cis-regulatory elements, or de novo motif-finding in genomes, still remains elusive although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, MCMC-based motif samplers still suffer from local maxima, like EM. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often run independently many times, but without information exchange between the different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). Each chain is progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method running multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, is also devised for comparison with its PMC counterpart. Experimental studies demonstrate that performance is improved when pooled information is used to run a population of motif samplers. The new PMC algorithm improved convergence and outperformed other popular algorithms tested on simulated and biological motif sequences.
Intention-Aware Autonomous Driving Decision-Making in an Uncontrolled Intersection
Directory of Open Access Journals (Sweden)
Weilong Song
2016-01-01
Full Text Available Autonomous vehicles need to perform socially accepted behaviors in complex urban scenarios that include human-driven vehicles with uncertain intentions. This leads to many difficult decision-making problems, such as deciding on a lane-change maneuver and generating policies to pass through intersections. In this paper, we propose an intention-aware decision-making algorithm to solve this challenging problem in an uncontrolled intersection scenario. In order to consider uncertain intentions, we first develop a continuous hidden Markov model to predict both the high-level motion intention (e.g., turn right, turn left, go straight) and the low-level interaction intention (e.g., yield status) for related vehicles. Then a partially observable Markov decision process (POMDP) is built to model the general decision-making framework. Due to the difficulty of solving POMDPs, we use proper assumptions and approximations to simplify the problem. A human-like policy generation mechanism is used to generate the possible candidates. A future-motion model for human-driven vehicles is applied in the state-transition process, and the intentions are updated during each prediction time step. The reward function, which considers driving safety, traffic laws, time efficiency, and so forth, is designed to calculate the optimal policy. Finally, our method is evaluated in simulation with PreScan software and a driving simulator. The experiments show that our method can lead an autonomous vehicle to pass through uncontrolled intersections safely and efficiently.
Perturbation theory for Markov chains via Wasserstein distance
Rudolf, Daniel; Schweizer, Nikolaus
2017-01-01
Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. We prove powerful and flexible bounds on the distance of the nth step distributions of two Markov chains
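A small numerical illustration (ours, not from the paper) of the phenomenon such bounds control: for two chains whose transition matrices differ slightly, the total variation distance between their n-step distributions stays small.

```python
import numpy as np

# Two nearby 2-state transition matrices (toy example; rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
Q = P + np.array([[-0.01, 0.01],
                  [ 0.01, -0.01]])   # one-step perturbation of size 0.01

def tv(a, b):
    """Total variation distance between two distributions."""
    return 0.5 * np.abs(a - b).sum()

# Propagate a common initial distribution through both chains.
d_mu = np.array([1.0, 0.0])
d_nu = d_mu.copy()
for _ in range(50):
    d_mu, d_nu = d_mu @ P, d_nu @ Q
print(tv(d_mu, d_nu))                # small: the 50-step distributions are close
```

For uniformly ergodic chains, Wasserstein-style perturbation bounds make this quantitative: the n-step gap is controlled by the one-step perturbation divided by a contraction rate, uniformly in n.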
Recursive recovery of Markov transition probabilities from boundary value data
Energy Technology Data Exchange (ETDEWEB)
Patch, Sarah Kathyrn [Univ. of California, Berkeley, CA (United States)
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.
Quantum Enhanced Inference in Markov Logic Networks.
Wittek, Peter; Gogolin, Christian
2017-04-19
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
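For context, this is the classical baseline the quantum protocols above aim to accelerate: Gibbs sampling on a (here, hand-built and tiny) Markov network. The toy model and all names are our own illustration, not from the paper.

```python
import math
import random

# Tiny pairwise Markov network: binary variables x0 - x1 - x2 in a chain,
# with an edge factor exp(w) whenever the two endpoints agree.
random.seed(0)
edges = [(0, 1), (1, 2)]
w = 1.0

def conditional_p1(state, i):
    """P(x_i = 1 | all other variables) under the agreement factors."""
    score = {0: 0.0, 1: 0.0}
    for a, b in edges:
        if i in (a, b):
            j = b if i == a else a
            for v in (0, 1):
                score[v] += w * (v == state[j])
    e0, e1 = math.exp(score[0]), math.exp(score[1])
    return e1 / (e0 + e1)

# Gibbs sampling: resample one randomly chosen variable per step.
state = [0, 0, 0]
agree = 0
n_samples = 20000
for _ in range(n_samples):
    i = random.randrange(3)
    state[i] = 1 if random.random() < conditional_p1(state, i) else 0
    agree += (state[0] == state[2])
print(agree / n_samples)   # estimates P(x0 == x2); exact value is about 0.61
```

In an MLN the network above would be *generated* by grounding first-order formulas, and the symmetric structure of the grounding is exactly what lifted and quantum-assisted samplers try to exploit.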
Markov Random Fields on Triangle Meshes
DEFF Research Database (Denmark)
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process label...
A Martingale Decomposition of Discrete Markov Chains
DEFF Research Database (Denmark)
Hansen, Peter Reinhard
We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful fo...
Renewal characterization of Markov modulated Poisson processes
Directory of Open Access Journals (Sweden)
Marcel F. Neuts
1989-01-01
Full Text Available A Markov Modulated Poisson Process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λi whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P = {i : λi > 0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
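An MMPP is straightforward to simulate, which makes the definition concrete. A hedged sketch (our own parameter choices, not the paper's) for a two-state modulating chain J(t):

```python
import random

random.seed(1)
rates = [0.5, 5.0]        # Poisson intensity lambda_i while J(t) is in state i
q = [[-1.0, 1.0],         # generator of the two-state modulating chain J(t)
     [ 2.0, -2.0]]

def simulate_mmpp(T):
    """Return event times of the MMPP on [0, T]."""
    t, j, events = 0.0, 0, []
    while t < T:
        # Exponential sojourn of J(t) in its current state j.
        sojourn = random.expovariate(-q[j][j])
        end = min(t + sojourn, T)
        # During [t, end) events arrive as a Poisson process with rate_j.
        s = t
        while True:
            s += random.expovariate(rates[j])
            if s >= end:
                break
            events.append(s)
        t = end
        j = 1 - j          # with two states the chain simply alternates
    return events

ev = simulate_mmpp(1000.0)
print(len(ev))             # roughly T * sum(pi_i * lambda_i) events
```

With this generator the modulating chain spends 2/3 of its time in state 0 and 1/3 in state 1, so the long-run event rate is about 2 per unit time. The bursty alternation between a slow and a fast regime is what makes a generic MMPP fail to be a renewal process, motivating the SR/WR classification above.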
Evaluation of Usability Utilizing Markov Models
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
Bayesian analysis for reversible Markov chains
Diaconis, P.; Rolles, S.W.W.
2006-01-01
We introduce a natural conjugate prior for the transition matrix of a reversible Markov chain. This allows estimation and testing. The prior arises from random walk with reinforcement in the same way the Dirichlet prior arises from Pólya’s urn. We give closed form normalizing constants, a simple
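For comparison, the standard conjugate analysis for a *general* (unconstrained) Markov chain places an independent Dirichlet prior on each row of the transition matrix; the paper's contribution is the analogous construction under the reversibility constraint. A sketch of the unconstrained case, with our own toy counts:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.ones((2, 2))            # Dirichlet(1, 1) prior on each row

# Observed transitions: counts[i, j] = number of i -> j moves in the data.
counts = np.array([[8, 2],
                   [3, 7]])

# Conjugacy: the posterior is again row-wise Dirichlet; just add the counts.
posterior = alpha + counts
post_mean = posterior / posterior.sum(axis=1, keepdims=True)
print(post_mean)                   # posterior mean transition matrix

sample_row0 = rng.dirichlet(posterior[0])   # one posterior draw of row 0
```

The reversible case is harder because detailed balance couples the rows, which is why the reinforced-random-walk construction of the paper is needed.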
Bisimulation and Simulation Relations for Markov Chains
Baier, Christel; Hermanns, H.; Katoen, Joost P.; Wolf, Verena; Aceto, L.; Gordon, A.
2006-01-01
Formal notions of bisimulation and simulation relation play a central role for any kind of process algebra. This short paper sketches the main concepts for bisimulation and simulation relations for probabilistic systems, modelled by discrete- or continuous-time Markov chains.